As befits a writer whose breakout work, Sapiens, was a history of the entire human race, Yuval Noah Harari is a master of the sententious generalisation. “Human life,” he writes here, “is a balancing act between endeavouring to improve ourselves and accepting who we are.” Is it? Is that all it is? Elsewhere, one might be surprised to read: “The ancient Romans had a clear understanding of what democracy means.” No doubt the Romans would have been happy to hear that they would, 2,000 years in the future, be given a gold star for their comprehension of eternally stable political concepts by Yuval Noah Harari.
In his 2018 book, 21 Lessons for the 21st Century, Harari wrote: “Liberals don’t understand how history deviated from its preordained course, and they lack an alternative prism through which to interpret reality. Disorientation causes them to think in apocalyptic terms.” It seems that, in the intervening years, Harari has himself become a liberal, because this book is about the apocalyptic scenario of how the “computer network” – everything from digital surveillance capitalism to social feed algorithms and AI – might destroy civilisation and usher in “the end of human history”. Take that, Fukuyama.
Like Malcolm Gladwell, Harari has a passionate need to be seen to overturn received wisdom. Many people think, for example, that the printing press made a crucial contribution to the emergence of modern science. Not so, insists Harari: after all, printing equally enabled the dissemination of fake news, such as books about witches, and so Gutenberg is partly to blame for the gruesome torture and murder of those accused of witchcraft across Europe. Silly as that might sound, it also misses the fundamental point: because the scientific method is accretional, modern science could only come into being once the results of previous experimenters were widely available to those who followed them. Only via the ladder of print could early-modern scientists stand on the shoulders of giants.
But perhaps I have fallen prey to what Harari dubs “the naive view of information”, which subtly changes throughout the book as rhetorical circumstances demand until it is something of a straw Frankenstein’s monster. The naive view of information encompasses the idea that “[it] is essentially a good thing, and the more we have of it, the better”, which lots of people believe and which is hard to argue with, but it also supposedly holds that sufficient information leads ineluctably to political wisdom and that the free flow of information inevitably leads to truth, propositions that almost no one believes. “Knowing that E=mc² usually doesn’t resolve political disagreements,” Harari says, to no one.
We know this already from the history of dictatorial and totalitarian governments and their attempts at information gathering and control, a history from which Harari draws dozens of colourfully chilling anecdotes to persuade the reader of the falsity of a patently ridiculous view.
What, then, can modern computers do that should worry us so much? Harari is peculiarly credulous about the capabilities of what is now marketed as “AI”. No one has yet seen a chatbot create new ideas, as Harari thinks they can, let alone generate art that is not simply a probabilistic recombination of patterns in its training data. (“Computers can make cultural innovations,” he writes in one of many passages beside which I scrawled “citation needed”.) At some point in the future, meanwhile, what Harari calls “our new AI overlords” are apparently sure to acquire scary godlike powers. In his crystal ball, an AI overlord could decide to engineer a new pandemic virus, or a new kind of money, while flooding the world’s information networks with fake news or incitements to riot.
The descent prophesied herein of a “Silicon Curtain”, meanwhile, is not a problem of AI per se but of geopolitics: extrapolating from the Great Firewall of China, which prevents most Chinese citizens from accessing sites such as Google and Wikipedia, Harari supposes that, in time, Chinese and US computer systems might be completely prevented from interoperating or even communicating with one another, so “ending the idea of a single shared human reality”. Worrying if true. Vladimir Putin’s violent irredentism in Ukraine is, Harari notes, in part inspired by his belief in a partisan version of Russian history, which shows the danger of a lack of shared myths. This, however, is a feature of almost all wars since the dawn of time, so I am not sure we can blame the computers for it.
So what can we do to save human civilisation and our shared reality? Simple, concludes Harari: subject algorithms and AI to strong official regulation, and focus on “building institutions with strong self-correcting mechanisms”. So, carry on being liberal democracies? It’s a wan sort of conclusion to a book that has struck such an end-of-days tone.
The annoying thing is that Nexus also contains many too-brief but fascinating discussions on subjects ranging from the process by which the books that comprise the modern Bible were canonised, to the role of Facebook’s “news feed” in fomenting the massacres in Myanmar of 2016-17, to the facial recognition system used by Iran to detect unveiled women.
There are a brilliant few pages, in particular, on the plight of Jews in fascist Romania, Harari’s own grandfather among them, who in 1938 were forced to provide papers proving their right to citizenship, papers that in many cases had been destroyed by municipal authorities. When Harari is not in his mode of oracular pontification, he can be a superb narrative writer. But oracular pontification is what, we must assume, his readers want.