21 November 2024
The Coming Wave
Technology, Power and the 21st Century’s Greatest Dilemma
Mustafa Suleyman and Michael Bhaskar
2023, Penguin, 332 pages
ISBN 9781847927484
Reviewer: Kevin Gardiner, Rothschild & Co

“In the annals of human history, there are moments that stand out as turning points, where the fate of humanity hangs in the balance… And now we stand at the brink of another such moment as we face the rise of a coming wave of technology that includes both advanced AI and biotechnology… we are faced with a choice – a choice between a future of unparalleled possibility and a future of unimaginable peril…”
The authors, a hugely successful technology entrepreneur and his co-writer, tell us that the book’s prologue, which provides this extract, was written by an AI. I was relieved.
It’s mostly a style thing. You have to admire Suleyman’s business achievements, and I think there is valuable and important content in his book. For example: a refreshing acceptance that growth remains possible, and a rejection of the naïve Malthusian worldview; a recognition of the increasingly intangible nature of output; an awareness of the role to be played by technology in tackling our energy challenge; and (less positively) a reminder of just how fragile many of today’s institutions are in the face of rapid innovation and cyber insecurity.
Unfortunately, the narrative is repetitive, and the prose manages to be both platitudinous and sensationalist (just like the AI’s). The book has no tables or charts, and little analysis or reasoning. Instead, it contains countless unsubstantiated assertions, many of which are virtually meaningless (for example: “The coming wave represents the greatest economic prize in history. It is a consumer cornucopia and potential profit centre without parallel” (p134)). Suleyman cites forecasts from management consultants without irony.
The book’s core message, as well as its tone, is conveyed by the extract above. Suleyman believes that a combination of AI and biotechnology poses an urgent and potentially existential challenge. He sees the pace of innovation accelerating, and humans eventually losing control unless they act decisively, now.
Accelerating change is a cliché, and speed in this context is of course impossible to measure – where are the personal flying machines (or robot plumbers)? Meanwhile, Suleyman may not realise that his undoubted industry expertise might lead him astray in a wider context: his thinking is not as joined-up as it could be.
For example, he notes the many jobs at risk from technology (“The number of people who can get a PhD in machine learning will remain tiny in comparison to the scale of layoffs”, p180), and later discusses the Luddites (pp281-286). But he also notes how technology is fostering public dematerialisation and the growth of (employment-intensive) services (“Meeting demand for cheap and seamless services usually requires scale (massive up-front investment in chips, people…)”, (p190)).
A little macro awareness might have encouraged him to question what sort of world we will be living in anyway if robots are doing all the work…
I found the sections dealing with the need to manage innovation more compelling, and while I don’t see technology sweeping all before it in the way that he does, his recommendations for containment make sense. New technology can pose public goods problems, and there is a case for some (ideally international) government regulation. I like his idea of a global AI non-proliferation pact but can’t see one arriving soon.
Finally, Suleyman is of course aware that today’s AI is not AGI (Artificial General Intelligence). He sidesteps the debate about that, and about consciousness:
“For the time being, it doesn’t matter whether the system is self-aware, or has understanding, or has humanlike intelligence. All that matters is what the system can do.” (p75)
So what can it do? Today’s Artificial Intelligence identifies, creates and modifies patterns in big data very quickly. It can work with numbers and now natural language (the source of current excitement), and with digital and analogue inputs. The output might be a medical diagnosis, a gene sequence, a story, code, a literature review, music, a pleasing image, the operation of a machine. A user interacting with it might believe they are dealing with another person, and our AI might thereby pass the Turing test for machine intelligence.
But today’s AI still needs to be told the rules of the game, what we might consider useful patterns to be. It may calculate quickly, but it needs to know what sort of calculations to do. Meanwhile, its iterations consume (waste?) a lot of energy, and although there are already countless piecemeal applications for it, as with other general purpose technologies it may take many years for its wider productive potential to be apparent.
I think that sidestepped debate – about understanding and consciousness – is the more pressing one:
“Intelligence is at risk… of being replaced by just very rapidly and mechanically following every kind of pathway that a computer can find, but having no sense of a context in which to properly evaluate these things…” – Iain McGilchrist, in conversation at Perspectiva/YouTube (2023), punctuation added.