A sorcerer’s apprentice decides to use magic to help clean his master’s castle. The broom he enchants works well, dousing the floors with pails full of water. When the work is finished, the apprentice tries to stop the broom. Then, he tries to smash the broom. But the broom simply splits and regrows, working twice as hard as before, four times as hard, eight times as hard… until the rooms are awash and the apprentice all but drowns.
I wonder if Johann Wolfgang von Goethe’s 1797 poem sprang to mind as Mustafa Suleyman (co-founder of AI pioneers DeepMind, now CEO of Inflection AI) composed his new book, The Coming Wave? Or perhaps the shade of Robert Oppenheimer darkened Suleyman’s descriptions of artificial intelligence, and his own not insignificant role in its rise? “Decades after their invention,” he muses, “the architects of the atomic bomb could no more stop a nuclear war than Henry Ford could stop a car accident.”
Suleyman and his peers, having launched artificially intelligent systems upon the world, are right to tremble. At one point Suleyman compares AI to “an evolutionary burst like the Cambrian explosion, the most intense eruption of new species in the Earth’s history.”
The Coming Wave is mostly about the destabilising effects of new technologies. It describes a wildly asymmetric world where a single quantum computer can render the world’s entire encryption infrastructure redundant, and an AI mapping new drugs can be repurposed to look for toxins at the press of a return key.
Extreme futures beckon: would you prefer subjection under an authoritarian surveillance state, or radical self-reliance in a world where “an array of assistants… when asked to create a school, a hospital, or an army, can make it happen in a realistic timeframe”?
The predatory city states dominating this latter, neo-Renaissance future may seem attractive to some. Suleyman is not so sure: “Renaissance would be great,” he writes; “unceasing war with tomorrow’s military technology, not so much.”
A third future possibility is infocalypse, “where the information ecosystem grounding knowledge, trust, and social cohesion… falls apart.”
We’ll come back to this.
As we navigate between these futures, we should stay focused on current challenges. “I’ve gone to countless meetings trying to raise questions about synthetic media and misinformation, or privacy, or lethal autonomous weapons,” Suleyman complains, “and instead spent the time answering esoteric questions from otherwise intelligent people about consciousness, the Singularity, and other matters irrelevant to our world right now.”
Historian David Runciman makes an analogous point in The Handover, an impressive (and impressively concise) history of the limited liability company and the modern nation state. The emergence of both these “artificial agents” at the end of the 18th century was, Runciman argues, “the first Singularity”, when we tied our individual fates to two distinct but compatible autonomous computational systems.
“These bodies and institutions have a lot more in common with robots than we might think,” argues Runciman. Our political systems are already radically artificial and autonomous, and if we fail to appreciate this, we won’t understand what to do, or what to fear, when they acquire new flavours of intelligence.
Long-lived, sustainable, dynamic states — ones with a healthy balance between political power and civil society — won’t keel over under the onslaught of helpful AI, Runciman predicts. They’ll embrace it, and grow increasingly automated and disconnected from human affairs. How will we ever escape this burgeoning machine utopia?
Well, human freedom may still be a force to reckon with, according to Igor Tulchinsky. Writing with Christopher Mason in The Age of Prediction, Tulchinsky explores why the more predictable world ushered in by AI may not necessarily turn out to be a safer one. Humans evolved to take risks, and weird incentives emerge whenever predictability increases and risk appears to decline.
Tulchinsky, a quant who analyzes the data flows in financial markets, and Mason, a geneticist who maps dynamics across human and microbial genomes, make odd bedfellows. Mason, reasonably enough, welcomes any advance that makes medicine more reliable. Tulchinsky fears that perfect prediction in the markets will render humans as docile and demoralised as cattle. The authors’ spirited dialogue illuminates their detailed survey of what predictive technologies actually do, in theatres from warfare to recruitment, policing to politics.
Let’s say Tulchinsky and Mason are right, and that individual free will survives governance by all-seeing machines. It does not follow at all that human societies will survive their paternalistic attentions.
This was the unexpected sting in the tail delivered by Edward Geist in Deterrence under Uncertainty, a heavyweight but unexpectedly gripping examination of AI’s role in nuclear warfare.
Geist, steeped in the history and tradecraft of deception, reckons the smartest agent — be it meat or machine — can be rendered self-destructively stupid by an elegant bit of subterfuge. Fakery is so cheap, easy and effective that Geist envisions a future in which artificially intelligent “fog-of-war machines” create a world that favours neither belligerents nor conciliators, but deceivers: “those who seek to confound and mislead their rivals.”
In Geist’s hands, Suleyman’s “infocalypse” becomes a weapon, far cleaner and cheaper than any mere bomb. Imagine future wars fought entirely through mind games. In this world of shifting appearances, littered with bloody accidents and mutual misconstruals, people are persuaded that their adversary does not want to hurt them. Rather than living in fear of retaliation, they come to realise the adversary’s values are, and always have been, better than their own.
Depending on your interests, your politics, and your sensitivity to disinformation, you may well suspect that this particular infocalyptic future is already upon us.
And, says Geist, at his most Machiavellian (he is the most difficult of the writers here; also the most enjoyable): “would it not be much more preferable for one’s adversaries to decide one had been right all along, and welcome one’s triumph?”