What about the unknown knowns?

Reading Nate Silver’s On the Edge and The Art of Uncertainty by David Spiegelhalter for the Spectator

The Italian actuary Bruno de Finetti, writing in 1931, was explicit: “Probability,” he wrote, “does not exist.”

Probability, it’s true, is simply the measure of an observer’s uncertainty, and in The Art of Uncertainty British statistician David Spiegelhalter explains how his extraordinary and much-derided science has evolved to the point where it is even able to say useful things about why things have turned out the way they have, based purely on present evidence. Spiegelhalter was a member of the Statistical Expert Group of the 2018 UK Infected Blood Inquiry, and you know his book’s a winner the moment he tells you that between 650 and 3,320 people nationwide died from tainted transfusions. By this late point, along with the pity and the horror, you have a pretty good sense of the labour and ingenuity that went into those peculiarly specific, peculiarly wide-spread numbers.

At the heart of Spiegelhalter’s maze, of course, squats Donald Rumsfeld, once pilloried for his convoluted syntax at a 2002 Department of Defense news briefing, and now immortalised for what came out of it: the best ever description of what it’s like to act under conditions of uncertainty. Rumsfeld’s “unknown unknowns” weren’t the last word, however; Slovenian philosopher Slavoj Žižek (it had to be Žižek) pointed out that there are also “unknown knowns” — “all the unconscious beliefs and prejudices that determine how we perceive reality.”

In statistics, something called Cromwell’s Rule cautions us never to embed absolute certainties (probabilities of 0 or 1) into our models. Still, “unknown knowns” fly easily under the radar, usually in the form of natural language. Spiegelhalter tells how, in 1961, John F Kennedy authorised the Bay of Pigs invasion, quite unaware of the minatory statistics underpinning the phrase “fair chance” in an intelligence briefing.
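
Neither book pauses over the arithmetic, but the force of Cromwell’s Rule is easy to demonstrate. Here is a minimal sketch of a single Bayesian update in Python (the numbers are invented for illustration, not drawn from Spiegelhalter): give a hypothesis any prior strictly between 0 and 1 and evidence will move it; pin it at exactly 0 or 1 and no evidence, however strong, can ever shift it.

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior probability of a hypothesis after one piece of evidence (Bayes' rule)."""
    numerator = likelihood_if_true * prior
    denominator = numerator + likelihood_if_false * (1 - prior)
    return numerator / denominator

# Evidence 20 times likelier if the hypothesis is true shifts a modest prior a long way...
print(bayes_update(0.10, 0.8, 0.04))   # ~0.69
# ...but a prior nailed to 0 or 1 never moves, whatever the evidence says.
print(bayes_update(0.0, 0.8, 0.04))    # 0.0
print(bayes_update(1.0, 0.8, 0.04))    # 1.0
```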

From this, you could draw a questionable moral: that the more we quantify the world, the better our decisions will be. Nate Silver — poker player, political pundit and author of 2012’s The Signal and the Noise — finds much to value in this idea. On the Edge, though, is more about the unforeseen consequences that follow.

There is a sprawling social ecosystem out there that Silver dubs “the River”, which includes “everyone from low-stakes poker pros just trying to grind out a living to crypto kings and adventure-capital billionaires.” On the Edge is, among many other things, a cracking piece of popular anthropology.

Riverians accept that it is very hard to be certain about anything; they abandon certainty for games of chance; and they end up treating everything as a market to be played.

Remember those chippy, cheeky chancers immortalised in films like 21 (2008: MIT’s Blackjack Team takes on Las Vegas) and Moneyball (2011: a young economist up-ends baseball)?

More than a decade has passed, and they’re not buccaneers any more. Today, says Silver, “the Riverian mindset is coming from inside the house.”

You don’t need to be a David Spiegelhalter to be a Riverian. All you need is the willingness to take bets on very long odds.

Professional gamblers learn when and how to do this, and this is why that subset of gamblers called Silicon Valley venture capitalists are willing to back wilful contrarians like Elon Musk (on a good day) and (on a bad day) Ponzi-scheming crypto-crooks like Sam Bankman-Fried.

Success as a Riverian isn’t guaranteed. As Silver points out, “a lot of the people who play poker for a living would be better off — at least financially — doing something else.” Then again, those who make it in the VC game expect to double their money every four years. And those who find they’ve backed a Google or a SpaceX can find themselves living in a very odd world indeed.

Recently the billionaire set has been taking an interest in, and investing in, “effective altruism”, a hyper-utilitarian dish cooked up by Oxford philosopher Will MacAskill. “EA” promises to multiply the impact of acts of charity by studying their long-term effectiveness — an approach that naturally appeals to minds focused on quantification. Silver describes the current state of the movement as “stuck in the uncanny valley between being abstractly principled and ruthlessly pragmatic, with the sense that you can kind of make it up as you go along”. Who here didn’t see that one coming? Most of the original EA set now spend their time agonising over the apocalyptic potential of artificial intelligence.

The trick to Riverian thinking is to decouple things, in order to measure their value. Rather than say, “The Chick-fil-A CEO’s views on gay marriage have put me off my lunch,” you say, “the CEO’s views are off-putting, but this is a damn fine sandwich — I’ll invest.”

That such pragmatism might occasionally ding your reputation, we’ll take as read. But what happens when you do the opposite, glomming context after context onto every phenomenon in pursuit of some higher truth? Soon everything becomes morally equivalent to everything else and thinking becomes impossible.

Silver mentions a December 2023 congressional hearing in which the tone-deaf presidents of Harvard, Penn and MIT, in their sophomoric efforts to be right about all things all at once all the time, managed to argue their way into anti-Semitism. (It’s on YouTube if you haven’t seen it already. The only thing I can compare it to is how The Fast Show’s unlucky Alf used to totter invariably toward the street’s only open manhole.) No wonder that the left-leaning, non-Riverian establishment in politics and education is becoming, in Silver’s words, “a small island threatened by a rising tide of disapproval.”

We’d be foolish in the extreme to throw in our lot with the Riverians, though: people whose economic model reduces to betting long odds on the hobby-horses of contrarian asshats, never mind what gets broken in the process.

If we want a fairer, more equally apportioned world, these books should convince us to spend less time worrying about what people are thinking, and more time attending to how they are thinking.

We cannot afford to be ridden by unknown knowns.

 

“Fears about technology are fears about capitalism”

Reading How AI Will Change Your Life by Patrick Dixon and AI Snake Oil by Arvind Narayanan and Sayash Kapoor, for the Telegraph

According to Patrick Dixon, Arvind Narayanan and Sayash Kapoor, artificial intelligence will not bring about the end of the world. It isn’t even going to bring about the end of human civilisation. It’ll struggle even to take over our jobs. (If anything, signs point to a decrease in unemployment.)

Am I alone in feeling cheated here? In 2014, Stephen Hawking said we were doomed. A decade later, Elon Musk is saying much the same. Last year, Musk and other CEOs and scientists signed an open letter from the Future of Life Institute, demanding a pause on giant AI experiments.

But why listen to fiery warnings from the tech industry? Of 5,400 large IT projects (for instance, creating a large data warehouse for a bank) recorded by 2012 in a rolling database maintained by McKinsey, nearly half went over budget, and over half under-delivered. In How AI Will Change Your Life, author and business consultant Dixon remarks, “Such consistent failures on such a massive scale would never be tolerated in any other area of business.” Narayanan and Kapoor, both computer scientists, say that academics in this field are no better. “We probably shouldn’t care too much about what AI experts think about artificial general intelligence,” they write. “AI researchers have often spectacularly underestimated the difficulty of achieving AI milestones.”

These two very different books want you to see AI from inside the business. Dixon gives us plenty to think about: AI’s role in surveillance; AI’s role in intellectual freedom and copyright; AI’s role in warfare; AI’s role in human obsolescence – his exhaustive list runs to over two dozen chapters. Each of these debates matters, but we would be wrong to think that they are driven by, or are even about, technology at all. Again and again, they are issues of money: about how production gravitates towards automation to save labour costs; or about how AI tools are more often than not used to achieve imaginary efficiencies at the expense of the poor and the vulnerable. Why go to the trouble of policing poor neighbourhoods if the AI can simply round up the usual suspects? As the science-fiction writer Ted Chiang summed up in June 2023, “Fears about technology are fears about capitalism.”

As both books explain, there are three main flavours of artificial intelligence. Large language models power chatbots, of which GPT-4, Gemini and the like will be most familiar to readers. They are bullshitters, in the sense that they’re trained to produce plausible text, not accurate information, and so fall under philosopher Harry Frankfurt’s definition of bullshit as speech that is intended to persuade without regard for the truth. At the moment they work quite well, but wait a year or two: as the internet fills with AI-generated content, chatbots and their ilk will begin to regurgitate their own pabulum, and the human-facing internet will decouple from truth entirely.

Second, there are AI systems whose superior pattern-matching spots otherwise invisible correlations in large datasets. This ability is handy, going on miraculous, if you’re tackling significant, human problems. According to Dixon, for example, Klick Labs in Canada has developed a test that can diagnose Type 2 diabetes with over 85 per cent accuracy using just a few seconds of the patient’s voice. Such systems have proved less helpful, however, in Chicago. Narayanan and Kapoor report how, lured by promises of instant alerts to gun violence, the city poured nearly 49 million dollars into ShotSpotter, a system that has been questioned for its effectiveness after police fatally shot a 13-year-old boy in 2021.

Last of the three types is predictive AI: the least discussed, least successful, and – in the hands of the authors of AI Snake Oil (4 STARS) – by some way the most interesting. So far, we’ve encountered problems with AI’s proper working that are fixable, at least in principle. With bigger, better datasets – this is the promise – we can train AI to do better. Predictive AI systems are different. These are the ones that promise to find you the best new hires, flag students for dismissal before they start to flounder, and identify criminals before they commit criminal acts.

They won’t, however, because they can’t. Drawing broad conclusions about general populations is often the stuff of social science, and social science datasets tend to be small. But were you to have a big dataset about a group of people, would AI’s ability to say things about the group let it predict the behaviour of one of its individuals? The short answer is no. Individuals are chaotic in the same way as earthquakes are. It doesn’t matter how much you know about earthquakes; the one thing you’ll never know is where and when the next one will hit.

How AI Will Change Your Life is not so much a book as a digest of bullet points for a PowerPoint presentation. Business types will enjoy Dixon’s meticulous lists and his willingness to argue both sides against the middle. If you need to acquire instant AI mastery in time for your next board meeting, Dixon’s your man. Being a dilettante, I will stick with Narayanan and Kapoor, if only for this one-liner, which neatly captures our confused enthusiasm for little black boxes that promise the world. “It is,” they say, “as if everyone in the world has been given the equivalent of a free buzzsaw.”

 

 

Not even wrong

Reading Yuval Noah Harari’s Nexus for the Telegraph

In his memoirs, the German-British physicist Rudolf Peierls recalls the sighing response his colleague Wolfgang Pauli once gave to a scientific paper: “It is not even wrong.”

Some ideas are so incomplete, or so vague, that they can’t even be judged. Yuval Noah Harari’s books are notoriously full of such ideas. But then, given what Harari is trying to do, this may not matter very much.

Take this latest offering: a “brief history” that still finds room for viruses and Neanderthals, the Talmud and Elon Musk’s Neuralink and the Thirty Years’ War. Has Harari found a single rubric under which to combine all human wisdom and not a little of its folly? Many a pub bore has entertained the same conceit. And Harari is tireless: “To appreciate the political ramifications of the mind–body problem,” Harari writes, “let’s briefly revisit the history of Christianity.” Harari is a writer who’s never off-topic, but only because his topic is everything.

Root your criticism of Harari in this, and you’ve missed the point, which is that he’s writing this way on purpose. His single goal is to give you a taste of the links between things, without worrying too much about the things themselves. Any reader old enough to remember James Burke’s idiosyncratic BBC series Connections will recognise the formula, and know how much sheer joy and exhilaration it can bring to an audience that isn’t otherwise spending every waking hour grazing the “smart thinking” shelf at Waterstone’s.

Well-read people don’t need Harari.

Nexus’s argument goes like this: civilisations are (among other things) information networks. Totalitarian states centralise their information, which grows stale as a consequence. Democracies distribute their information, with checks and balances to keep the information fresh.

Harari’s key point here is that in neither case does the information have to be true. A great deal of it is not true. At best it’s intersubjectively true (Santa Claus, human rights and money are real by consensus: they have no basis in the material world). Quite a lot of our information is fiction, and a fraction of that fiction is downright malicious falsehood.

It doesn’t matter to the network, which uses that information more or less agnostically, to establish order. Nor is this necessarily a problem, since an order based on truth is likely to be a lot more resilient and pleasant to live under than an order based on cultish blather.

This typology gives Harari the chance to wax lyrical over various social and cultural arrangements, historical and contemporary. Marxism and populism both get short shrift, in passages that are memorable, pithy, and, dare I say it, wise.

In the second half of the book, Harari invites us to stare like rabbits into the lights of the oncoming AI juggernaut. Artificial intelligence changes everything, Harari says, because just as humans create intersubjective realities, computers create inter-computer realities. Pokémon Go is an example of an inter-computer reality. So — rather more concerningly — are the money markets.

Humans disagree with each other all the time, and we’ve had millennia to practise thinking our way into other heads. The problem is that computers don’t have any heads. Their intelligence is quite unlike our own. We don’t know what They’re thinking because, by any reasonable measure, “thinking” does not describe what They are doing.

Even this might not be a problem, if only They would stop pretending to be human. Harari cites a 2022 study showing that the 5 per cent of Twitter users that are bots are generating between 20 and 30 per cent of the site’s content.

Harari quotes Daniel Dennett’s blindingly obvious point that, in a society where information is the new currency, we should ban fake humans the way we once banned fake coins.

And that is that, aside from the shouting — and there’s a fair bit of that in the last pages, futurology being a sinecure for people who are not even wrong.

Harari’s iconoclastic intellectual reputation is wholly undeserved, not because he does a bad job, but because he does such a superb job of being the opposite of an iconoclast. Harari sticks the world together in a gleaming shape that inspires and excites. If it holds only for as long as it takes to read the book, still, dazzled readers should feel themselves well served.

Just which bits of the world feel human to you?

Reading Animals, Robots, Gods by Webb Keane for New Scientist

No society we know of ever lived without morals. Roughly the same ethical ideas arise, again and again, in the most diverse societies. Where do these ideas of right and wrong come from? Might there be one ideal way to live?

Michigan-based anthropologist Webb Keane argues that morality does not arise from universal principles, but from the human imagination. Moral ideas are sparked in the friction between objectivity, in which we think about the world as if it were a story, and subjectivity, in which we’re in some sort of conversation with the world.

A classic trolley problem elucidates Keane’s point. If you saw an out-of-control trolley (tram car) hurtling towards five people, and could pull a switch that sent the trolley down a different track, killing only one innocent bystander, you would more than likely choose to pull the lever. If, on the other hand, you could save five people by pushing an innocent bystander into the path of the trolley (using him, in Keane’s delicious phrase, “as an ad hoc trolley brake”), you’d more than likely choose not to interfere. The difference in your reaction turns on whether you are looking at the situation objectively, at some mechanical remove, or whether you subjectively imagine yourself in the thick of the action.

What moral attitude we adopt to situations depends on how socially charged we think they are. I’d happily kick a stone down the road; I’d never kick a dog. Where, though, are the boundaries of this social world? If you can have a social relationship with your pet dog, can you have one with your decorticate child? Your cancer tumour? Your god?

Keane says that it’s only by asking such questions that we acquire morals in the first place. And we are constantly trying to tell the difference between the social and the non-social, testing connections and experimenting with boundaries, because the question “just what is a human being, anyway?” lies at the heart of all morality.

Readers of Animals, Robots, Gods will encounter a wide range of non-humans, from sacrificial horses to chatbots, with whom they might conceivably establish a social relationship. Frankly, it’s too much content for so short a book. Readers interested in the ethics of artificial intelligence, for instance, won’t find much new insight here. On the other hand, I found Keane’s distillation of fieldwork into the ethics of hunting and animal sacrifice both gripping and provoking.

We also meet humans enhanced and maintained by technology. Keane reports a study by anthropologist Cheryl Mattingly in which devout Los Angeles-based Christians Andrew and Darlene refuse to turn off the machines keeping their brain-dead daughter alive. The doctors believe that, in the effort to save her, their science has at last cyborgised the girl to the point at which she is no longer a person. The parents believe that, medically maintained or not, cognisant or not, their child’s being alive is significant, and sufficient to make her a person. This is hardly some simplistic “battle between religion and science”. Rather, it’s an argument about where we set the boundaries within which we apply moral imperatives like the one telling us not to kill.

Morals don’t just guide lived experience: they arise from lived experience. There can be no trolley problems without trolleys. This, Keane argues, is why morality and ethics are best approached from an anthropological perspective. “We cannot make sense of ethics, or expect them of others, without understanding what makes them inhabitable, possible ways to live,” he writes. “And we should neither expect, nor, I think, hope that the diversity of ways of life will somehow converge onto one ‘best’ way of living.”

We communicate best with strangers when we accept them as moral beings. A western animal rights activist would never hunt an animal. A Chewong hunter from Malaysia wouldn’t dream of laughing at one. And if these strangers really want to get the measure of each other, they should each ask the same, devastatingly simple question:

Just which bits of the world feel human to you?

Life trying to understand itself

Reading Life As No One Knows It: The Physics of Life’s Emergence by Sara Imari Walker and The Secret Life of the Universe by Nathalie A Cabrol, for the Telegraph

How likely is it that we’re not alone in the universe? The idea goes in and out of fashion. In 1600 the philosopher Giordano Bruno was burned at the stake for this and other heterodox beliefs. Exactly 300 years later the French Académie des sciences announced a prize for establishing communication with life anywhere but on Earth or Mars — since people already assumed that Martians did exist.

The problem — and it’s the speck of grit around which these two wildly different books accrete — is that we’re the only life we know of. “We are both the observer and the observation,” says Nathalie Cabrol, chief scientist at the SETI Institute in California and author of The Secret Life of the Universe, already a bestseller in her native France: “we are life trying to understand itself and its origin.”

Cabrol reckons this may be only a temporary problem, and there are two strings to her optimistic argument.

First, the universe seems a lot more amenable to life than it used to. Not long ago, and well within living memory, we didn’t know whether stars other than our sun had planets of their own, never mind planets capable of sustaining life. The Kepler Space Telescope, launched in March 2009, changed all that. Among the wonders we’ve detected since — planets where it rains molten iron, or molten glass, or diamonds, or metals, or liquid rubies or sapphires — are a number of rocky planets, sitting in the habitable zones of their stars, and quite capable of hosting oceans on their surface. Well over half of all sun-like stars boast such planets. We haven’t even begun to quantify the possibility of life around other kinds of star. Unassuming, plentiful and very long-lived M-dwarf stars might be even more life-friendly.

Then there are the ice-covered oceans of Jupiter’s moon Europa, and Saturn’s moon Enceladus, and the hydrocarbon lakes and oceans of Saturn’s Titan, and Pluto’s suggestive ice volcanoes, and — well, read Cabrol if you want a vivid, fiercely intelligent tour of what may turn out to be our teeming, life-filled solar system.

The second string to Cabrol’s argument is less obvious, but more winning. We talk about life on Earth as if it’s a single family of things, with one point of origin. But it isn’t. Cabrol has spent her career hunting down extremophiles (ask her about volcano diving in the Andes) and has found life “everywhere we looked, from the highest mountain to the deepest abyss, in the most acidic or basic environments, the hottest and coldest regions, in places devoid of oxygen, within rocks — sometimes under kilometers of them — within salts, in arid deserts, exposed to radiation or under pressure”.

Several of these extremophiles would have no problem colonising Mars, and it’s quite possible that a more-Earth-like Mars once seeded Earth with life.

Our hunt for earth-like life — “life like ours” — always had a nasty circularity about it. By searching for an exact mirror of ourselves, what other possibilities were we missing? In The Secret Life Cabrol argues that we now know enough about life to hunt for radically strange lifeforms, in wildly exotic environments.

Sara Imari Walker agrees. In Life As No One Knows It, the American theoretical physicist does more than ask how strange life may get; she wonders whether we have any handle at all on what life actually is. All these words of ours — living, lifelike, animate, inanimate — may turn out to be hopelessly parochial as we attempt to conceptualise the possibilities for complexity and purpose in the universe. (Cabrol makes a similar point: “Defining Life by describing it,” she fears, “is the same as saying that we can define the atmosphere by describing a bird flying in the sky.”)

Walker, a physicist, is painfully aware that among the phenomena that current physics can’t explain are physicists — and, indeed, life in general. (Physics, which purports to uncover an underlying order to reality, is really a sort of hyper-intellectual game of whack-a-mole in which, to explain one phenomenon, you quite often have to abandon your old understanding of another.) Life processes don’t contradict physics. But physics can’t explain them, either. It can’t distinguish between, say, a hurricane and the city of New York, seeing both as examples of “states of organisation maintained far from equilibrium”.

But if physics can’t see the difference, physicists certainly can, and Walker is a fiercely articulate member of that generation of scientists and philosophers — physicists David Deutsch and Chiara Marletto and the chemist Leroy Cronin are others — who are out to “choose life”, transforming physics in the light of evolution.

We’re used to thinking that living things are the product of selection. Walker wants us to imagine that every object in the universe, whether living or not, is the product of selection. She wants us to think of the evolutionary history of things as a property, as fundamental to objects as charge and mass are to atoms.

Walker’s defence of her “assembly theory” is a virtuoso intellectual performance: she’s like the young Daniel Dennett, full of wit, mischief and bursts of insolent brevity which for newcomers to this territory are like oases in the desert.

But to drag this back to where we started: the search for extraterrestrial life — did you know that there isn’t enough stuff in the universe to make all the small molecules that could perform a function in our biology? Even before life gets going, the chemistry from which it is built has to have been massively selected — and we know blind chance isn’t responsible, because we already know what undifferentiated masses of small organic molecules look like; we call this stuff tar.

In short, Walker shows us that what we call “life” is but an infinitesimal fraction of all the kinds of life which may arise out of any number of wholly unfamiliar chemistries.

“When we can run origin-of-life experiments at scale, they will allow us to predict how much variation we should expect in different geochemical environments,” Walker writes. So once again, we have to wait, even more piqued and anxious than before, to meet aliens even stranger than we have imagined or maybe can imagine.

Cabrol, in her own book, makes life even more excruciating for those of us who just want to shake hands with E.T.: imagine, she says, “a shadow biome” of living things so strange, they could be all around us here, on Earth — and we would never know.

Benignant?

Reading The Watermark by Sam Mills for the Times

“Every time I encounter someone,” celebrity novelist Augustus Fate reveals, near the start of Sam Mills’s new novel The Watermark, “I feel a nagging urge to put them in one of my books.”

He speaks nothing less than the literal truth. Journalist and music-industry type Jaime Lancia and his almost-girlfriend, a suicidally inclined artist called Rachel Levy, have both succumbed to Fate’s drugged tea, and while their barely-alive bodies are wasting away in the attic of his Welsh cottage, their spirits are being consigned to a curious half-life as fictional characters. It takes a while for them to awake to their plight, trapped in Thomas Turridge, Fate’s unfinished (and probably unfinishable) Victorianate new novel. The malignant appetites of this paperback Prospero have swallowed rival novelists, too, making Thomas Turridge only the first of several ur-fictional rabbit holes down which Jaime and Rachel must tumble.

Over the not inconsiderable span of The Watermark, we find our star-crossed lovers evading asylum orders in Victorian Oxford, resisting the blandishments of a fictional present-day Manchester, surviving spiritual extinction in a pre-Soviet hell-hole evocatively dubbed “Carpathia”, and coming domestically unstuck in a care-robot-infested near-future London.

Meta-fictions are having a moment. The other day I saw Bertrand Bonello’s new science fiction film The Beast, which has Léa Seydoux and George MacKay playing multiple versions of themselves in a tale that spans generations and which ends, yes, in a care-robot-infested future. Perhaps this coincidence is no more than a sign of the coming-to-maturity of a generation who (finally!) understand science fiction.

In 1957 Philip K. Dick wrote a short, sweet novel called Eye in the Sky, which drove its cast of eight through a succession of subjective realities, each one “ruled” by a different character. While Mills’s The Watermark is no mere homage to that or any other book, it’s obvious she knows how to tap, here and there, into Dick’s madcap energy, in pursuit of her own game.

The Watermark is told variously from Jaime’s and Rachel’s points of view. In some worlds, Jaime wakes up to their plight and must disenchant Rachel. In other worlds, Rachel is the knower, Jaime the amnesiac. Being fictional characters as well as real-life kidnap victims, they must constantly contend with the spurious backstories each fiction lumbers them with. These aren’t always easy to throw over. In one fiction, Jaime and Rachel have a son. Are they really going to abandon him, just so they can save their real lives?

Jaime, over the course of his many transmogrifications, is inclined to fight for his freedom. Rachel is inclined to bed down in fictional worlds that, while existentially unfree, are an improvement on real life — from which she’s already tried to escape by suicide.

The point of all this is to show how we hedge our lives around with stories, not because they are comforting (although they often are) but because stories are necessary: without them, we wouldn’t understand anything about ourselves or each other. Stories are thinking. By far the strongest fictional environment here is 1920s-era Carpathia. Here, a totalitarian regime grinds the star-crossed couple’s necessary fictions to dust, until at last they take psychic refuge in the bodies of wolves and birds.

The Watermark never quite coheres. It takes a conceit best suited to a 1950s-era science-fiction novelette (will our heroes make it back to the real world?), couples it to a psychological thriller (what’s up with Rachel?), and runs this curious algorithm through the fictive mill not once but five times, by which time the reader may well have had a surfeit of “variations on a theme”. Rightly, for a novel of this scope and ambition, Mills serves up a number of false endings on the way to her denouement, and the one that rings most psychologically true is also the most bathetic: “We were supposed to be having our grand love story, married and happy ever after,” Rachel observes, from the perspective of a fictional year 2049, “but we ended up like every other screwed-up middle-aged couple.”

It would be easy to write off The Watermark as a literary trifle. But I like trifle, and I especially appreciate how Mills’s protagonists treat their absurd bind with absolute seriousness. Farce on the outside, tragedy within: this book is full of horrid laughter.

But Mills is not a natural pasticheur, and unfortunately it’s in her opening story, set in Oxford in 1861, that her ventriloquism comes badly unstuck. A young woman “in possession of chestnut hair”? A vicar who “tugs at his ebullient mutton-chops, before resuming his impassioned tirade”? On page 49, the word “benignant”? This is less pastiche, more tin-eared tosh.

Against this serious failing, what defences can we muster? Quite a few. A pair of likeable protagonists who stand up surprisingly well to their repeated eviscerations. A plot that takes storytelling seriously, and would rather serve the reader’s appetites than sneer at them. Last but not least, some excellent incidental invention: to wit, a long-imprisoned writer’s idea of what the 1980s must look like (“They will drink too much ale and be in possession of magical machines”) and, elsewhere, a mother’s choice of bedtime reading material (“The Humanist Book of Classic Fairy Tales, retold by minor, marginalised characters”).

But it’s as Kurt Vonnegut said: “If you open a window and make love to the world, so to speak, your story will get pneumonia.” To put it less kindly: nothing kills the novel faster than aspiration. The Watermark, which wanted to be a very big book about everything, becomes, in the end, something else: a long, involved, self-alienating exploration of itself.

One of those noodly problems

Reading The Afterlife of Data by Carl Öhman for the Spectator

They didn’t call Diogenes “the Cynic” for nothing. He lived to shock the (ancient Greek) world. When I’m dead, he said, just toss my body over the city walls to feed the dogs. The bit of me that I call “I” won’t be around to care.

The revulsion we feel at this idea tells us something important: that the dead can be wronged. Diogenes may not care what happens to his corpse, but we do. And doing right by the dead is a job of work. Some corpses are reduced to ash, some are buried, and some are fed to vultures. In each case the survivors all feel, rightly, that they have treated their loved ones’ remains with respect.

What should we do with our digital remains?

This sounds like one of those noodly problems that keep digital ethicists like Öhman in grant money — but some of the stories in The Afterlife of Data are sure to make the most sceptical reader stop and think. There’s something compelling, and undeniably moving, in one teenager’s account of how, ten years after losing his father, he found they could still play together; at least, he could compete against his dad’s last outing on an old Xbox racing game.

Öhman is not spinning ghost stories here. He’s not interested in digital afterlives. He’s interested in remains, and in emerging technologies that, from the digital data we inadvertently leave behind, fashion our artificially intelligent simulacra. (You may think this is science fiction, but Microsoft doesn’t, and has already taken out several patents.)

This rapidly approaching future, Öhman argues, seems uncanny only because death itself is uncanny. Why should a chatty AI simulacrum prove any more transgressive than, say, a photograph of your lost love, given pride of place on the mantelpiece? We got used to the one; in time we may well get used to the other.

What should exercise us is who owns the data. As Öhman argues, ‘if we leave the management of our collective digital past solely in the hands of industry, the question “What should we do with the data of the dead?” becomes solely a matter of “What parts of the past can we make money on?”’

The trouble with a career in digital ethics is that however imaginative and insightful you get, you inevitably end up playing second-fiddle to some early episode of Charlie Brooker’s TV series Black Mirror. The one entitled “Be Right Back”, in which a dead lover returns in robot form to market upgrades of itself to the grieving widow, stands waiting at the end of almost every road Öhman travels here.

Öhman reminds us that the digital is a human realm, and one over which we can and must exert our values. Unless we actively delete them (in a sort of digital cremation, I suppose) our digital dead are not going away, and we are going to have to accommodate them somehow.

A more modish, less humane writer would make the most of the fact that recording has become the norm, so that, as Öhman puts it, “society now takes place in a domain previously reserved for the dead, namely the archive.” (And, to be fair, Öhman does have a lot of fun with the idea that by 2070, Facebook’s dead will outnumber its living.)

Ultimately, though, Öhman draws readers through the digital uncanny to a place of responsibility. Digital remains are not just a representation of the dead, he says, “they are the dead, an informational corpse constitutive of a personal identity.”

Öhman’s lucid, closely argued foray into the world of posthumous data is underpinned by this sensible definition of what constitutes a person: “A person,” he says, “is the narrative object that we refer to when speaking of someone (including ourselves) in the third person. Persons extend beyond the selves that generate them.” If I disparage you behind your back, I’m doing you a wrong, even though you don’t know about it. If I disparage you after you’re dead, I’m still doing you wrong, though you’re no longer around to be hurt.

Our job is to take ownership of each other’s digital remains and treat them with human dignity. The model Öhman holds up for us to emulate is the Bohemian author and composer Max Brod, who had the unenviable job of deciding what to do with manuscripts left behind by his friend Franz Kafka, who wanted him to burn them. In the end Brod decided that the interests of “Kafka”, the informational body constitutive of a person, overrode (barely) the interests of Franz, his no-longer-living friend.

What to do with our digital remains? Öhman’s excellent reply treats this challenge with urgency, sanity and, best of all, compassion. Max Brod’s decision wasn’t and isn’t obvious, and really, the best you can do in these situations is to make the error you and others can best live with.

Geometry’s sweet spot

Reading Love Triangle by Matt Parker for the Telegraph

“These are small,” says Father Ted in the eponymous sitcom, and he holds up a pair of toy cows. “But the ones out there,” he explains to Father Dougal, pointing out the window, “are far away.”

It may not sound like much of a compliment to say that Matt Parker’s new popular mathematics book made me feel like Dougal, but fans of Graham Linehan’s masterpiece will understand. I mean that I felt very well looked after, and, in all my ignorance, handled with a saint-like patience.

Calculating the size of an object from its spatial position has tried finer minds than Dougal’s. A long virtuoso passage early on in Love Triangle enumerates the half-dozen stages of inductive reasoning required to establish the distance of the largest object in the universe — a feature within the cosmic web of galaxies called The Giant Ring. Over nine billion light years away, the Giant Ring still occupies 34.5 degrees of the sky: now that’s what I call big and far away.

Measuring it has been no easy task, and yet the first, foundational step in the calculation turns out to be something as simple as triangulating the length of a piece of road.

“Love Triangle”, as no one will be surprised to learn, is about triangles. Triangles were invented (just go along with me here) in ancient Egypt, where the regularly flooding river Nile obliterated boundary markers for miles around and made rural land disputes a tiresome inevitability. Geometry, says the historian Herodotus around 430 BC, was invented to calculate the exact size of a plot of land. We’ve no reason to disbelieve him.

Parker spends a good amount of time demonstrating the practical usefulness of basic geometry, which allows us to extract the shape and size of a triangular space from as little as a single angle and the length of a single side. At one point, on a visit to Tokyo, he uses a transparent ruler and a tourist map to calculate the height of the city’s tallest tower, the SkyTree.
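
To make the SkyTree trick concrete, here is a minimal right-triangle sketch in Python. The distance and angle are invented for illustration; they are not Parker’s actual measurements.

```python
import math

def height_from_angle(distance_m, elevation_deg, eye_height_m=1.6):
    """Height of a tower from the horizontal distance to its base and the
    angle of elevation to its tip: one side and one angle of a right triangle."""
    return distance_m * math.tan(math.radians(elevation_deg)) + eye_height_m

# Invented figures: standing 1,200 m from the base, sighting the tip at 28 degrees...
print(round(height_from_angle(1200, 28)))  # ~640 m -- in the SkyTree ballpark (it is 634 m)
```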

Having shown triangles performing everyday miracles, he then tucks into their secret: “Triangles,” he explains, “are in the sweet spot of having enough sides to be a physical shape, while still having enough limitations that we can say generalised and meaningful things about them.” Shapes with more sides get boring really quickly, not least because they become so unwieldy in higher dimensions, which is where so many of the joys of real mathematics reside.

Adding dimensions to triangles adds just one corner per dimension. A square, on the other hand, explodes, doubling its number of corners with each dimension. (A cube has eight.) This makes triangles the go-to shape for anyone who wants to assemble meshes in higher dimensions. All sorts of complicated paths are brought within computational reach, making possible all manner of civilisational triumphs, including (but not limited to) photorealistic animations.
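
The arithmetic behind that contrast is easy to check. A throwaway Python loop (mine, not Parker’s) compares corner counts as the dimensions pile up:

```python
# A simplex (the triangle's family) gains one corner per dimension;
# a hypercube (the square's family) doubles its corners each time.
for dim in range(2, 11):
    simplex_corners = dim + 1   # triangle 3, tetrahedron 4, 5-cell 5, ...
    cube_corners = 2 ** dim     # square 4, cube 8, tesseract 16, ...
    print(f"{dim}D: simplex {simplex_corners} corners, hypercube {cube_corners}")
```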

So many problems can be cracked by reducing them to triangles, there is an entire mathematical discipline, trigonometry, concerned with the relationships between their angles and side lengths. Parker’s adventures on the applied side of trigonometry become, of necessity, something of a blooming, buzzing confusion, but his anecdotes are well judged and lead the reader seamlessly into quite complex territory. Ever wanted to know how Kathleen Lonsdale applied Fourier transforms to X-ray waves, making possible Rosalind Franklin’s work on DNA structure? Parker starts us off on that journey by wrapping a bit of paper around a cucumber and cutting it at a slant. Half a dozen pages later, we may not have the firmest grasp of what Parker calls the most incredible bit of maths most people have never heard of, but we do have a clear map of what we do not know.

Whether Parker’s garrulousness charms you or grates on you will be a matter of taste. I have a pious aversion to writers who feel the need to cheer their readers through complex material every five minutes. But it’s hard not to tap your foot to cheap music, and what could be cheaper than Parker’s assertion that introducing coordinates early on in a maths lesson “could be considered ‘putting Descartes before the course’”?

Parker has a fine old time with his material, and only a curmudgeon can fail to be charmed by his willingness to call Heron’s two-thousand-year-old formula for finding the area of a triangle “stupid” (he’s not wrong, neither) and the elongated pentagonal gyrocupolarotunda a “dumb shape”.

“For survival reasons, I must spread globally”

Reading Trippy by Ernesto Londoño for the Telegraph

Ernesto Londoño’s enviable reputation as a journalist was forged in the conflict zones of Iraq and Afghanistan. In 2017 he landed his dream job as the New York Times Brazil bureau chief, with a roving brief, talented and supportive colleagues, and a high-rise apartment in Rio de Janeiro.

When, not long after, he nearly-accidentally-on-purpose threw himself off his balcony, he knew he was in serious emotional trouble.

It was more than whimsy that led him to look for help at a psychedelic retreat in the Amazon hamlet of Mushu Inu, a place with no running water, where the shower facility consisted of a large tub guarded by a couple of tarantulas. He had seen what taking antidepressant medications had done for acquaintances in the US military (nothing good), and thought to write at first hand about what, in the US, has become an increasingly popular alternative therapy: drinking ayahuasca tea.

Ayahuasca is prepared by boiling chunks of an Amazonian vine called Banisteriopsis caapi with the leaves of a shrubby plant called Psychotria viridis. The leaves contain a psychoactive compound, and the vines stop the drinker from metabolising it too quickly. The experience that follows is, well, trippy.

By disrupting routine patterns of thought and memory processing, psychedelic trips offer depressed and traumatised people a reprieve from their obsessive thought patterns. They offer them a chance to recalibrate and reinterpret past experiences. How they do this is up to them, however, and this is why psychedelics are anything but a harmless recreational drug. It’s as possible to step out of a bad trip screaming psychotically at the trees as it is to emerge, Buddha-like, from a carefully guided psychedelic experience. The Yawanawá people of the Amazon, who have effectively become global ambassadors for the brew (which, incidentally, they’ve only been making for a few hundred years), make no bones about its harmful potential. The predominantly western organisers of ayahuasca-fuelled tourist retreats are rather less forthcoming.

Psychedelics promise revolutionary treatments for PTSD. In the US, pharmaceutical researchers funded by government are attempting to subtract all the whacky, enjoyable and humane elements of the ayahuasca experience, and thereby distil a kind of aspirin for war trauma. It’s a singularly dystopian project, out to erase the affect of atrocities in the minds of those who might, thanks to that very treatment, be increasingly inclined to perpetrate them.

On one ayahuasca webforum, meanwhile, the brew speaks to her counter-cultural acolytes. “If I don’t spread globally I will face extinction, similar to Humans,” a feminised ayahuasca cuppa proclaims. “For survival reasons, I must spread globally, while Humans must accept my sacred medicine to heal their afflicted soul.”

Londoño has drunk the brew, if not the Kool-Aid, and says his ayahuasca experiences saved, if not his life, then at the very least his capacity for happiness. He maintains a great affection for the romantics and idealists whom he depicts in pursuit, according to their different lights, of the good and the healthful in psychedelic experience.

His own survey leads him from psychedelic “bootcamps” in the rainforest to upscale clinics in Costa Rica tending to the global one per cent, to US “churches”, who couch therapy as religious experience so that they can import ayahuasca and get around the strictures of the DEA. The most startling sections, for me, dealt with Santo Daime, a syncretic Brazilian faith that contrives to combine ayahuasca with a proximal Catholic liturgy.

Trippy is told, as much as possible, in the first person, through anecdote and memoir. Seeing the perils and the promise of psychedelic experience play out in Londoño’s own mind, as he comes to terms over years with his own quite considerable personal traumas, is a privilege, though it brings with it moments of tedium, as though we were being expected to sit through someone’s gushing account of their cheese dreams. This — let’s call it the stupidity of seriousness — is a besetting tonal problem with the introspective method. William James fell foul of it in The Principles of Psychology of 1890, so it would be a bit rich of me to twit Londoño about it in 2024.

Still, it’s fair to point out, I think, that Londoño, an accomplished print journalist, is writing, day on day, for a readership of predominantly US liberals — surely the most purse-lipped and conservative readership on Earth. So maybe, with Trippy as our foundation, we should now seek out a looser, more gonzo treatment: one wild enough to handle the wholesale spiritual regearing promised by the psychedelics coming to a clinic, church, and holiday brochure near you.

 

“The most efficient conformity engines ever invented”

Reading The Anxious Generation by Jonathan Haidt for The Spectator, 30 March 2024

What’s not to like about a world in which youths are involved in fewer car accidents, drink less, and wrestle with fewer unplanned pregnancies?

Well, think about it: those kids might not be wiser; they might simply be afraid of everything. And what has got them so afraid? A little glass rectangle, “a portal in their pockets” that entices them into a world that’s “exciting, addictive, unstable and… unsuitable for children”.

So far, so paranoid — and there’s a delicious tang of the documentary-maker Adam Curtis about social psychologist Jonathan Haidt’s extraordinarily outspoken, extraordinarily well-evidenced diatribe against the creators of smartphone culture, men once hailed “as heroes, geniuses, and global benefactors who,” Haidt says, “like Prometheus, brought gifts from the gods to humanity.”

The technological geegaw Haidt holds responsible for the “great rewiring” of brains of people born after 1995 is not, interestingly enough, the iPhone itself (first released in 2007) but its front-facing camera, released with the iPhone 4 in June 2010. Samsung added one to its Galaxy the same month. Instagram launched in the same year. Now users could curate online versions of themselves on the fly — and they do, incessantly. Maintaining an online self is a 24/7 job. The other day on Crystal Palace Parade I had to stop a pram from rolling into the street while the young mother vogued and pouted into her smartphone.

Anecdotes are one thing; evidence is another. The point of The Anxious Generation is not to present phone-related pathology as though it were a new idea, but rather to provide robust scientific evidence for what we’ve all come to assume is true: that there is a causal link (not just some modish dinner-party correlation) between phone culture and the ever more fragile mental state of our youth. “These companies,” Haidt says, “have rewired childhood and changed human development on an almost unimaginable scale.”

Haidt’s data are startling. Between 2010 and 2015, depression in teenage girls and boys became two and a half times more prevalent. From 2010 to 2020, the rate of self-harm among young adolescent girls nearly tripled. The book contains a great many bowel-loosening graphs, with titles like “High Psychological Distress, Nordic Nations” and “Alienation in School, Worldwide”. There’s one in particular I can’t get out of my head, showing the percentage of US students in 8th, 10th and 12th grade who said they were happy in themselves. Between 2010 and 2015 this “self-satisfaction score” falls off a cliff.

The Anxious Generation revises conclusions Haidt drew in 2018, while collaborating with the lawyer Greg Lukianoff on The Coddling of the American Mind. Subtitled “How good intentions and bad ideas are setting up a generation for failure”, that book argued that universities and other institutes of higher education (particularly in the US) were teaching habits of thinking so distorted, they were triggering depression and anxiety among their students. Why else would students themselves be demanding that colleges protect them from books and speakers that made them feel “unsafe”? Ideas that had caused little or no controversy in 2010 “were, by 2015, said to be harmful, dangerous, or traumatising,” Haidt remembers.

Coddling’s anti-safe-space, “spare the rod and spoil the child” argument had merit, but Haidt soon came to realise it didn’t begin to address the scale of the problem: “by 2017 it had become clear that the rise of depression and anxiety was happening in many countries, to adolescents of all educational levels, social classes and races.”

Why are people born after 1996 so — well — different? So much more anxious, so much more judgemental, so much more miserable? Phone culture is half of Haidt’s answer; the other is a broader argument about “safetyism”, which Haidt defines as “the well-intentioned and disastrous shift toward overprotecting children and restricting their autonomy in the ‘real world’.”

Boys suffer more from being shut in and overprotected. Girls suffer more from the way digital technologies monetise and weaponise peer hierarchies. Although the gender differences are interesting, it’s the sheer scale of harms depicted here that should galvanise us. Haidt’s suggested solutions are common sense and commonplace: stop punishing parents for letting their children have some autonomy. Allow children plenty of unstructured free play. Ban phones in school.

For Gen-Z, this all comes too late. Over-protection in the real world, coupled with an almost complete lack of protections in the virtual world, has consigned a generation of young minds to what is in essence a play-free environment. In the distributed, unspontaneous non-space of the digital device, every action is performed in order to achieve a prescribed goal. Every move is strategic. “Likes” and “comments”, “thumbs-up” and “thumbs-down” provide immediate real-time metrics on the efficacy or otherwise of thousands of micro-decisions an hour, and even trivial mistakes bring heavy costs.

In a book of devastating observations, this one hit home very hard: that these black mirrors of ours are “the most efficient conformity engines ever invented”.