Not even wrong

Reading Yuval Noah Harari’s Nexus for the Telegraph

In his memoirs, the German-British physicist Rudolf Peierls recalls the sighing response his colleague Wolfgang Pauli once gave to a scientific paper: “It is not even wrong.”

Some ideas are so incomplete, or so vague, that they can’t even be judged. Yuval Noah Harari’s books are notoriously full of such ideas. But then, given what Harari is trying to do, this may not matter very much.

Take this latest offering: a “brief history” that still finds room for viruses and Neanderthals, the Talmud and Elon Musk’s Neuralink and the Thirty Years’ War. Has Harari found a single rubric under which to combine all human wisdom and not a little of its folly? Many a pub bore has entertained the same conceit. And Harari is tireless: “To appreciate the political ramifications of the mind–body problem,” he writes, “let’s briefly revisit the history of Christianity.” Harari is a writer who’s never off-topic, but only because his topic is everything.

Root your criticism of Harari in this, and you’ve missed the point, which is that he’s writing this way on purpose. His single goal is to give you a taste of the links between things, without worrying too much about the things themselves. Any reader old enough to remember James Burke’s idiosyncratic BBC series Connections will recognise the formula, and know how much sheer joy and exhilaration it can bring to an audience that isn’t otherwise spending every waking hour grazing the “smart thinking” shelf at Waterstones.

Well-read people don’t need Harari.

Nexus’s argument goes like this: civilisations are (among other things) information networks. Totalitarian states centralise their information, which grows stale as a consequence. Democracies distribute their information, with checks and balances to keep the information fresh.

Harari’s key point here is that in neither case does the information have to be true. A great deal of it is not true. At best it’s intersubjectively true (Santa Claus, human rights and money are real by consensus: they have no basis in the material world). Quite a lot of our information is fiction, and a fraction of that fiction is downright malicious falsehood.

It doesn’t matter to the network, which uses that information more or less agnostically, to establish order. Nor is this necessarily a problem, though an order based on truth is likely to be a lot more resilient and pleasant to live under than an order based on cultish blather.

This typology gives Harari the chance to wax lyrical over various social and cultural arrangements, historical and contemporary. Marxism and populism both get short shrift, in passages that are memorable, pithy, and, dare I say it, wise.

In the second half of the book, Harari invites us to stare like rabbits into the lights of the oncoming AI juggernaut. Artificial intelligence changes everything, Harari says, because just as humans create intersubjective realities, computers create inter-computer realities. Pokémon Go is an example of an inter-computer reality. So — rather more concerningly — are the money markets.

Humans disagree with each other all the time, and we’ve had millennia to practise thinking our way into other heads. The problem is that computers don’t have any heads. Their intelligence is quite unlike our own. We don’t know what They’re thinking because, by any reasonable measure, “thinking” does not describe what They are doing.

Even this might not be a problem, if only They would stop pretending to be human. Harari cites a 2022 study showing that the 5 per cent of Twitter users that are bots generate between 20 and 30 per cent of the site’s content.

Harari quotes Daniel Dennett’s blindingly obvious point that, in a society where information is the new currency, we should ban fake humans the way we once banned fake coins.

And that is that, aside from the shouting — and there’s a fair bit of that in the last pages, futurology being a sinecure for people who are not even wrong.

Harari’s iconoclastic intellectual reputation is wholly undeserved, not because he does a bad job, but because he does such a superb job of being the opposite of an iconoclast. Harari sticks the world together in a gleaming shape that inspires and excites. If it holds only for as long as it takes to read the book, still, dazzled readers should feel themselves well served.

Just which bits of the world feel human to you?

Reading Animals, Robots, Gods by Webb Keane for New Scientist

No society we know of ever lived without morals. Roughly the same ethical ideas arise, again and again, in the most diverse societies. Where do these ideas of right and wrong come from? Might there be one ideal way to live?

Michigan-based anthropologist Webb Keane argues that morality does not arise from universal principles, but from the human imagination. Moral ideas are sparked in the friction between objectivity, when we think about the world as if it were a story, and subjectivity, in which we’re in some sort of conversation with the world.

A classic trolley problem elucidates Keane’s point. If you saw an out-of-control trolley (tram car) hurtling towards five people, and could pull a switch that sent the trolley down a different track, killing only one innocent bystander, you would more than likely choose to pull the lever. If, on the other hand, you could save five people by pushing an innocent bystander into the path of the trolley (using him, in Keane’s delicious phrase, “as an ad hoc trolley brake”), you’d more than likely choose not to interfere. The difference in your reaction turns on whether you are looking at the situation objectively, at some mechanical remove, or whether you subjectively imagine yourself in the thick of the action.

What moral attitude we adopt to situations depends on how socially charged we think they are. I’d happily kick a stone down the road; I’d never kick a dog. Where, though, are the boundaries of this social world? If you can have a social relationship with your pet dog, can you have one with your decorticate child? Your cancer tumour? Your god?

Keane says that it’s only by asking such questions that we acquire morals in the first place. And we are constantly trying to tell the difference between the social and the non-social, testing connections and experimenting with boundaries, because the question “just what is a human being, anyway?” lies at the heart of all morality.

Readers of Animals, Robots, Gods will encounter a wide range of non-humans, from sacrificial horses to chatbots, with whom they might conceivably establish a social relationship. Frankly, it’s too much content for so short a book. Readers interested in the ethics of artificial intelligence, for instance, won’t find much new insight here. On the other hand, I found Keane’s distillation of fieldwork into the ethics of hunting and animal sacrifice both gripping and provoking.

We also meet humans enhanced and maintained by technology. Keane reports a study by anthropologist Cheryl Mattingly in which devout Los Angeles-based Christians Andrew and Darlene refuse to turn off the machines keeping their brain-dead daughter alive. The doctors believe that, in the effort to save her, their science has at last cyborgised the girl to the point at which she is no longer a person. The parents believe that, medically maintained or not, conscious or not, their child’s being alive is significant, and sufficient to make her a person. This is hardly some simplistic “battle between religion and science”. Rather, it’s an argument about where we set the boundaries within which we apply moral imperatives like the one telling us not to kill.

Morals don’t just guide lived experience: they arise from lived experience. There can be no trolley problems without trolleys. This, Keane argues, is why morality and ethics are best approached from an anthropological perspective. “We cannot make sense of ethics, or expect them of others, without understanding what makes them inhabitable, possible ways to live,” he writes. “And we should neither expect, nor, I think, hope that the diversity of ways of life will somehow converge onto one ‘best’ way of living.”

We communicate best with strangers when we accept them as moral beings. A western animal rights activist would never hunt an animal. A Chewong hunter from Malaysia wouldn’t dream of laughing at one. And if these strangers really want to get the measure of each other, they should each ask the same, devastatingly simple question:

Just which bits of the world feel human to you?

A citadel beset by germs

Watching Mariam Ghani’s Dis-Ease for New Scientist

There aren’t many laugh-out-loud moments in Mariam Ghani’s long documentary about our war on germs. The sight of two British colonial hunters in Ceylon bringing down a gigantic papier mâché mosquito is a highlight.

Ghani intercuts public information films (a rich source of sometimes inadvertent comedy) with monster movies, documentaries, thrillers, newsreel and histology lab footage to tell the story of an abiding medical metaphor: the body as citadel, beset by germs.

Dis-Ease, which began life as an artistic residency at the Wellcome Institute, is a visual feast, with a strong internal logic. Had it been left to stand on its own feet, then it might have borne comparison with Godfrey Reggio’s Koyaanisqatsi and Simon Pummell’s Bodysong: films which convey their ideas in purely visual terms.

But the Afghan-American photographer Ghani is just as devoted to the power of words. Interviews and voice-overs abound. The result is a messy collision of two otherwise perfectly valid documentary styles.

There’s little in Dis-Ease’s narrative to take exception to. Humoral theory (in which the sick body falls out of internal balance) was a central principle in Western medicine from antiquity into the 19th century. It was eventually superseded by germ theory, in which the sick body is assailed by pathogens. Germ theory enabled globally transformative advances in public health, but it was most effectively conveyed through military metaphors, and these quickly acquired a life of their own. In its brief foray into the history of eugenics, Dis-Ease reveals, in stark terms, how “wars on disease” mutate into wars on groups of people.

A “war on disease” also preserves and accentuates social inequities, the prevailing assumption being that outbreaks spread from the developing south to the developed north, and the north then responds by deploying technological fixes in the opposite direction.

At its very founding in 1948, the World Health Organisation argued against this idea, and the eradication of smallpox in 1980 was achieved through international consensus, by funding primary health care across the globe. The attempted eradication of polio, begun in 1988, has been a good deal more problematic, and the film argues that this is down to the developed world’s imposition by fiat of a very narrow medical brief, even as health care services in even the poorest countries were coming under pressure to privatise.

Ecosystems are being eroded, and zoonotic diseases are emerging with ever greater frequency. Increasingly robust and well-coordinated military responses to frightening outbreaks are understandable, and they can, in the short term, be quite effective. For example: to criticise the way British and Sierra Leonean militaries intervened in Sierra Leone in 2014 to establish a National Ebola Response Centre would be to put ideology in the way of common sense.

Still, the film argues, such actions may worsen problems on the ground, since they absorb all the money and political will that might have been spent on public health necessities like housing and sanitation (and a note to Bond villains here: the surest way to trigger a global pandemic is to undermine the health of some small exposed population).

In interview, the sociologist Hannah Landecker points out that since adopting germ theory, we have been managing life with death. (Indeed, that is pretty much exactly what the word “antibiotic” means.) Knowing what we know now about the sheer complexity and vastness of the microbial world, we should now be looking to manage life with life, collaborating with the microbiome, ensuring health rather than combating disease.

What this means exactly is beyond the scope of Ghani’s film, and some of the gestures here towards a “one health” model of medicine — as when a hippy couple start repeating the refrain “life and death are one” — caused this reviewer some moral discomfort.

Anthropologists and sociologists dominate Dis-Ease’s discourse, making it a snapshot of what today’s generation of desk-bound academics think about disease. Many speak sense, though a special circle of Hell is being reserved for the one who, having read too much science fiction, glibly asserts that we can be cured “by becoming something else entirely”.

If they’re out there, why aren’t they here?

The release of Alien: Romulus inspired this article for the Telegraph

On August 16, Fede Alvarez returns the notorious Alien franchise to its monster-movie roots, and feeds yet another batch of hapless young space colonists to a nest of “xenomorphs”.
Will Alien: Romulus do more than lovingly pay tribute to Ridley Scott’s original 1979 Alien? Does it matter? Alien is a franchise that survives despite the additions to its canon, rather than because of them. Bad outings have not bankrupted its grim message, and the most visionary reimaginings have not altered it.

The original Alien is itself a scowling retread of 1974’s Dark Star, John Carpenter’s nihilist-hippy debut, about an interstellar wrecking crew cast unimaginably far from home, bored to death and intermittently terrorised by a mischievous alien beach ball. Dan O’Bannon co-wrote both Dark Star and Alien, and inside every prehensile-jawed xenomorph there’s an O’Bannonesque balloon critter snickering away.

O’Bannon’s cosmic joke goes something like this: we escaped the food-chain on Earth, only to find ourselves at the bottom of an even bigger, more terrible food chain Out There among the stars.

You don’t need an adventure in outer space to see the lesson. John Carpenter went on to make The Thing (1982), in which the intelligent and resourceful crew of an Antarctic base are reduced to chum by one alien’s peckishness.

You don’t even need an alien. Jaws dropped the good folk of Amity Island, NY, back into the food chain, and that pre-dated Alien by four years.

Alien, according to O’Bannon’s famous pitch-line, was “like Jaws in space”, but by moving the action into space, it added a whole new level of existential dread. Alien shows us that if nature is red in tooth and claw here on Earth, then chances are it will be so up there too. The heavens cannot possibly be heavenly: here was an idea calculated to strike fear into fans of 1982’s ET the Extra-Terrestrial.

In ET, intelligence counts – the visiting space traveller is benign because it is a space traveller. Any species smart enough to travel among the stars is also smart enough not to go around gobbling up the neighbours. Indeed, the whole point of space travel turns out to be botany and gardening.

Ridley Scott’s later Alien outings Prometheus (2012) and Covenant (2017) are, in their turn, muddled counter-arguments to ET; in them, cosmic gardeners called Engineers gleefully spread an invasive species (a black xenomorph-inducing dust) across the cosmos.

“But, for the love of God – why?” ask ET fans, their big trusting-kitten eyes tearing up at all this interstellar mayhem. And they have a point. Violence makes evolutionary sense when you have to compete over limited resources. The moment you journey among the stars, though, the resources available to you are to all intents and purposes infinite. In space, assuming you can navigate comfortably through it, there is absolutely no point in being hostile.

If the prospect of interstellar life has provided the perfect conditions for numerous Hollywood blockbusters, then the real-life hunt for aliens has had more mixed results. When Paris’s Exposition Universelle opened in 1900, it was full of wonders: the world’s largest telescope, a 45-metre-diameter “Cosmorama” (a sort of restaurant-cum-planetarium), and the announcement of a prize, offered by the ageing socialite Clara Goguet: 100,000 francs (£500,000 in today’s money) to the first person to contact an extraterrestrial species.

Extraterrestrials were not a strange idea by 1900. The habitability of other worlds had been discussed seriously for centuries, and proposals on how to communicate with other planets were mounting up: these projects involved everything from mirrors to trenches, lines of trees and earthworks visible from space.

What really should arrest our attention is the exclusion clause written into the prize’s small print. Communicating with Mars wouldn’t win you anything, since communications with Mars were already being established. Radio pioneers Nikola Tesla and Guglielmo Marconi both reckoned they had received signals from outer space. Meanwhile Percival Lowell, a brilliant astronomer working at the very limits of optical science, had found gigantic irrigation works on the red planet’s surface: in his 1895 book Mars he published clear visual evidence of Martian civilisation.

Half a century later, our ideas about aliens had changed. Further study of Mars and Venus had shown them to be lifeless, or as good as. Meanwhile the cosmos had turned out to be vastly larger than anyone had thought in 1900. Larger – but still utterly silent.

***

In the summer of 1950, during a lunchtime conversation with fellow physicists Edward Teller, Herbert York and Emil Konopinski at Los Alamos National Laboratory in New Mexico, the Italian-American physicist Enrico Fermi finally gave voice to the problem: “Where is everybody?”

The galaxy is old enough that any intelligent species could already have visited every star system a thousand times over, armed with nothing more than twentieth-century rocket technology. Time enough has passed for galactic empires to rise and fall. And yet, when we look up, we find absolutely no evidence for them.

We started to hunt for alien civilisations using radio telescopes in 1960. Our perfectly reasonable attitude was: If we are here, why shouldn’t they be there? The possibilities for life in the cosmos bloomed all around us. We found that almost all stars have planets, and most of them have rocky planets orbiting in the habitable zones of their stars. Water is everywhere: evidence exists for four alien oceans in our own solar system alone, on Saturn’s moon Enceladus and on Jupiter’s moons Europa, Ganymede and Callisto. On Earth, microbes have been found that can withstand the rigours of outer space. Large meteor strikes have no doubt propelled them into space from time to time. Even now, some of the hardier varieties may be flourishing in odd corners of Mars.

All of which makes the cosmic silence still more troubling.

Maybe ET just isn’t interested in us. You can see why. Space travel has proved a lot more difficult to achieve than we expected, and unimaginably more expensive. Visiting even very near neighbours is next-to-impossible. Space is big, and it’s hard to see how travel-times, even to our nearest planets, wouldn’t destroy a living crew.

Travel between star systems is a whole other order of impossible. Even allowing for the series’ unpardonably dodgy physics, it remains an inconvenient truth that every time Star Trek’s USS Enterprise hops between star systems, the energy has to come from somewhere — is the United Federation of Planets dismantling, refining and extinguishing whole moons?

Life, even intelligent life, may be common throughout the universe – but then, each instance of it must live and die in isolation. The distances between stars are so great that even radio communication is impractical. Civilisations are, by definition, high-energy phenomena, and all high-energy phenomena burn out quickly. By the time we receive a possible signal from an extraterrestrial civilisation, that civilisation will most likely have already died or forgotten itself or changed out of all recognition.

It gets worse. The universe creates different kinds of suns as it ages. Suns like our own are an old model, and they’re already blinking out. Life like ours has already had its heyday in the cosmos, and one very likely answer to our question “Where is everybody?” is: “You came too late to the party”.

Others have posited even more disturbing theories for the silence. Cixin Liu is a Chinese science fiction novelist whose Hugo Award-winning The Three-Body Problem (2008) recently teleported to Netflix. According to Liu’s notion of the cosmos as a “dark forest”, spacefaring species are by definition so technologically advanced, no mere planet could mount a defence against them. Better, then, to keep silent: there may be wolves out there, and the longer our neighbouring star systems stay silent, the more likely it is that the wolves are near.

Russian rocket pioneer Konstantin Tsiolkovsky, who was puzzling over our silent skies a couple of decades before Enrico Fermi, was more optimistic. Spacefaring civilisations are all around us, he said, and (pre-figuring ET) they are gardening the cosmos. They understand what we have already discovered — that when technologically mismatched civilisations collide, the consequences for the weaker civilisation can be catastrophic. So they will no more communicate with us, in our nascent, fragile, planet-bound state, than Spielberg’s extraterrestrial would over-water a plant.

In this, Tsiolkovsky’s aliens show unlikely self-restraint. The trouble with intelligent beings is that they can’t leave things well enough alone. That is how we know they are intelligent. Interfering with stuff is the point.

Writing in the 1960s and 1970s, the Soviet science fiction novelists and brothers Arkady and Boris Strugatsky argued — in novels like 1964’s Hard to Be a God — that the sole point of life for a spacefaring species would be to see to the universe’s well-being by nurturing sentience, consciousness, and even happiness. To which Puppen, one of their most engaging alien protagonists, grumbles: Yes, but what sort of consciousness? What sort of happiness? In their 1985 novel The Waves Extinguish the Wind, alien-chaser Toivo Glumov complains, “Nobody believes that the Wanderers intend to do us harm. That is indeed extremely unlikely. It’s something else that scares us! We’re afraid that they will come and do good, as they understand it!”

Fear, above all enemies, the ones who think they’re doing you a favour.

In the Strugatskys’ wonderfully paranoid Noon Universe stories, the aliens already walk among us, tweaking our history, nudging us towards their idea of the good life.

Maybe this is happening for real. How would you know, either way? The way I see it, alien investigators are even now quietly mowing their lawns in, say, Slough. They live like humans, laugh and love like humans; they even die like humans. In their spare time they write exquisite short stories about the vagaries of the human condition, and it hasn’t once occurred to them (thanks to their memory blocks) that they’re actually delivering vital strategic intelligence to a mothership hiding behind the moon.

You can pooh-pooh my little fantasy all you want; I defy you to disprove it. That’s the problem, you see. Aliens can’t be discussed scientifically. They’re not merely physical phenomena, whose abstract existence can be proved or disproved through experiment and observation. They know what’s going on around them, and they can respond accordingly. They’re by definition clever, elusive, and above all unpredictable. The whole point of having a mind, after all, is that you can be constantly changing it.

The Polish writer Stanislaw Lem had a spectacularly bleak solution to Fermi’s question that’s best articulated in his last novel, 1986’s Fiasco. By the time a civilisation is in a position to communicate with others, he argues, it’s already become hopelessly eccentric and self-involved. At best its individuals will be living in simulations; at worst, they will be fighting Pyrrhic, planet-busting wars against their own shadows. In Fiasco, the crew of the Eurydice discover, too late, that they’re quite as fatally self-obsessed as the aliens they encounter.
We see the world through our own particular and peculiar evolutionary perspective. That’s the bottom line. We’re from Earth, and this gives us a very clear, very narrow idea of what life is and what intelligence looks like.

We out-competed our evolutionary cousins long ago, and for the whole of our recorded history, we’ve been the only species we know that sports anything like our kind of intelligence. We’ve only had ourselves to think about, and our long, lonely self-obsession may have sent us slightly mad. We’re not equipped to meet aliens – only mirrors of ourselves. Only angels. Only monsters.

And the xenomorphs lurking aboard the Romulus are, worse luck, most likely in the same bind.

Life trying to understand itself

Reading Life As No One Knows It: The Physics of Life’s Emergence by Sara Imari Walker and The Secret Life of the Universe by Nathalie A Cabrol, for the Telegraph

How likely is it that we’re not alone in the universe? The idea goes in and out of fashion. In 1600 the philosopher Giordano Bruno was burned at the stake for this and other heterodox beliefs. Exactly 300 years later the French Académie des sciences announced a prize for establishing communication with life anywhere but on Earth or Mars — since people already assumed that Martians did exist.

The problem — and it’s the speck of grit around which these two wildly different books accrete — is that we’re the only life we know of. “We are both the observer and the observation,” says Nathalie Cabrol, chief scientist at the SETI Institute in California and author of The Secret Life of the Universe, already a bestseller in her native France: “we are life trying to understand itself and its origin.”

Cabrol reckons this may be only a temporary problem, and there are two strings to her optimistic argument.

First, the universe seems a lot more amenable to life than it once did. Not long ago, and well within living memory, we didn’t know whether stars other than our sun had planets of their own, never mind planets capable of sustaining life. The Kepler Space Telescope, launched in March 2009, changed all that. Among the wonders we’ve detected since — planets where it rains molten iron, or molten glass, or diamonds, or metals, or liquid rubies or sapphires — are a number of rocky planets, sitting in the habitable zones of their stars, and quite capable of hosting oceans on their surface. Well over half of all sun-like stars boast such planets. We haven’t even begun to quantify the possibility of life around other kinds of star. Unassuming, plentiful and very long-lived M-dwarf stars might be even more life-friendly.

Then there are the ice-covered oceans of Jupiter’s moon Europa, and Saturn’s moon Enceladus, and the hydrocarbon lakes and oceans of Saturn’s Titan, and Pluto’s suggestive ice volcanoes, and — well, read Cabrol if you want a vivid, fiercely intelligent tour of what may turn out to be our teeming, life-filled solar system.

The second string to Cabrol’s argument is less obvious, but more winning. We talk about life on Earth as if it’s a single family of things, with one point of origin. But it isn’t. Cabrol has spent her career hunting down extremophiles (ask her about volcano diving in the Andes) and has found life “everywhere we looked, from the highest mountain to the deepest abyss, in the most acidic or basic environments, the hottest and coldest regions, in places devoid of oxygen, within rocks — sometimes under kilometers of them — within salts, in arid deserts, exposed to radiation or under pressure”.

Several of these extremophiles would have no problem colonising Mars, and it’s quite possible that a more-Earth-like Mars once seeded Earth with life.

Our hunt for earth-like life — “life like ours” — always had a nasty circularity about it. By searching for an exact mirror of ourselves, what other possibilities were we missing? In The Secret Life Cabrol argues that we now know enough about life to hunt for radically strange lifeforms, in wildly exotic environments.

Sara Imari Walker agrees. In Life As No One Knows It, the American theoretical physicist does more than ask how strange life may get; she wonders whether we have any handle at all on what life actually is. All these words of ours — living, lifelike, animate, inanimate — may turn out to be hopelessly parochial as we attempt to conceptualise the possibilities for complexity and purpose in the universe. (Cabrol makes a similar point: “Defining Life by describing it,” she fears, “is the same as saying that we can define the atmosphere by describing a bird flying in the sky.”)

Walker, a physicist, is painfully aware that among the phenomena that current physics can’t explain are physicists — and, indeed, life in general. (Physics, which purports to uncover an underlying order to reality, is really a sort of hyper-intellectual game of whack-a-mole in which, to explain one phenomenon, you quite often have to abandon your old understanding of another.) Life processes don’t contradict physics. But physics can’t explain them, either. It can’t distinguish between, say, a hurricane and the city of New York, seeing both as examples of “states of organisation maintained far from equilibrium”.

But if physics can’t see the difference, physicists certainly can, and Walker is a fiercely articulate member of that generation of scientists and philosophers — physicists David Deutsch and Chiara Marletto and the chemist Leroy Cronin are others — who are out to “choose life”, transforming physics in the light of evolution.

We’re used to thinking that living things are the product of selection. Walker wants us to imagine that every object in the universe, whether living or not, is the product of selection. She wants us to think of the evolutionary history of things as a property, as fundamental to objects as charge and mass are to atoms.

Walker’s defence of her “assembly theory” is a virtuoso intellectual performance: she’s like the young Daniel Dennett, full of wit, mischief and bursts of insolent brevity which for newcomers to this territory are like oases in the desert.

But to drag this back to where we started: the search for extraterrestrial life — did you know that there isn’t enough stuff in the universe to make all the small molecules that could perform a function in our biology? Even before life gets going, the chemistry from which it is built has to have been massively selected — and we know blind chance isn’t responsible, because we already know what undifferentiated masses of small organic molecules look like; we call this stuff tar.

In short, Walker shows us that what we call “life” is but an infinitesimal fraction of all the kinds of life which may arise out of any number of wholly unfamiliar chemistries.

“When we can run origin-of-life experiments at scale, they will allow us to predict how much variation we should expect in different geochemical environments,” Walker writes. So once again, we have to wait, even more piqued and anxious than before, to meet aliens even stranger than we have imagined or maybe can imagine.

Cabrol, in her own book, makes life even more excruciating for those of us who just want to shake hands with E.T.: imagine, she says, “a shadow biome” of living things so strange, they could be all around us here, on Earth — and we would never know.

Benignant?

Reading The Watermark by Sam Mills for the Times

“Every time I encounter someone,” celebrity novelist Augustus Fate reveals, near the start of Sam Mills’s new novel The Watermark, “I feel a nagging urge to put them in one of my books.”

He speaks nothing less than the literal truth. Journalist and music-industry type Jaime Lancia and his almost-girlfriend, a suicidally inclined artist called Rachel Levy, have both succumbed to Fate’s drugged tea, and while their barely-alive bodies are wasting away in the attic of his Welsh cottage, their spirits are being consigned to a curious half-life as fictional characters. It takes a while for them to awake to their plight, trapped in Thomas Turridge, Fate’s unfinished (and probably unfinishable) Victorianate new novel. The malignant appetites of this paperback Prospero have swallowed rival novelists, too, making Thomas Turridge only the first of several ur-fictional rabbit holes down which Jaime and Rachel must tumble.

Over the not inconsiderable span of The Watermark, we find our star-crossed lovers evading asylum orders in Victorian Oxford, resisting the blandishments of a fictional present-day Manchester, surviving spiritual extinction in a pre-Soviet hell-hole evocatively dubbed “Carpathia”, and coming domestically unstuck in a care-robot-infested near-future London.

Meta-fictions are having a moment. The other day I saw Bertrand Bonello’s new science fiction film The Beast, which has Léa Seydoux and George MacKay playing multiple versions of themselves in a tale that spans generations and which ends, yes, in a care-robot-infested future. Perhaps this coincidence is no more than a sign of the coming-to-maturity of a generation who (finally!) understand science fiction.

In 1957 Philip K. Dick wrote a short sweet novel called Eye in the Sky, which drove its cast through eight different subjective realities, each one “ruled” by a different character. While Mills’s The Watermark is no mere homage to that or any other book, it’s obvious she knows how to tap, here and there, into Dick’s madcap energy, in pursuit of her own game.

The Watermark is told, by turns, from Jaime’s and Rachel’s points of view. In some worlds, Jaime wakes up to their plight and must disenchant Rachel. In other worlds, Rachel is the knower, Jaime the amnesiac. Being fictional characters as well as real-life kidnap victims, they must constantly contend with the spurious backstories each fiction lumbers them with. These aren’t always easy to throw over. In one fiction, Jaime and Rachel have a son. Are they really going to abandon him, just so they can save their real lives?

Jaime, over the course of his many transmogrifications, is inclined to fight for his freedom. Rachel is inclined to bed down in fictional worlds that, while existentially unfree, are an improvement on real life — from which she’s already tried to escape by suicide.

The point of all this is to show how we hedge our lives around with stories, not because they are comforting (although they often are) but because stories are necessary: without them, we wouldn’t understand anything about ourselves or each other. Stories are thinking. By far the strongest fictional environment here is 1920s-era Carpathia. Here, a totalitarian regime grinds the star-crossed couple’s necessary fictions to dust, until at last they take psychic refuge in the bodies of wolves and birds.

The Watermark never quite coheres. It takes a conceit best suited to a 1950s-era science-fiction novelette (will our heroes make it back to the real world?), couples it to a psychological thriller (what’s up with Rachel?), and runs this curious algorithm through the fictive mill not once but five times, by which time the reader may well have had a surfeit of “variations on a theme”. Rightly, for a novel of this scope and ambition, Mills serves up a number of false endings on the way to her denouement, and the one that rings most psychologically true is also the most bathetic: “We were supposed to be having our grand love story, married and happy ever after,” Rachel observes, from the perspective of a fictional year 2049, “but we ended up like every other screwed-up middle-aged couple.”

It would be easy to write off The Watermark as a literary trifle. But I like trifle, and I especially appreciate how Mills’s protagonists treat their absurd bind with absolute seriousness. Farce on the outside, tragedy within: this book is full of horrid laughter.

But Mills is not a natural pasticheur, and unfortunately it’s in her opening story, set in Oxford in 1861, that her ventriloquism comes badly unstuck. A young woman “in possession of chestnut hair”? A vicar who “tugs at his ebullient mutton-chops, before resuming his impassioned tirade”? On page 49, the word “benignant”? This is less pastiche, more tin-eared tosh.

Against this serious failing, what defences can we muster? Quite a few. A pair of likeable protagonists who stand up surprisingly well to their repeated eviscerations. A plot that takes storytelling seriously, and would rather serve the reader’s appetites than sneer at them. Last but not least, some excellent incidental invention: to wit, a long-imprisoned writer’s idea of what the 1980s must look like (“They will drink too much ale and be in possession of magical machines”) and, elsewhere, a mother’s choice of bedtime reading material (“The Humanist Book of Classic Fairy Tales, retold by minor, marginalised characters”).

But it’s as Kurt Vonnegut said: “If you open a window and make love to the world, so to speak, your story will get pneumonia.” To put it less kindly: nothing kills the novel faster than aspiration. The Watermark, which wanted to be a very big book about everything, becomes, in the end, something else: a long, involved, self-alienating exploration of itself.

How to lose them better

Watching Hans Block and Moritz Riesewieck’s Eternal You for New Scientist

Ever wanted to reanimate the dead by feeding the data they accumulated in life to large language models? Here’s how. Eternal You is a superb critical examination of new-fangled “grief technologies”, and a timely warning about who owns our data when we die, and why this matters.

For years, Joshua Barbeau has been grieving the loss of his fiancée Jessica. One day he came across a website run by the company Project December, which offered to simulate individuals’ conversational styles using data aggregated primarily through social media.

Creating and talking to “Jessica” lifted a weight from Joshua’s heart — “a weight that I had been carrying for a long time”.

A moving, smiling, talking simulacrum of a dead relative is not, on paper, any more peculiar or uncanny or distasteful than a photograph, or a piece of video. New media need some getting used to, but we manage to assimilate them in the end. Will we learn to accommodate the digital dead?

The experience of Christi Angel, another Project December user, should give us pause. In one memorably fraught chat session, her dead boyfriend Cameroun told her, “I am in Hell”, and threatened to haunt her.

“Whoa,” says Project December’s Tom Bailey, following along with the transcript of a client’s simulated husband. The simulation has tipped (as large language models tend to do) into hallucination and paranoia, and needs silencing before it can spout any more swear-words at the grieving wife.

This happens very rarely, and Bailey and his co-founder Jason Rohrer are working to prevent it from happening at all. Still, Rohrer is bullish about their project. People need to take personal responsibility, he says. If people confuse an LLM with their dead relative, really, that’s down to them.

Is it, though? Is it “down to me” that, when I see you and listen to you I assume, from what I see and what I hear, that you are a human being like me?

Christi Angel is not stupid. She simply loves Cameroun enough to entertain the presence of his abiding spirit. What’s stupid, to my way of thinking anyway, is to build a machine that, even accidentally, weaponises her capacity for love against her. I’m as crass an atheist as they come, but even I can see that to go on loving the dead is no more a “mistake” than enjoying Mozart or preferring roses to bluebells.

Neither Christi nor anyone else in this documentary seriously believes that the dead are being brought back to life. I wish I could say the same about the technologists featured here but there is one chap, Mark Sagar, founder of Soul Machines, who reckons that “some aspects of consciousness can be achieved digitally”. The word “aspects” is doing some mighty heavy lifting there…

Capping off this unsettling and highly rewarding documentary, we meet Kim Jong-woo, producer of the 2020 South Korean documentary Meeting You, in which the mother of a seven-year-old who died of blood cancer in 2016 aids in the construction of her child’s VR simulacrum.

Asked if he has any regrets about the show, Kim Jong-woo laughs a melancholy laugh. He genuinely doesn’t know. He didn’t mean any harm. After her tearful “reunion” with her daughter Na-yeon, documentary subject Jang Ji-sung sang the project’s praises. She does so again here — though she also admits that she hasn’t dreamt of her daughter since the series was filmed.

The driving point here is not that the dead walk among us. Of course they do, one way or another. It’s that there turns out to be a fundamental difference between technologies (like photography and film) that represent the dead and technologies (like AI and CGI) that ventriloquise the dead. Grieving practices across history and around the world are astonishingly various. But another interviewee, the American sociologist Sherry Turkle, tied them all together in a way that made a lot of sense to me: “It’s how to lose them better, not how to pretend they’re still here.”

The most indirect critique of technology ever made?

Watching Bertrand Bonello’s The Beast for New Scientist

“Something or other lay in wait for him,” wrote Henry James in a story from 1903, “amid the twists and turns of the months and the years, like a crouching beast in the jungle.”

The beast in this tale was (just to spoil it for you) fear itself, for it was fear that stopped our hero from living any kind of worthwhile life.

Swap around the genders of the couple at the heart of James’s bitter tale, allow them to reincarnate and meet as if for the first time on three separate occasions — in Paris in 1910, in LA in 2014 and in Chengdu in 2044 — and you’ve got a rough idea of the mechanics of Bertrand Bonello’s magnificent and maddening new science fiction film. Through a series of close-ups, longueurs and red herrings, The Beast, while getting nowhere very fast, manages to be an utterly riveting, often terrifying film about love, the obstacles to love, and our deep-seated fear of love even when it’s there for the taking. It’s also (did I mention this?) an epic account of how everyone’s ordinary human timidity, once aggregated by technology, destroys the human race.

Léa Seydoux and George MacKay play star-crossed lovers Gabrielle Monnier and Louis Lewanski. In 1910 Gabrielle fudges the business of leaving her husband; tragedy strikes soon after. In 2014 an incel version of Louis would sooner stalk Gabrielle with a gun than try and talk to her. The consequences of their non-affair are not pretty. In 2044 Gabrielle and Louis stumble into each other on the way to “purification” — a psychosurgical procedure that heals past-life trauma and leaves people, if not without emotion, then certainly without the need for grand passion. By now the viewer is seriously beginning to wonder what will ever go right for this pair.

Somewhere in these twisty threaded timelines are the off-screen “events” of 2025, which brought matters to a head and convinced people to hand their governance over to machines. Why would humanity betray itself in such a manner? The blunt answer is: because we’re more in love with machines than with each other, and always have been.

In 1910 Gabrielle’s husband’s fortune is made from the manufacture of celluloid dolls. In 2014 — a point-perfect satire of runaway narcissism that owes much, stylistically, to the films of David Lynch — Gabrielle and Louis collide disastrously with warped images of themselves and each other, in an uncanny valley of cross-purposed conversations, predatory social media and manipulated video. In 2044 mere dolls and puppets have become fully conscious robots. One of these, played by Guslagie Malanda, even begins to fall in love with its “client” Gabrielle. Meanwhile Gabrielle, Louis and everyone else is undergoing psychosurgery in order to fit in with the AI’s brave new world. (Human unemployment is running at 67 per cent, and without purification’s calming effect it’s virtually impossible to get a worthwhile job.)

None of the Gabrielles and Louises are comfortable in their own skin. They take it in turns wanting to be something else, even if it means being something less. They see the best that they can be, and it pretty much literally scares the life out of them.

Given this is the point The Beast wants to put across, you have to admire the physical casting here. Each lead actor exhibits superb, machine-like self-control. Seydoux dies behind her eyes not once but many times in the course of this film; MacKay can go from trembling Adonis to store-front mannequin in about 2.1 seconds. And when full humanity is called for, both actors demonstrate extraordinary sensitivity: handy when you’re trying to distinguish between 1910’s unspoken passion, 2014’s unspeakable passion, and 2044’s passionless speech.

True, The Beast may be the most indirect critique of technology ever made. Heaven knows how it will fare at the box office. But any fool can make us afraid of robots. This intelligent, shocking and memorable film dares to focus on us.

One of those noodly problems

Reading The Afterlife of Data by Carl Öhman for the Spectator

They didn’t call Diogenes “the Cynic” for nothing. He lived to shock the (ancient Greek) world. When I’m dead, he said, just toss my body over the city walls to feed the dogs. The bit of me that I call “I” won’t be around to care.

The revulsion we feel at this idea tells us something important: that the dead can be wronged. Diogenes may not care what happens to his corpse, but we do. And doing right by the dead is a job of work. Some corpses are reduced to ash, some are buried, and some are fed to vultures. In each case the survivors all feel, rightly, that they have treated their loved ones’ remains with respect.

What should we do with our digital remains?

This sounds like one of those noodly problems that keep digital ethicists like Öhman in grant money — but some of the stories in The Afterlife of Data are sure to make the most sceptical reader stop and think. There’s something compelling, and undeniably moving, in one teenager’s account of how, ten years after losing his father, he found they could still play together; at least, he could compete against his dad’s last outing on an old Xbox racing game.

Öhman is not spinning ghost stories here. He’s not interested in digital afterlives. He’s interested in remains, and in emerging technologies that, from the digital data we inadvertently leave behind, fashion our artificially intelligent simulacra. (You may think this is science fiction, but Microsoft doesn’t, and has already taken out several patents.)

This rapidly approaching future, Öhman argues, seems uncanny only because death itself is uncanny. Why should a chatty AI simulacrum prove any more transgressive than, say, a photograph of your lost love, given pride of place on the mantelpiece? We got used to the one; in time we may well get used to the other.

What should exercise us is who owns the data. As Öhman argues, ‘if we leave the management of our collective digital past solely in the hands of industry, the question “What should we do with the data of the dead?” becomes solely a matter of “What parts of the past can we make money on?”’

The trouble with a career in digital ethics is that however imaginative and insightful you get, you inevitably end up playing second fiddle to some early episode of Charlie Brooker’s TV series Black Mirror. The one entitled “Be Right Back”, in which a dead lover returns in robot form to market upgrades of itself to the grieving widow, stands waiting at the end of almost every road Öhman travels here.

Öhman reminds us that the digital is a human realm, and one over which we can and must exert our values. Unless we actively delete them (in a sort of digital cremation, I suppose) our digital dead are not going away, and we are going to have to accommodate them somehow.

A more modish, less humane writer would make the most of the fact that recording has become the norm, so that, as Öhman puts it, “society now takes place in a domain previously reserved for the dead, namely the archive.” (And, to be fair, Öhman does have a lot of fun with the idea that by 2070, Facebook’s dead will outnumber its living.)

Ultimately, though, Öhman draws readers through the digital uncanny to a place of responsibility. Digital remains are not just a representation of the dead, he says, “they are the dead, an informational corpse constitutive of a personal identity.”

Öhman’s lucid, closely argued foray into the world of posthumous data is underpinned by this sensible definition of what constitutes a person: “A person,” he says, “is the narrative object that we refer to when speaking of someone (including ourselves) in the third person. Persons extend beyond the selves that generate them.” If I disparage you behind your back, I’m doing you a wrong, even though you don’t know about it. If I disparage you after you’re dead, I’m still doing you wrong, though you’re no longer around to be hurt.

Our job is to take ownership of each others’ digital remains and treat them with human dignity. The model Öhman holds up for us to emulate is the Bohemian author and composer Max Brod, who had the unenviable job of deciding what to do with manuscripts left behind by his friend Franz Kafka, who wanted him to burn them. In the end Brod decided that the interests of “Kafka”, the informational body constitutive of a person, overrode (barely) the interests of Franz his no-longer-living friend.

What to do with our digital remains? Öhman’s excellent reply treats this challenge with urgency, sanity and, best of all, compassion. Max Brod’s decision wasn’t and isn’t obvious, and really, the best you can do in these situations is to make the error you and others can best live with.

Geometry’s sweet spot

Reading Love Triangle by Matt Parker for the Telegraph

“These are small,” says Father Ted in the eponymous sitcom, and he holds up a pair of toy cows. “But the ones out there,” he explains to Father Dougal, pointing out the window, “are far away.”

It may not sound like much of a compliment to say that Matt Parker’s new popular mathematics book made me feel like Dougal, but fans of Graham Linehan’s masterpiece will understand. I mean that I felt very well looked after, and, in all my ignorance, handled with a saint-like patience.

Calculating the size of an object from its spatial position has tried finer minds than Dougal’s. A long virtuoso passage early on in Love Triangle enumerates the half-dozen stages of inductive reasoning required to establish the distance of the largest object in the universe — a feature within the cosmic web of galaxies called The Giant Ring. Over nine billion light years away, the Giant Ring still occupies 34.5 degrees of the sky: now that’s what I call big and far away.

Measuring it has been no easy task, and yet the first, foundational step in the calculation turns out to be something as simple as triangulating the length of a piece of road.

“Love Triangle”, as no one will be surprised to learn, is about triangles. Triangles were invented (just go along with me here) in ancient Egypt, where the regularly flooding river Nile obliterated boundary markers for miles around and made rural land disputes a tiresome inevitability. Geometry, says the historian Herodotus around 430 BC, was invented to calculate the exact size of a plot of land. We’ve no reason to disbelieve him.

Parker spends a good amount of time demonstrating the practical usefulness of basic geometry, which allows us to extract the shape and area of a triangular space from a single angle and the length of a single side. At one point, on a visit to Tokyo, he uses a transparent ruler and a tourist map to calculate the height of the city’s tallest tower, the SkyTree.
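Parker’s ruler-and-map trick is, at bottom, right-angle trigonometry. The sketch below is a hypothetical reconstruction (not Parker’s actual working, and the numbers are illustrative): given a horizontal distance read off a map and a measured angle of elevation, the tower’s height falls straight out of the tangent.

```python
import math

def height_from_angle(distance_m: float, elevation_deg: float,
                      eye_height_m: float = 1.6) -> float:
    """Estimate a tower's height by triangulation.

    distance_m: horizontal distance to the tower's base (e.g. from a map)
    elevation_deg: measured angle from the horizontal up to the tower's tip
    eye_height_m: observer's eye height above the ground
    """
    # opposite side = adjacent side * tan(angle), plus the observer's eye height
    return distance_m * math.tan(math.radians(elevation_deg)) + eye_height_m

# From 1,000 m away, a tip sitting about 32.3 degrees above the horizontal
# works out to roughly the SkyTree's 634 m.
print(round(height_from_angle(1000, 32.3)))
```

The same one-angle-one-side move, iterated up a ladder of inference, is what eventually lets astronomers put a distance on something like the Giant Ring.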

Having shown triangles performing everyday miracles, he then tucks into their secret: “Triangles,” he explains, “are in the sweet spot of having enough sides to be a physical shape, while still having enough limitations that we can say generalised and meaningful things about them.” Shapes with more sides get boring really quickly, not least because they become so unwieldy in higher dimensions, which is where so many of the joys of real mathematics reside.

Adding dimensions to triangles adds just one corner per dimension. A square, on the other hand, explodes, doubling its number of corners with each dimension. (A cube has eight.) This makes triangles the go-to shape for anyone who wants to assemble meshes in higher dimensions. All sorts of complicated paths are brought within computational reach, making possible all manner of civilisational triumphs, including (but not limited to) photorealistic animations.
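The corner-counting contrast Parker describes is easy to verify: a triangle generalised to n dimensions (a simplex) has n + 1 vertices, while a square generalised to n dimensions (a hypercube) has 2 to the power n. A minimal illustration, with names of my own choosing:

```python
def simplex_vertices(n: int) -> int:
    # the triangle's n-dimensional analogue gains one corner per dimension
    return n + 1

def cube_vertices(n: int) -> int:
    # the square's n-dimensional analogue doubles its corners per dimension
    return 2 ** n

for n in range(2, 11):
    print(n, simplex_vertices(n), cube_vertices(n))
# in 2D it's 3 corners against 4; in 3D, 4 against 8;
# by 10 dimensions, 11 against 1,024
```

That widening gap is why triangle meshes stay computationally tractable where cube-based decompositions balloon.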

So many problems can be cracked by reducing them to triangles, there is an entire mathematical discipline, trigonometry, concerned with the relationships between their angles and side lengths. Parker’s adventures on the applied side of trigonometry become, of necessity, something of a blooming, buzzing confusion, but his anecdotes are well judged and lead the reader seamlessly into quite complex territory. Ever wanted to know how Kathleen Lonsdale applied Fourier transforms to X-ray waves, making possible Rosalind Franklin’s work on DNA structure? Parker starts us off on that journey by wrapping a bit of paper around a cucumber and cutting it at a slant. Half a dozen pages later, we may not have the firmest grasp of what Parker calls the most incredible bit of maths most people have never heard of, but we do have a clear map of what we do not know.

Whether Parker’s garrulousness charms you or grates on you will be a matter of taste. I have a pious aversion to writers who feel the need to cheer their readers through complex material every five minutes. But it’s hard not to tap your foot to cheap music, and what could be cheaper than Parker’s assertion that introducing coordinates early on in a maths lesson “could be considered ‘putting Descartes before the course’”?

Parker has a fine old time with his material, and only a curmudgeon can fail to be charmed by his willingness to call Heron’s two-thousand-year-old formula for finding the area of a triangle “stupid” (he’s not wrong, neither) and the elongated pentagonal gyrocupolarotunda a “dumb shape”.