What about the unknown knowns?

Reading Nate Silver’s On the Edge and The Art of Uncertainty by David Spiegelhalter for the Spectator

The Italian actuary Bruno de Finetti, writing in 1931, was explicit: “Probability,” he wrote, “does not exist.”

Probability, it’s true, is simply the measure of an observer’s uncertainty, and in The Art of Uncertainty British statistician David Spiegelhalter explains how his extraordinary and much-derided science has evolved to the point where it is even able to say useful things about why things have turned out the way they have, based purely on present evidence. Spiegelhalter was a member of the Statistical Expert Group of the 2018 UK Infected Blood Inquiry, and you know his book’s a winner the moment he tells you that between 650 and 3,320 people nationwide died from tainted transfusions. By this late point, along with the pity and the horror, you have a pretty good sense of the labour and ingenuity that went into those peculiarly specific, peculiarly wide-spread numbers.

At the heart of Spiegelhalter’s maze, of course, squats Donald Rumsfeld, once pilloried for his convoluted syntax at a 2002 Department of Defense news briefing, and now immortalised for what came out of it: the best ever description of what it’s like to act under conditions of uncertainty. Rumsfeld’s “unknown unknowns” weren’t the last word, however; Slovenian philosopher Slavoj Žižek (it had to be Žižek) pointed out that there are also “unknown knowns” — “all the unconscious beliefs and prejudices that determine how we perceive reality.”

In statistics, something called Cromwell’s Rule cautions us never to bed absolute certainties (probabilities of 0 or 1) into our models. Still, “unknown knowns” fly easily under the radar, usually in the form of natural language. Spiegelhalter tells how, in 1961, John F Kennedy authorised the invasion of the Bay of Pigs, quite unaware of the minatory statistics underpinning the phrase “fair chance” in an intelligence briefing.
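To see why the rule matters, here is a quick gloss of my own (not a formula from either book): run a degenerate prior through Bayes’ theorem and no amount of evidence, however strong, can ever shift it.

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}, \qquad \text{so if } P(H)=0 \text{ (or } 1\text{), then } P(H \mid E)=0 \text{ (or } 1\text{) whatever the evidence } E.$$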

From this, you could draw a questionable moral: that the more we quantify the world, the better our decisions will be. Nate Silver — poker player, political pundit and author of 2012’s The Signal and the Noise — finds much to value in this idea. On the Edge, though, is more about the unforeseen consequences that follow.

There is a sprawling social ecosystem out there that Silver dubs “the River”, which includes “everyone from low-stakes poker pros just trying to grind out a living to crypto kings and adventure-capital billionaires.” On the Edge is, among many other things, a cracking piece of popular anthropology.

Riverians accept that it is very hard to be certain about anything; they abandon certainty for games of chance; and they end up treating everything as a market to be played.

Remember those chippy, cheeky chancers immortalised in films like 21 (2008: MIT’s Blackjack Team takes on Las Vegas) and Moneyball (2011: a young economist up-ends baseball)?

More than a decade has passed, and they’re not buccaneers any more. Today, says Silver, “the Riverian mindset is coming from inside the house.”

You don’t need to be a David Spiegelhalter to be a Riverian. All you need is the willingness to take bets on very long odds.

Professional gamblers learn when and how to do this, and this is why that subset of gamblers called Silicon Valley venture capitalists are willing to back wilful contrarians like Elon Musk (on a good day) and (on a bad day) Ponzi-scheming crypto-crooks like Sam Bankman-Fried.

Success as a Riverian isn’t guaranteed. As Silver points out, “a lot of the people who play poker for a living would be better off — at least financially — doing something else.” Then again, those who make it in the VC game expect to double their money every four years. And those who find they’ve backed a Google or a SpaceX can find themselves living in a very odd world indeed.

Recently the billionaire set has been taking an interest in, and investing in, “effective altruism”, a hyper-utilitarian dish cooked up by Oxford philosopher Will MacAskill. “EA” promises to multiply the impact of acts of charity by studying their long-term effectiveness — an approach that naturally appeals to minds focused on quantification. Silver describes the current state of the movement as “stuck in the uncanny valley between being abstractly principled and ruthlessly pragmatic, with the sense that you can kind of make it up as you go along”. Who here didn’t see that one coming? Most of the original EA set now spend their time agonising over the apocalyptic potential of artificial intelligence.

The trick to Riverian thinking is to decouple things, in order to measure their value. Rather than say, “The Chick-fil-A CEO’s views on gay marriage have put me off my lunch,” you say, “the CEO’s views are off-putting, but this is a damn fine sandwich — I’ll invest.”

That such pragmatism might occasionally ding your reputation, we’ll take as read. But what happens when you do the opposite, glomming context after context onto every phenomenon in pursuit of some higher truth? Soon everything becomes morally equivalent to everything else and thinking becomes impossible.

Silver mentions a December 2023 congressional hearing in which the tone-deaf presidents of Harvard, Penn and MIT, in their sophomoric efforts to be right about all things all at once all the time, managed to argue their way into anti-Semitism. (It’s on YouTube if you haven’t seen it already. The only thing I can compare it to is how The Fast Show’s unlucky Alf used to totter invariably toward the street’s only open manhole.) No wonder that the left-leaning, non-Riverian establishment in politics and education is becoming, in Silver’s words, “a small island threatened by a rising tide of disapproval.”

We’d be foolish in the extreme to throw in our lot with the Riverians, though: people whose economic model reduces to betting long odds on the hobby-horses of contrarian asshats and never minding what gets broken in the process.

If we want a fairer, more equally apportioned world, these books should convince us to spend less time worrying about what people are thinking, and more time attending to how they are thinking.

We cannot afford to be ridden by unknown knowns.

 

“Fears about technology are fears about capitalism”

Reading How AI Will Change Your Life by Patrick Dixon and AI Snake Oil by Arvind Narayanan and Sayash Kapoor, for the Telegraph

According to Patrick Dixon, Arvind Narayanan and Sayash Kapoor, artificial intelligence will not bring about the end of the world. It isn’t even going to bring about the end of human civilisation. It’ll struggle even to take over our jobs. (If anything, signs point to a decrease in unemployment.)

Am I alone in feeling cheated here? In 2014, Stephen Hawking said we were doomed. A decade later, Elon Musk is saying much the same. Last year, Musk and other CEOs and scientists signed an open letter from the Future of Life Institute, demanding a pause on giant AI experiments.

But why listen to fiery warnings from the tech industry? Of 5,400 large IT projects (for instance, creating a large data warehouse for a bank) recorded by 2012 in a rolling database maintained by McKinsey, nearly half went over budget, and over half under-delivered. In How AI Will Change Your Life, author and business consultant Dixon remarks, “Such consistent failures on such a massive scale would never be tolerated in any other area of business.” Narayanan and Kapoor, both computer scientists, say that academics in this field are no better. “We probably shouldn’t care too much about what AI experts think about artificial general intelligence,” they write. “AI researchers have often spectacularly underestimated the difficulty of achieving AI milestones.”

These two very different books want you to see AI from inside the business. Dixon gives us plenty to think about: AI’s role in surveillance; AI’s role in intellectual freedom and copyright; AI’s role in warfare; AI’s role in human obsolescence – his exhaustive list runs to over two dozen chapters. Each of these debates matters, but we would be wrong to think that they are driven by, or are even about, technology at all. Again and again, they are issues of money: about how production gravitates towards automation to save labour costs; or about how AI tools are more often than not used to achieve imaginary efficiencies at the expense of the poor and the vulnerable. Why go to the trouble of policing poor neighbourhoods if the AI can simply round up the usual suspects? As the science-fiction writer Ted Chiang summed up in June 2023, “Fears about technology are fears about capitalism.”

As both books explain, there are three main flavours of artificial intelligence. Large language models power chatbots, of which GPT-4, Gemini and the like will be most familiar to readers. They are bullshitters, in the sense that they’re trained to produce plausible text, not accurate information, and so fall under philosopher Harry Frankfurt’s definition of bullshit as speech that is intended to persuade without regard for the truth. At the moment they work quite well, but wait a year or two: as the internet fills with AI-generated content, chatbots and their ilk will begin to regurgitate their own pabulum, and the human-facing internet will decouple from truth entirely.

Second, there are AI systems whose superior pattern-matching spots otherwise invisible correlations in large datasets. This ability is handy, going on miraculous, if you’re tackling significant, human problems. According to Dixon, for example, Klick Labs in Canada has developed a test that can diagnose Type 2 diabetes with over 85 per cent accuracy using just a few seconds of the patient’s voice. Such systems have proved less helpful, however, in Chicago. Narayanan and Kapoor report how, lured by promises of instant alerts to gun violence, the city poured nearly 49 million dollars into ShotSpotter, a system that has been questioned for its effectiveness after police fatally shot a 13-year-old boy in 2021.

Last of the three types is predictive AI: the least discussed, least successful, and – in the hands of the authors of AI Snake Oil – by some way the most interesting. So far, we’ve encountered problems with AI’s proper working that are fixable, at least in principle. With bigger, better datasets – this is the promise – we can train AI to do better. Predictive AI systems are different. These are the ones that promise to find you the best new hires, flag students for dismissal before they start to flounder, and identify criminals before they commit criminal acts.

They won’t, however, because they can’t. Drawing broad conclusions about general populations is often the stuff of social science, and social science datasets tend to be small. But were you to have a big dataset about a group of people, would AI’s ability to say things about the group let it predict the behaviour of one of its individuals? The short answer is no. Individuals are chaotic in the same way as earthquakes are. It doesn’t matter how much you know about earthquakes; the one thing you’ll never know is where and when the next one will hit.

How AI Will Change Your Life is not so much a book as a digest of bullet points for a PowerPoint presentation. Business types will enjoy Dixon’s meticulous lists and his willingness to argue both sides against the middle. If you need to acquire instant AI mastery in time for your next board meeting, Dixon’s your man. Being a dilettante, I will stick with Narayanan and Kapoor, if only for this one-liner, which neatly captures our confused enthusiasm for little black boxes that promise the world. “It is,” they say, “as if everyone in the world has been given the equivalent of a free buzzsaw.”

 

 

Doing an Elisabeth

Coralie Fargeat’s The Substance inspired this Telegraph article about copies and clones

Hollywood has-been Elisabeth Sparkle didn’t look where she was going, and got badly shaken about in a traffic accident. Now she’s in the emergency room, and an unfeasibly handsome young male nurse is running his fingers down her spine. Nothing’s wrong. On the contrary: Elisabeth (played by Demi Moore) is, she’s told, “a perfect candidate”.

The next day she gets a box through the post. Inside is a kit that will enable her to duplicate herself. The instructions couldn’t be clearer. Even when fully separated, Elisabeth and the younger, better version of herself who’s just spilled amniotically out of her back (Sue, played by Margaret Qualley) are one. While one of them gets to play in the sun for a week, the other must lie in semi-coma, feeding off an intravenous drip. Each week, they swap roles.

Writer-director Coralie Fargeat’s script for The Substance is one of those super-lucid cinematic fun-rides that can’t help but put you in mind of other, admittedly rather better movies. In Joe Mankiewicz’s All About Eve (1950), an actress’s personal assistant plots to steal her career. In John Frankenheimer’s Seconds (1966), Rock Hudson gets his youth back and quickly learns to hate it. In David Cronenberg’s The Fly (1986) biologist Seth Brundle’s experiment in gene splicing is a none-too-subtle metaphor for the ageing process.

Recently, I ran into a biotechnology company called StoreGene. They sent me a blood sample kit in a little box and promised me a lifetime of personalised medicine, so long as I let them read my entire genetic code.

I’m older than Elisabeth Sparkle (sacked from her daytime TV fitness show on her 50th birthday) and a sight less fit than Demi Moore, and so I seized StoreGene’s offer with both palsied, liver-spotted hands.

Now, somewhere in what we call the Cloud (some anonymous data centre outside Chicago, more like) I have a double. Unlike Elisabeth’s Sue, though, my double won’t resent the fact that I am using him as a means. He is not going to flinch, or feel violated in any way, as his virtual self is put through trial after trial.

Every year, more than a million medical research papers are published. It’s impossible to know what this deluge of new discovery means to me personally – but now my GP can find out, at the push of a button, what it means for my genetic data-double.

Should I take this medicine, or that? Should I take more of it, or less of it? What treatment will work; what won’t? No more uncertainty for me: now I am guaranteed to receive treatments that are tailored to me, forever. I’ve just landed, bang, in the middle of a new era of personalised medicine.

Now that there’s a digital clone of me floating around, I have even less reason to want to “do an Elisabeth” and make a flesh-and-blood copy of myself. This will come as a relief to anyone who’s read Kazuo Ishiguro’s 2005 novel Never Let Me Go, and can’t shake off the horror occasioned by that school assembly: “If you’re going to have decent lives,” Miss Lucy tells the children in her care, “then you’ve got to know and know properly… You’ll become adults, then before you’re old, before you’re even middle-aged, you’ll start to donate your vital organs.”

Might we one day farm clones of ourselves to provide our ageing, misused bodies with spare parts? This is by far the best of the straw-man arguments that have been mounted over the years against the idea of human cloning. (Most of the others involve Hitler.)

It at least focuses our minds on a key ethical question: are we ever entitled to use other people as means to an end? But it’s still a straw-man argument, not least because we’re a long way into figuring out how to grow our spare organs in other animals. No ethical worries there! (though the pigs may disagree).

And while such xenotransplantation and other technologies advance by leaps and bounds, reproductive cloning languishes – a rather baroque solution to biomedical problems solved more easily by other means.

Famously, in 1996 Ian Wilmut and colleagues at the Roslin Institute in Scotland successfully cloned Dolly the sheep from the udder cells of a ewe. Dolly was their 277th attempt. She died young. No-one can really say whether this had anything to do with her being a clone, since her creation conspicuously did not open the floodgates to further experimentation. Two decades went by before the first primates were successfully cloned – two crab-eating macaques named Zhong Zhong and Hua Hua. These days it’s possible to clone your pet (Barbra Streisand famously cloned her dog), but my strong advice is, don’t bother: around 96 per cent of all cloning attempts end in failure.

Science-fiction stories, from Aldous Huxley’s Brave New World (1932) to Andrew Niccol’s Gattaca (1997), have conjured up hyper-utilitarian nightmares in which manipulations of the human genome work all too well. This is what made David Cronenberg’s early body horror so compelling and, in retrospect, so visionary: in films such as 1977’s Rabid (a biker develops a blood-sucking orifice) and 1979’s The Brood (ectopic pregnancies manifest a divorcée’s rage), the body doesn’t give a stuff about anyone’s PhD; it has its own ideas about what it wants to be.

And so it has proved. Not only does cloning rarely succeed; the clone that manages to survive to term will most likely be deformed, or die of cancer, or keel over for some other more or less mysterious reason. After cloning Dolly the sheep, Wilmut and his team tried to clone another lamb; it hyperventilated so much it kept passing out.

***

It is conceivable, I suppose, that hundreds of years from now, alien intelligences will dust off StoreGene’s recording of my genome and, in a fit of misplaced enthusiasm, set about growing a copy of me in a modishly lit plexiglass tank. Much good may it do them: the clone they’re growing will bear only a passing physical resemblance to me, and he and I will share only the very broadest psychological and emotional similarity. Genes make a big contribution to the development process, but they’re not in overall charge of it. Even identical twins, nature’s own clones, are easy to tell apart, especially when they start speaking.

Call me naive, but I’m not too worried about vast and cool and unsympathetic intellects, alien or otherwise, getting hold of my genetic data. It’s the thought of what all my other data may be up to that keeps me up at night.

Swedish political scientist Carl Öhman’s The Afterlife of Data, published earlier this year, recounts the experiences of a young man who, having lost his father ten years previously, finds that they can still compete against each other on an old Xbox racing game. That is, he can play against his father’s saved games, again and again. (Of course he’s now living in dread of the day the Xbox eventually breaks and his dad dies a second time.)

The digital world has been part of our lives for most of our lives, if not all of them. We are each of us mirrored there. And there’s this in common between exploring digital technology and exploring the Moon: no wind will come along to blow away our footprints.

Öhman’s book is mostly an exploration of the unstable but fast-growing sector of “grieving technologies”, which create, from our digital footprints, chatbots that our grieving loved ones can interrogate on those long lonely winter evenings. Rather more uncanny, to my mind, are those chatbots of us that stalk the internet while we’re still alive, causing trouble on our behalf. How long will it be before my wife starts ringing me up out of the blue to ask me the PIN for our joint debit card?

Answer: in no time at all, at least according to a note on “human machine teaming” published six (six!) years ago by the Ministry of Defence. Its prediction that “forgeries are likely to constitute a large proportion of online content” was stuffily phrased, but accurate enough: in 2023 nearly half of all internet traffic came from bots.

At what point does a picture of yourself acquire its own reality? At what point does that picture’s existence start ruining your life? Oscar Wilde took a stab at what in 1891 must have seemed a very noodly question with his novel The Picture of Dorian Gray. 130-odd years later, Sarah Snook’s one-woman take on the story at London’s Haymarket Theatre employed digital beauty filters and multiple screens in what felt less like an updating of Wilde’s story, more its apocalyptic restatement: all lives end, and a life wholly given over to the pursuit of beauty and pleasure is not going to end well.

In 2021, users of TikTok noticed that the platform’s default front-facing camera was slimming down their faces, smoothing their skin, whitening their teeth and altering the size of their eyes and noses. (You couldn’t disable this feature, either.) When you play with these apps, you begin to appreciate their uncanny pull. I remember the first time TikTok’s “Bold Glamour” filter, released last year, mapped itself over my image with an absolute seamlessness. Quite simply, a better me appeared in the phone’s digital mirror. When I gurned, it gurned. When I laughed, it laughed. It had me fixated for days and, for heaven’s sake, I’m a middle-aged bloke. Girls, you’re the target audience here. If you want to know what your better selves are up to, all you have to do is look into your smartphone.

Better yet, head to a clinic near you (while there are still appointments available), get your fill of fillers, and while your face is swelling like an Aardman Animations outtake, listen in as practitioners of variable experience and capacity talk glibly of “Zoom-face dysphoria”.

That this self-transfiguring trend has disfigured a generation is not really the worry. The Kardashian visage (tan by Baywatch, brows and eye shape by Bollywood, lips from Atlanta, cheeks from Pocahontas, nose from Gwyneth Paltrow) is a mostly non-surgical artefact – a hyaluronic-acid trip that will melt away in six months to a year, once people come to their senses. What really matters is that among school-age girls, rates of depression and self-harm are through the roof. I had a whale of a time at that screening of The Substance. But the terrifying reality is that the film isn’t for me; it’s for them.

Malleable meat

Watching Carey Born’s Cyborg: A Documentary for New Scientist

Neil Harbisson grew up in Barcelona and studied music composition at Dartington College of Arts in the UK. He lives with achromatopsia: he is unable to perceive colour of any kind. Not one to ignore a challenge, in 2003 Harbisson recruited product designer Adam Montandon to build him a head-mounted rig that would turn colours into musical notes that he could listen to through earphones. Now in his forties, Harbisson has evolved. The camera on its pencil-thin stalk and the sound generator are permanently fused to the back of his skull: he hears the colours around him through bone conduction.

If “hears” is quite the word: Watching Carey Born’s Cyborg: A Documentary, we occasionally catch Harbisson thinking seriously and intelligently about how the senses operate. He doesn’t hear colour so much as see it. His unconventional colour organ is startling to outsiders — what is that chap doing with an antenna springing out the back of his head? But Harbisson’s brain is long used to the antenna’s input, and treats it like any other visual information. Harbisson says he knew his experiment was a success when he started to dream in colour.

Body modification in art has a long history, albeit a rather vexed one. I can remember the Australian performance artist Stelarc hanging from flesh hooks, pronouncing on the obsolescence of the body. (My date did not go well.) Stelarc doesn’t do that sort of thing any more. Next year he celebrates his eightieth birthday. You can declare victory over the flesh as much as you like: time gets the last laugh.

The way Harbisson has hacked his own perceptions leaves him with very little to do but talk about his experiences. He can’t really demonstrate them the way his partner Moon Ribas can. The dancer-choreographer has had an internet-enabled vibrating doo-dad fitted in her left arm which, when she’s dancing, tells her when and how vigorously to respond to earthquakes.

Harbisson meanwhile is stuck in radio studios and behind lecterns explaining what it’s like to have a friend send the colours of an Australian sunset to the back of his skull — to which a radio talk-show guest objects: Wouldn’t receiving a postcard of an Australian sunset amount to the same thing?

Born’s uncritical approach to her subject never really digs into this perfectly sensible question — and this is a pity. Harbisson says he has weathered months-long headaches and episodes of depression in an effort to extend his senses, but all outsiders ever care about is the tech, and what it can do.

One recent wheeze from Harbisson and his collaborators is a headband that tells you the time by heating spots on your skull. Obviously a watch offers a more accurate measure. Less obviously, the headband is supposed to create a new sense in the wearer: an embodied, pre-conscious awareness of solar-planetary motion. The technology is fun, but what really matters is what new senses may be out there for us to enjoy.

I find it slightly irksome to be having to explain Harbisson’s work, since Harbisson hardly bothers. The lecture, the talk-show, the panel and the photoshoot are his gallery and stage, and for over twenty years now, the man with the stalk coming out of his head has been giving his audience what they have come to expect: a ringing endorsement of transhumanism, the philosophy that would have us treat our bodies as so much malleable meat. In 2010 he co-founded the Cyborg Foundation to defend cyborg rights. In 2017, he co-founded the Transpecies Society to give a voice to people with non-human identities. It’s all very idealistic and also quite endearingly old-fashioned in its otherworldliness — as though the plasticity or otherwise of the body were not already a burning social issue, and staple ordnance in today’s culture wars.

I wish Born had gone to the bother of challenging her subject. Penetrate their shell of schooled narcissism and you occasionally find that conceptual artists have something to say.

Not even wrong

Reading Yuval Noah Harari’s Nexus for the Telegraph

In his memoirs, the German-British physicist Rudolf Peierls recalls the sighing response his colleague Wolfgang Pauli once gave to a scientific paper: “It is not even wrong.”

Some ideas are so incomplete, or so vague, that they can’t even be judged. Yuval Noah Harari’s books are notoriously full of such ideas. But then, given what Harari is trying to do, this may not matter very much.

Take this latest offering: a “brief history” that still finds room for viruses and Neanderthals, the Talmud and Elon Musk’s Neuralink and the Thirty Years’ War. Has Harari found a single rubric under which to combine all human wisdom and not a little of its folly? Many a pub bore has entertained the same conceit. And Harari is tireless: “To appreciate the political ramifications of the mind–body problem,” Harari writes, “let’s briefly revisit the history of Christianity.” Harari is a writer who’s never off-topic but only because his topic is everything.

Root your criticism of Harari in this, and you’ve missed the point, which is that he’s writing this way on purpose. His single goal is to give you a taste of the links between things, without worrying too much about the things themselves. Any reader old enough to remember James Burke’s idiosyncratic BBC series Connections will recognise the formula, and know how much sheer joy and exhilaration it can bring to an audience that isn’t otherwise spending every waking hour grazing the “smart thinking” shelf at Waterstone’s.

Well-read people don’t need Harari.

Nexus’s argument goes like this: civilisations are (among other things) information networks. Totalitarian states centralise their information, which grows stale as a consequence. Democracies distribute their information, with checks and balances to keep the information fresh.

Harari’s key point here is that in neither case does the information have to be true. A great deal of it is not true. At best it’s intersubjectively true (Santa Claus, human rights and money are real by consensus: they have no basis in the material world). Quite a lot of our information is fiction, and a fraction of that fiction is downright malicious falsehood.

It doesn’t matter to the network, which uses that information more or less agnostically, to establish order. Nor is this necessarily a problem, though an order based on truth is likely to be a lot more resilient and pleasant to live under than an order based on cultish blather.

This typology gives Harari the chance to wax lyrical over various social and cultural arrangements, historical and contemporary. Marxism and populism both get short shrift, in passages that are memorable, pithy, and, dare I say it, wise.

In the second half of the book, Harari invites us to stare like rabbits into the lights of the on-coming AI juggernaut. Artificial intelligence changes everything, Harari says, because just as humans create intersubjective realities, computers create intercomputer realities. Pokémon Go is an example of an intercomputer reality. So — rather more concerningly — are the money markets.

Humans disagree with each other all the time, and we’ve had millennia to practice thinking our way into other heads. The problem is that computers don’t have any heads. Their intelligence is quite unlike our own. We don’t know what They’re thinking because, by any reasonable measure, “thinking” does not describe what They are doing.

Even this might not be a problem, if only They would stop pretending to be human. Harari cites a 2022 study showing that the 5 per cent of Twitter users that are bots are generating between 20 and 30 per cent of the site’s content.

Harari quotes Daniel Dennett’s blindingly obvious point that, in a society where information is the new currency, we should ban fake humans the way we once banned fake coins.

And that is that, aside from the shouting — and there’s a fair bit of that in the last pages, futurology being a sinecure for people who are not even wrong.

Harari’s iconoclastic intellectual reputation is wholly undeserved, not because he does a bad job, but because he does such a superb job of being the opposite of an iconoclast. Harari sticks the world together in a gleaming shape that inspires and excites. If it holds only for as long as it takes to read the book, still, dazzled readers should feel themselves well served.

Just which bits of the world feel human to you?

Reading Animals, Robots, Gods by Webb Keane for New Scientist

No society we know of ever lived without morals. Roughly the same ethical ideas arise, again and again, in the most diverse societies. Where do these ideas of right and wrong come from? Might there be one ideal way to live?

Michigan-based anthropologist Webb Keane argues that morality does not arise from universal principles, but from the human imagination. Moral ideas are sparked in the friction between objectivity, when we think about the world as if it were a story, and subjectivity, in which we’re in some sort of conversation with the world.

A classic trolley problem elucidates Keane’s point. If you saw an out-of-control trolley (tram car) hurtling towards five people, and could pull a switch that sent the trolley down a different track, killing only one innocent bystander, you would more than likely choose to pull the lever. If, on the other hand, you could save five people by pushing an innocent bystander into the path of the trolley (using him, in Keane’s delicious phrase, “as an ad hoc trolley brake”), you’d more than likely choose not to interfere. The difference in your reaction turns on whether you are looking at the situation objectively, at some mechanical remove, or whether you subjectively imagine yourself in the thick of the action.

What moral attitude we adopt to situations depends on how socially charged we think they are. I’d happily kick a stone down the road; I’d never kick a dog. Where, though, are the boundaries of this social world? If you can have a social relationship with your pet dog, can you have one with your decorticate child? Your cancer tumour? Your god?

Keane says that it’s only by asking such questions that we acquire morals in the first place. And we are constantly trying to tell the difference between the social and the non-social, testing connections and experimenting with boundaries, because the question “just what is a human being, anyway?” lies at the heart of all morality.

Readers of Animals, Robots, Gods will encounter a wide range of non-humans, from sacrificial horses to chatbots, with whom they might conceivably establish a social relationship. Frankly, it’s too much content for so short a book. Readers interested in the ethics of artificial intelligence, for instance, won’t find much new insight here. On the other hand, I found Keane’s distillation of fieldwork into the ethics of hunting and animal sacrifice both gripping and provoking.

We also meet humans enhanced and maintained by technology. Keane reports a study by anthropologist Cheryl Mattingly in which devout Los Angeles-based Christians Andrew and Darlene refuse to turn off the machines keeping their brain-dead daughter alive. The doctors believe that, in the effort to save her, their science has at last cyborgised the girl to the point at which she is no longer a person. The parents believe that, medically maintained or not, cognizant or not, their child’s being alive is significant, and sufficient to make her a person. This is hardly some simplistic “battle between religion and science”. Rather, it’s an argument about where we set the boundaries within which we apply moral imperatives like the one telling us not to kill.

Morals don’t just guide lived experience: they arise from lived experience. There can be no trolley problems without trolleys. This, Keane argues, is why morality and ethics are best approached from an anthropological perspective. “We cannot make sense of ethics, or expect them of others, without understanding what makes them inhabitable, possible ways to live,” he writes. “And we should neither expect, nor, I think, hope that the diversity of ways of life will somehow converge onto one ‘best’ way of living.”

We communicate best with strangers when we accept them as moral beings. A western animal rights activist would never hunt an animal. A Chewong hunter from Malaysia wouldn’t dream of laughing at one. And if these strangers really want to get the measure of each other, they should each ask the same, devastatingly simple question:

Just which bits of the world feel human to you?

A citadel beset by germs

Watching Mariam Ghani’s Dis-Ease for New Scientist

There aren’t many laugh-out-loud moments in Mariam Ghani’s long documentary about our war on germs. The sight of two British colonial hunters in Ceylon bringing down a gigantic papier mâché mosquito is a highlight.

Ghani intercuts public information films (a rich source of sometimes inadvertent comedy) with monster movies, documentaries, thrillers, newsreel and histology lab footage to tell the story of an abiding medical metaphor: the body as citadel, beset by germs.

Dis-Ease, which began life as an artistic residency at the Wellcome Institute, is a visual feast, with a strong internal logic. Had it been left to stand on its own feet, then it might have borne comparison with Godfrey Reggio’s Koyaanisqatsi and Simon Pummell’s Bodysong: films which convey their ideas in purely visual terms.

But the Afghan-American photographer Ghani is just as devoted to the power of words. Interviews and voice-overs abound. The result is a messy collision of two otherwise perfectly valid documentary styles.

There’s little in Dis-Ease’s narrative to take exception to. Humoral theory (in which the sick body falls out of internal balance) was a central principle in Western medicine from antiquity into the 19th century. It was eventually superseded by germ theory, in which the sick body is assailed by pathogens. Germ theory enabled globally transformative advances in public health, but it was most effectively conveyed through military metaphors, and these quickly acquired a life of their own. In its brief foray into the history of eugenics, Dis-Ease reveals, in stark terms, how “wars on disease” mutate into wars on groups of people.

A “war on disease” also preserves and accentuates social inequities, the prevailing assumption being that outbreaks spread from the developing south to the developed north, and the north then responds by deploying technological fixes in the opposite direction.

At its very founding in 1948, the World Health Organisation argued against this idea, and the eradication of smallpox in 1980 was achieved through international consensus, by funding primary health care across the globe. The attempted eradication of polio, begun in 1988, has been a good deal more problematic, and the film argues that this is down to the developed world’s imposition by fiat of a very narrow medical brief, even as health care services in the poorest countries were coming under pressure to privatise.

Ecosystems are being eroded, and zoonotic diseases are emerging with ever greater frequency. Increasingly robust and well-coördinated military responses to frightening outbreaks are understandable and they can, in the short term, be quite effective. For example: to criticise the way British and Sierra Leonean militaries intervened in Sierra Leone in 2014 to establish a National Ebola Response Centre would be to put ideology in the way of common sense.

Still, the film argues, such actions may worsen problems on the ground, since they absorb all the money and political will that might have been spent on public health necessities like housing and sanitation (and a note to Bond villains here: the surest way to trigger a global pandemic is to undermine the health of some small exposed population).

In interview, the sociologist Hannah Landecker points out that since adopting germ theory, we have been managing life with death. (Indeed, that is pretty much exactly what the word “antibiotic” means.) Knowing what we know now about the sheer complexity and vastness of the microbial world, we should now be looking to manage life with life, collaborating with the microbiome, ensuring health rather than combating disease.

What this means exactly is beyond the scope of Ghani’s film, and some of the gestures here towards a “one health” model of medicine — as when a hippy couple start repeating the refrain “life and death are one” — caused this reviewer some moral discomfort.

Anthropologists and sociologists dominate Dis-Ease’s discourse, making it a snapshot of what today’s generation of desk-bound academics think about disease. Many speak sense, though a special circle of Hell is being reserved for the one who, having read too much science fiction, glibly asserts that we can be cured “by becoming something else entirely”.

If they’re out there, why aren’t they here?

The release of Alien: Romulus inspired this article for the Telegraph

On August 16, Fede Alvarez returns the notorious Alien franchise to its monster-movie roots, and feeds yet another batch of hapless young space colonists to a nest of “xenomorphs”.

Will Alien: Romulus do more than lovingly pay tribute to Ridley Scott’s original 1979 Alien? Does it matter? Alien is a franchise that survives despite the additions to its canon, rather than because of them. Bad outings have not bankrupted its grim message, and the most visionary reimaginings have not altered it.

The original Alien is itself a scowling retread of 1974’s Dark Star, John Carpenter’s nihilist-hippy debut, about an interstellar wrecking crew cast unimaginably far from home, bored to death and intermittently terrorised by a mischievous alien beach ball. Dan O’Bannon co-wrote both Dark Star and Alien, and inside every prehensile-jawed xenomorph there’s an O’Bannonesque balloon critter snickering away.

O’Bannon’s cosmic joke goes something like this: we escaped the food-chain on Earth, only to find ourselves at the bottom of an even bigger, more terrible food chain Out There among the stars.

You don’t need an adventure in outer space to see the lesson. John Carpenter went on to make The Thing (1982), in which the intelligent and resourceful crew of an Antarctic base are reduced to chum by one alien’s peckishness.

You don’t even need an alien. Jaws dropped the good folk of Amity Island NY back into the food chain, and that pre-dated Alien by four years.

Alien, according to O’Bannon’s famous pitch-line, was “like Jaws in space”, but by moving the action into space, it added a whole new level of existential dread. Alien shows us that if nature is red in tooth and claw here on Earth, then chances are it will be so up there. The heavens cannot possibly be heavenly: now here was an idea calculated to strike fear into fans of 1982’s ET the Extra-Terrestrial.

In ET, intelligence counts – the visiting space traveller is benign because it is a space traveller. Any species smart enough to travel among the stars is also smart enough not to go around gobbling up the neighbours. Indeed, the whole point of space travel turns out to be botany and gardening.

Ridley Scott’s later Alien outings Prometheus (2012) and Covenant (2017) are, in their turn, muddled counter-arguments to ET; in them, cosmic gardeners called Engineers gleefully spread an invasive species (a black xenomorph-inducing dust) across the cosmos.

“But, for the love of God – why?” ask ET fans, their big trusting-kitten eyes tearing up at all this interstellar mayhem. And they have a point. Violence makes evolutionary sense when you have to compete over limited resources. The moment you journey among the stars, though, the resources available to you are to all intents and purposes infinite. In space, assuming you can navigate comfortably through it, there is absolutely no point in being hostile.

If the prospect of interstellar life has provided the perfect conditions for numerous Hollywood blockbusters, then the real-life hunt for aliens has had more mixed results. When Paris’s Exposition Universelle opened in 1900, it was full of wonders: the world’s largest telescope, a 45-metre-diameter “Cosmorama” (a sort of restaurant-cum-planetarium), and the announcement of a prize, offered by the ageing socialite Clara Gouget: 100,000 francs (£500,000 in today’s money) to the first person to contact an extraterrestrial species.

Extraterrestrials were not a strange idea by 1900. The habitability of other worlds had been discussed seriously for centuries, and proposals on how to communicate with other planets were mounting up: these projects involved everything from mirrors to trenches, lines of trees and earthworks visible from space.

What really should arrest our attention is the exclusion clause written into the prize’s small print. Communicating with Mars wouldn’t win you anything, since communications with Mars were already being established. Radio pioneers Nikola Tesla and Guglielmo Marconi both reckoned they had received signals from outer space. Meanwhile Percival Lowell, a brilliant astronomer working at the very limits of optical science, had found gigantic irrigation works on the red planet’s surface: in his 1894 book he published clear visual evidence of Martian civilisation.

Half a century later, our ideas about aliens had changed. Further study of Mars and Venus had shown them to be lifeless, or as good as. Meanwhile the cosmos had turned out to be exponentially larger than anyone had thought in 1900. Larger – but still utterly silent.

***

In the summer of 1950, during a lunchtime conversation with fellow physicists Edward Teller, Herbert York and Emil Konopinski at Los Alamos National Laboratory in New Mexico, the Italian-American physicist Enrico Fermi finally gave voice to the problem: “Where is everybody?”

The galaxy is old enough that any intelligent species could already have visited every star system a thousand times over, armed with nothing more than twentieth-century rocket technology. Time enough has passed for galactic empires to rise and fall. And yet, when we look up, we find absolutely no evidence for them.

We started to hunt for alien civilisations using radio telescopes in 1960. Our perfectly reasonable attitude was: If we are here, why shouldn’t they be there? The possibilities for life in the cosmos bloomed all around us. We found that almost all stars have planets, and most of them have rocky planets orbiting in the habitable zones of their stars. Water is everywhere: evidence exists for four alien oceans in our own solar system alone, on Saturn’s moon Enceladus and on Jupiter’s moons Europa, Ganymede and Callisto. On Earth, microbes have been found that can withstand the rigours of outer space. Large meteor strikes have no doubt propelled them into space from time to time. Even now, some of the hardier varieties may be flourishing in odd corners of Mars.

All of which makes the cosmic silence still more troubling.

Maybe ET just isn’t interested in us. You can see why. Space travel has proved a lot more difficult to achieve than we expected, and unimaginably more expensive. Visiting even very near neighbours is next-to-impossible. Space is big, and it’s hard to see how travel-times, even to our nearest planets, wouldn’t destroy a living crew.

Travel between star systems is a whole other order of impossible. Even allowing for the series’ unpardonably dodgy physics, it remains an inconvenient truth that every time Star Trek’s USS Enterprise hops between star systems, the energy has to come from somewhere — is the United Federation of Planets dismantling, refining and extinguishing whole moons?

Life, even intelligent life, may be common throughout the universe – but then, each instance of it must live and die in isolation. The distances between stars are so great that even radio communication is impractical. Civilisations are, by definition, high-energy phenomena, and all high-energy phenomena burn out quickly. By the time we receive a possible signal from an extraterrestrial civilisation, that civilisation will most likely have already died or forgotten itself or changed out of all recognition.

It gets worse. The universe creates different kinds of suns as it ages. Suns like our own are an old model, and they’re already blinking out. Life like ours has already had its heyday in the cosmos, and one very likely answer to our question “Where is everybody?” is: “You came too late to the party”.

Others have posited even more disturbing theories for the silence. Cixin Liu is a Chinese science fiction novelist whose Hugo Award-winning The Three Body Problem (2008) recently teleported to Netflix. According to Liu’s notion of the cosmos as a “dark forest”, spacefaring species are by definition so technologically advanced, no mere planet could mount a defence against them. Better, then, to keep silent: there may be wolves out there, and the longer our neighbouring star systems stay silent, the more likely it is that the wolves are near.

Russian rocket pioneer Konstantin Tsiolkovsky, who was puzzling over our silent skies a couple of decades before Enrico Fermi, was more optimistic. Spacefaring civilisations are all around us, he said, and (pre-figuring ET) they are gardening the cosmos. They understand what we have already discovered — that when technologically mismatched civilisations collide, the consequences for the weaker civilisation can be catastrophic. So they will no more communicate with us, in our nascent, fragile, planet-bound state, than Spielberg’s extraterrestrial would over-water a plant.

In this, Tsiolkovsky’s aliens show unlikely self-restraint. The trouble with intelligent beings is that they can’t leave things well enough alone. That is how we know they are intelligent. Interfering with stuff is the point.

Writing in the 1960s and 1970s, the Soviet science fiction novelists and brothers Arkady and Boris Strugatsky argued — in novels like 1964’s Hard to Be a God — that the sole point of life for a spacefaring species would be to see to the universe’s well-being by nurturing sentience, consciousness, and even happiness. To which Puppen, one of their most engaging alien protagonists, grumbles: Yes, but what sort of consciousness? What sort of happiness? In their 1985 novel The Waves Extinguish the Wind, alien-chaser Toivo Glumov complains, “Nobody believes that the Wanderers intend to do us harm. That is indeed extremely unlikely. It’s something else that scares us! We’re afraid that they will come and do good, as they understand it!”

Fear, above all enemies, the ones who think they’re doing you a favour.

In the Strugatskys’ wonderfully paranoid Noon Universe stories, the aliens already walk among us, tweaking our history, nudging us towards their idea of the good life.

Maybe this is happening for real. How would you know, either way? The way I see it, alien investigators are even now quietly mowing their lawns in, say, Slough. They live like humans, laugh and love like humans; they even die like humans. In their spare time they write exquisite short stories about the vagaries of the human condition, and it hasn’t once occurred to them (thanks to their memory blocks) that they’re actually delivering vital strategic intelligence to a mothership hiding behind the moon.

You can pooh-pooh my little fantasy all you want; I defy you to disprove it. That’s the problem, you see. Aliens can’t be discussed scientifically. They’re not merely physical phenomena, whose abstract existence can be proved or disproved through experiment and observation. They know what’s going on around them, and they can respond accordingly. They’re by definition clever, elusive, and above all unpredictable. The whole point of having a mind, after all, is that you can be constantly changing it.

The Polish writer Stanislaw Lem had a spectacularly bleak solution to Fermi’s question that’s best articulated in his last novel, 1986’s Fiasco. By the time a civilisation is in a position to communicate with others, he argues, it’s already become hopelessly eccentric and self-involved. At best its individuals will be living in simulations; at worst, they will be fighting pyrrhic, planet-busting wars against their own shadows. In Fiasco, the crew of the Eurydice discover, too late, that they’re quite as fatally self-obsessed as the aliens they encounter.

We see the world through our own particular and peculiar evolutionary perspective. That’s the bottom line. We’re from Earth, and this gives us a very clear, very narrow idea of what life is and what intelligence looks like.

We out-competed our evolutionary cousins long ago, and for the whole of our recorded history, we’ve been the only species we know that sports anything like our kind of intelligence. We’ve only had ourselves to think about, and our long, lonely self-obsession may have sent us slightly mad. We’re not equipped to meet aliens – only mirrors of ourselves. Only angels. Only monsters.

And the xenomorphs lurking aboard the Romulus are, worse luck, most likely in the same bind.

Life trying to understand itself

Reading Life As No One Knows It: The Physics of Life’s Emergence by Sara Imari Walker and The Secret Life of the Universe by Nathalie A Cabrol, for the Telegraph

How likely is it that we’re not alone in the universe? The idea goes in and out of fashion. In 1600 the philosopher Giordano Bruno was burned at the stake for this and other heterodox beliefs. Exactly 300 years later the French Académie des sciences announced a prize for establishing communication with life anywhere but on Earth or Mars — since people already assumed that Martians did exist.

The problem — and it’s the speck of grit around which these two wildly different books accrete — is that we’re the only life we know of. “We are both the observer and the observation,” says Nathalie Cabrol, chief scientist at the SETI Institute in California and author of The Secret Life of the Universe, already a bestseller in her native France: “we are life trying to understand itself and its origin.”

Cabrol reckons this may be only a temporary problem, and there are two strings to her optimistic argument.

First, the universe seems a lot more amenable toward life than it used to. Not long ago, and well within living memory, we didn’t know whether stars other than our sun had planets of their own, never mind planets capable of sustaining life. The Kepler Space Telescope, launched in March 2009, changed all that. Among the wonders we’ve detected since — planets where it rains molten iron, or molten glass, or diamonds, or metals, or liquid rubies or sapphires — are a number of rocky planets, sitting in the habitable zones of their stars, and quite capable of hosting oceans on their surface. Well over half of all sun-like stars boast such planets. We haven’t even begun to quantify the possibility of life around other kinds of star. Unassuming, plentiful and very long-lived M-dwarf stars might be even more life-friendly.

Then there are the ice-covered oceans of Jupiter’s moon Europa, and Saturn’s moon Enceladus, and the hydrocarbon lakes and oceans of Saturn’s Titan, and Pluto’s suggestive ice volcanoes, and — well, read Cabrol if you want a vivid, fiercely intelligent tour of what may turn out to be our teeming, life-filled solar system.

The second string to Cabrol’s argument is less obvious, but more winning. We talk about life on Earth as if it’s a single family of things, with one point of origin. But it isn’t. Cabrol has spent her career hunting down extremophiles (ask her about volcano diving in the Andes) and has found life “everywhere we looked, from the highest mountain to the deepest abyss, in the most acidic or basic environments, the hottest and coldest regions, in places devoid of oxygen, within rocks — sometimes under kilometers of them — within salts, in arid deserts, exposed to radiation or under pressure”.

Several of these extremophiles would have no problem colonising Mars, and it’s quite possible that a more-Earth-like Mars once seeded Earth with life.

Our hunt for earth-like life — “life like ours” — always had a nasty circularity about it. By searching for an exact mirror of ourselves, what other possibilities were we missing? In The Secret Life Cabrol argues that we now know enough about life to hunt for radically strange lifeforms, in wildly exotic environments.

Sara Imari Walker agrees. In Life As No One Knows It, the American theoretical physicist does more than ask how strange life may get; she wonders whether we have any handle at all on what life actually is. All these words of ours — living, lifelike, animate, inanimate — may turn out to be hopelessly parochial as we attempt to conceptualise the possibilities for complexity and purpose in the universe. (Cabrol makes a similar point: “Defining Life by describing it,” she fears, “is the same as saying that we can define the atmosphere by describing a bird flying in the sky.”)

Walker, a physicist, is painfully aware that among the phenomena that current physics can’t explain are physicists — and, indeed, life in general. (Physics, which purports to uncover an underlying order to reality, is really a sort of hyper-intellectual game of whack-a-mole in which, to explain one phenomenon, you quite often have to abandon your old understanding of another.) Life processes don’t contradict physics. But physics can’t explain them, either. It can’t distinguish between, say, a hurricane and the city of New York, seeing both as examples of “states of organisation maintained far from equilibrium”.

But if physics can’t see the difference, physicists certainly can, and Walker is a fiercely articulate member of that generation of scientists and philosophers — physicists David Deutsch and Chiara Marletto and the chemist Leroy Cronin are others — who are out to “choose life”, transforming physics in the light of evolution.

We’re used to thinking that living things are the product of selection. Walker wants us to imagine that every object in the universe, whether living or not, is the product of selection. She wants us to think of the evolutionary history of things as a property, as fundamental to objects as charge and mass are to atoms.

Walker’s defence of her “assembly theory” is a virtuoso intellectual performance: she’s like the young Daniel Dennett, full of wit, mischief and bursts of insolent brevity which for newcomers to this territory are like oases in the desert.

But to drag this back to where we started: the search for extraterrestrial life — did you know that there isn’t enough stuff in the universe to make all the small molecules that could perform a function in our biology? Even before life gets going, the chemistry from which it is built has to have been massively selected — and we know blind chance isn’t responsible, because we already know what undifferentiated masses of small organic molecules look like; we call this stuff tar.

In short, Walker shows us that what we call “life” is but an infinitesimal fraction of all the kinds of life which may arise out of any number of wholly unfamiliar chemistries.

“When we can run origin-of-life experiments at scale, they will allow us to predict how much variation we should expect in different geochemical environments,” Walker writes. So once again, we have to wait, even more piqued and anxious than before, to meet aliens even stranger than we have imagined or maybe can imagine.

Cabrol, in her own book, makes life even more excruciating for those of us who just want to shake hands with E.T.: imagine, she says, “a shadow biome” of living things so strange, they could be all around us here, on Earth — and we would never know.

Benignant?

Reading The Watermark by Sam Mills for the Times

“Every time I encounter someone,” celebrity novelist Augustus Fate reveals, near the start of Sam Mills’s new novel The Watermark, “I feel a nagging urge to put them in one of my books.”

He speaks nothing less than the literal truth. Journalist and music-industry type Jaime Lancia and his almost-girlfriend, a suicidally inclined artist called Rachel Levy, have both succumbed to Fate’s drugged tea, and while their barely-alive bodies are wasting away in the attic of his Welsh cottage, their spirits are being consigned to a curious half-life as fictional characters. It takes a while for them to awake to their plight, trapped in Thomas Turridge, Fate’s unfinished (and probably unfinishable) Victorianate new novel. The malignant appetites of this paperback Prospero have swallowed rival novelists, too, making Thomas Turridge only the first of several ur-fictional rabbit holes down which Jaime and Rachel must tumble.

Over the not inconsiderable span of The Watermark, we find our star-crossed lovers evading asylum orders in Victorian Oxford, resisting the blandishments of a fictional present-day Manchester, surviving spiritual extinction in a pre-Soviet hell-hole evocatively dubbed “Carpathia”, and coming domestically unstuck in a care robot-infested near-future London.

Meta-fictions are having a moment. The other day I saw Bertrand Bonello’s new science fiction film The Beast, which has Léa Seydoux and George MacKay playing multiple versions of themselves in a tale that spans generations and which ends, yes, in a care-robot-infested future. Perhaps this coincidence is no more than a sign of the coming-to-maturity of a generation who (finally!) understand science fiction.

In 1957 Philip Dick wrote a short sweet novel called Eye in the Sky, which drove its cast through eight different subjective realities, each one “ruled” by a different character. While Mills’s The Watermark is no mere homage to that or any other book, it’s obvious she knows how to tap, here and there, into Dick’s madcap energy, in pursuit of her own game.

The Watermark is told variously from Jaime’s and Rachel’s points of view. In some worlds, Jaime wakes up to their plight and must disenchant Rachel. In other worlds, Rachel is the knower, Jaime the amnesiac. Being fictional characters as well as real-life kidnap victims, they must constantly be contending with the spurious backstories each fiction lumbers them with. These aren’t always easy to throw over. In one fiction, Jaime and Rachel have a son. Are they really going to abandon him, just so they can save their real lives?

Jaime, over the course of his many transmogrifications, is inclined to fight for his freedom. Rachel is inclined to bed down in fictional worlds that, while existentially unfree, are an improvement on real life — from which she’s already tried to escape by suicide.

The point of all this is to show how we hedge our lives around with stories, not because they are comforting (although they often are) but because stories are necessary: without them, we wouldn’t understand anything about ourselves or each other. Stories are thinking. By far the strongest fictional environment here is 1920s-era Carpathia. Here, a totalitarian regime grinds the star-crossed couple’s necessary fictions to dust, until at last they take psychic refuge in the bodies of wolves and birds.

The Watermark never quite coheres. It takes a conceit best suited to a 1950s-era science-fiction novelette (will our heroes make it back to the real world?), couples it to a psychological thriller (what’s up with Rachel?), and runs this curious algorithm through the fictive mill not once but five times, by which time the reader may well have had a surfeit of “variations on a theme”. Rightly, for a novel of this scope and ambition, Mills serves up a number of false endings on the way to her denouement, and the one that rings most psychologically true is also the most bathetic: “We were supposed to be having our grand love story, married and happy ever after,” Rachel observes, from the perspective of a fictional year 2049, “but we ended up like every other screwed-up middle-aged couple.”

It would be easy to write off The Watermark as a literary trifle. But I like trifle, and I especially appreciate how Mills’s protagonists treat their absurd bind with absolute seriousness. Farce on the outside, tragedy within: this book is full of horrid laughter.

But Mills is not a natural pasticheur, and unfortunately it’s in her opening story, set in Oxford in 1861, that her ventriloquism comes badly unstuck. A young woman “in possession of chestnut hair”? A vicar who “tugs at his ebullient mutton-chops, before resuming his impassioned tirade”? On page 49, the word “benignant”? This is less pastiche, more tin-eared tosh.

Against this serious failing, what defences can we muster? Quite a few. A pair of likeable protagonists who stand up surprisingly well to their repeated eviscerations. A plot that takes storytelling seriously, and would rather serve the reader’s appetites than sneer at them. Last but not least, some excellent incidental invention: to wit, a long-imprisoned writer’s idea of what the 1980s must look like (“They will drink too much ale and be in possession of magical machines”) and, elsewhere, a mother’s choice of bedtime reading material (“The Humanist Book of Classic Fairy Tales, retold by minor, marginalised characters”).

But it’s as Kurt Vonnegut said: “If you open a window and make love to the world, so to speak, your story will get pneumonia.” To put it less kindly: nothing kills the novel faster than aspiration. The Watermark, which wanted to be a very big book about everything, becomes, in the end, something else: a long, involved, self-alienating exploration of itself.