I’d sooner gamble

Speculation around the 2024 US election prompted this article for the Telegraph, about the dark arts of prediction

On July 21, the day Joe Biden stood down, I bet £50 that Gretchen Whitmer, the governor of Michigan, would end up winning the 2024 US presidential election. My wife remembers Whitmer from their student days, and reckons she’s a star in the making. My £50 would have earned me £2500 had she actually stood for president and won. But history makes fools of us all, and my bet bought me barely a day of that warm, Walter-Mittyish feeling that comes when you stake a claim in other people’s business.

The polls this election cycle indicated a tight race – underestimating Trump’s reach. But cast your mind back to 2016, when the professional pollster Nate Silver said Donald Trump stood a 29 per cent chance of winning the US presidency. The betting market, on the eve of that election, put Trump on an even lower 18 per cent chance. Gamblers eyed up the difference, took a punt, and did very well. And everyone else called Silver an idiot for not spotting Trump’s eventual win.
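For readers who like to see the arithmetic, here is a rough sketch (my own illustration, not Silver’s or the bookmakers’) of why that gap between 29 per cent and 18 per cent was worth a punt: if your estimate of the probability is higher than the one implied by the odds on offer, the bet has a positive expected value. The function names and figures below are purely illustrative.

```python
# Rough sketch of the 2016 arbitrage: back a candidate at market odds implying 18%
# while believing, with Silver, that the true chance is 29%.

def implied_probability(decimal_odds: float) -> float:
    """Probability implied by decimal odds (stake included in the payout)."""
    return 1.0 / decimal_odds

def expected_profit(stake: float, decimal_odds: float, your_probability: float) -> float:
    """Expected profit on a single bet, given your own probability estimate."""
    profit_if_win = stake * (decimal_odds - 1.0)
    return your_probability * profit_if_win - (1.0 - your_probability) * stake

market_odds = 1.0 / 0.18                               # roughly 5.56: the market's 18% as decimal odds
print(round(implied_probability(market_odds), 2))      # 0.18
print(round(expected_profit(1.0, market_odds, 0.29), 2))  # about +0.61 per £1 staked, if Silver was right
```

The expectation is only positive, of course, if your probability really is better than the market’s, which is the whole point of the paragraph above.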

Their mistake was to think that Silver was a fortune-teller.

Divination is a 6,000-year-old practice that promises to sate our human hunger for certainty. On the other hand, gambling on future events – as the commercial operation we know today – began only a few hundred years ago in the casinos of Italy. Gambling promises nothing, and it only really works if you understand the mathematics.

The assumption that the world is inherently unpredictable – so that every action has an upside and a downside – got its first formal expression in Jacob Bernoulli’s 1713 treatise Ars Conjectandi (“The Art of Conjecturing”), and many of us still can’t wrap our heads around it. We’d sooner embrace certainties, however specious, than take risks, however measurable.
We’re risk-averse by nature, because the answer to the question “Well, what’s the worst that could happen?” has, over the course of evolution, been very bad indeed. You could fall. You could be bitten. You could have your arm ripped off. (Surprise a cat with a cucumber and it’ll jump out of its skin, because it’s still afraid of the snakes that stalked its ancestors.)

Thousands of years ago, you might have thrown dice to see who buys the next round, but you’d head to the Oracle to learn about events that could really change your life. A forthcoming exhibition at the Bodleian Library in Oxford, Oracles, Omens and Answers, takes a historical look at our attempts to divine the future. You might assume those Chinese oracle bones are curios from a distant and more innocent time – except that, turning a corner, you come across a book by Joan Quigley, who was in-house astrologer to US president Ronald Reagan. Our relationship to the future hasn’t changed very much, after all. (Nancy Reagan reached out to Quigley after a would-be assassin’s bullet tore through her husband’s lung. What crutch would I reach for, I wonder, at a moment like that?)

The problem with divination is that it doesn’t work. It’s patently falsifiable. But this wasn’t always the case. In a world radically simpler than our own, there are fewer things that can happen, and more likelihood of one of them happening in accordance with a prediction. This turned omens into powerful political weapons. No wonder, then, that in 11 AD, Augustus banned predictions pertaining to the date of someone’s death, while at the same time making his own horoscope public. At a stroke, he turned astrology from an existential threat into a branch of his own PR machine.

The Bamoun state of western Cameroon had an even surer method for governing divination – in effect until the early 20th century. If you asked a diviner whether someone important would live or die, and the diviner said they’d live, but actually they died, then they’d put you, rather than the diviner, to death.

It used to be that you could throw a sheep’s shoulder blade on the flames and tell the future from the cracks that the fire made in the bone. Now that life is more complicated, anything but the most complicated forms of divination seems fatuous.

The daddy of them all is astrology: “the ancient world’s most ambitious applied mathematics problem”, according to the science historian Alexander Boxer. There’s a passage in Boxer’s book A Scheme of Heaven describing how a particularly fine observation, made by Hipparchus in 130 BC, depended on his going back over records that must have been many hundreds of years old. Astronomical diaries from the Assyrian library at Nineveh stretch from 652 BC to 61 BC, making them (as far as we know) the longest continuous research project ever undertaken.

You don’t go to that amount of effort pursuing claims that are clearly false. You do it in pursuit of cosmological regularities that, if you could only isolate them, would bring order and peace to your world. Today’s evangelists for artificial intelligence should take note of Boxer, who writes: “Those of us who are enthusiastic about the promise of numerical data to unlock the secrets of ourselves and our world would do well simply to acknowledge that others have come this way before.”

Astrology has proved adaptable. Classical astrology assumed that we lived in a deterministic world – one in which all events are causally decided by preceding events. You can trace the first cracks in this fixed view of the world all the way back to the medieval Christian church and its pesky insistence on free will (without which one cannot sin).

In spite of powerful Church opposition, astrology clung on in its old form until the Black Death, when its conspicuous failure to predict the death of a third of Europe called time on (nearly) everyone’s credulity. All of a sudden, and with what fleet footwork one can only imagine, horoscopists decided that your fate depended, not just upon your birth date, but also upon when you visited the horoscopist. This muddied the waters wonderfully, and made today’s playful, me-friendly astrologers – particularly popular on TikTok – possible.

***

The problem with trying to relate events to the movement of the planets is not that you won’t find any correlations. The problem is that there are correlations everywhere you look.
And these days, of course, we don’t even have to look: modern machine-learning algorithms are correlation monsters; they can make pretty much any signal correlate with any other. In their recent book AI Snake Oil, computer scientists Arvind Narayanan and Sayash Kapoor spend a good many pages dissecting the promise of predictive artificial intelligence (for instance, statistical software that claims to identify crimes before they have happened). If it fails, it will fail for exactly the same reasons astrology fails – because it’s churning through an ultimately meaningless data set. The authors conclude that immediate dangers from AI “largely stem from… our desperate and untutored keenness for prediction.”
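To see how cheaply such correlations come, here is a minimal sketch (my own illustration, not drawn from Narayanan and Kapoor): generate a few hundred unrelated random signals, compare every pair, and something will correlate impressively by chance alone.

```python
# Minimal sketch: with enough unrelated signals, strong "correlations" appear by chance.
import numpy as np

rng = np.random.default_rng(0)
signals = rng.normal(size=(200, 50))   # 200 independent random signals, 50 samples each

corr = np.corrcoef(signals)            # pairwise correlation between all 200 signals
np.fill_diagonal(corr, 0.0)            # ignore each signal's trivial correlation with itself

i, j = np.unravel_index(np.abs(corr).argmax(), corr.shape)
print(f"strongest spurious correlation: r = {corr[i, j]:.2f} between signals {i} and {j}")
```

None of these signals has anything to do with any other; the “relationship” is pure noise, which is the astrologer’s trap in miniature.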

The promise of such mechanical prediction is essentially astrological. We absolutely can use it to predict the future, but only if the world turns out, underneath all that roiling complexity, to be deterministic.

There are some areas in which our predictive powers have improved. The European Centre for Medium-Range Weather Forecasts opened in Reading in 1979. It was able to see three days into the future. Six years later, it could see five days ahead. In 2012 it could see eight days ahead and predicted Hurricane Sandy. By next year it expects to be able to predict high-impact events a fortnight before they happen.

Drunk on achievements in understanding atmospheric physics, some enthusiasts expect to predict human weather using much the same methods. They’re encouraged by numerical analyses that throw up glancing insights into corners of human behaviour. Purchasing trends can predict the ebb and flow of conflict because everyone rushes out to buy supplies in advance of the bombings. Trading algorithms predicted the post-Covid recovery of financial markets weeks before it happened.

Nonetheless, it is a classic error to mistake reality for the analogy you just used to describe it. Political weather is not remotely the same as weather. Still, the dream persists among statistics-savvy self-styled “superforecasters”, who regularly peddle ideas such as “mirror worlds” and “policy flight simulators”, to help us navigate the future of complex economic and social systems.

The danger with such prophecies is not that they are wrong; rather, the danger lies in the power to actually make them come true. Take election polling. Calling the election before it happens heartens leaders, disheartens laggards, and encourages everyone to alter their campaigns to address the anxieties and fears of the moment. Indeed, the easiest, most sure-fire way of predicting the future is to get an iron grip on the present – something the Soviets knew all too well. Then the future becomes, quite literally, what you make it.

There are other dangers, as we increasingly trust predictive technology with our lives. For instance, GPS uses a predictive algorithm in combination with satellite signals to plot our trajectory. And in December last year, a driver followed his satnav through Essex, down a little lane in Great Dunmow called Flitch Way, and straight into the River Chelmer.

We should not assume, just because the oracle is mechanical, that it’s infallible. There’s a story Isaac Asimov wrote in 1955 called Franchise, about a computer that, by chugging through the buzzing confusion of the world, can pinpoint the one individual whose galvanic skin response to random questions reveals which political candidate would be (and therefore is) the winner in any given election.

Because he wants to talk about correlation, computation, and big data, Asimov skates over the obvious point here – that a system like that can never know if it’s broken. And if that’s what certainty looks like, well, I’d sooner gamble.

You’re being chased. You’re being attacked. You’re falling. You’re drowning

To mark the centenary of Surrealism, this article for the Telegraph

A hundred years ago, a 28-year-old French poet, art collector and contrarian called André Breton published a manifesto that called time on reason.

Eight years before, in 1916, Breton was a medical trainee stationed at a neuro-psychiatric army clinic in Saint-Dizier. He cared for soldiers who were shell-shocked, psychotic, hysterical and worse, and fell in love with the mind, and the lengths to which it would go to survive the impossible present.

Breton’s Manifesto of Surrealism was, then, an inquiry into how, “under the pretense of civilization and progress, we have managed to banish from the mind everything that may rightly or wrongly be termed superstition, or fancy.”

For Breton, surrealism’s sincerest experiments involved a sort of “psychic automatism” – using the processes of dreaming to express “the actual functioning of thought… in the absence of any control exercised by reason, exempt from any aesthetic or moral concern.” He asked: “Can’t the dream also be used in solving the fundamental questions of life?”

Many strange pictures appeared over the following century, as Breton’s fellow surrealists answered his challenge, and plumbed the depths of the unconscious mind. Their efforts – part of a long history of humans’ attempts to document and decode the dream world – can be seen in a raft of new exhibitions marking surrealism’s centenary, from the hybrid beasts of Leonora Carrington (on view at the Hepworth Wakefield’s Forbidden Territories), to the astral fantasies of Remedios Varo (included in the Centre Pompidou’s blockbuster Surrealism show).
Yet, just as often, such images illustrate the gap between the dreamer’s experience and their later interpretation of it. Some of the most popular surrealist pictures – Dalí’s melting clocks, say, or Magritte’s apple-headed businessman – are not remotely dreamlike. Looking at such easy-to-read canvases is like having a dream explained, and that’s not at all the same thing.
The chief characteristic of dreams is that they don’t surprise or shock or alienate the person who’s dreaming – the dreamer, on the contrary, feels that their dream is inevitable. “The mind of the man who dreams,” Breton writes, “is fully satisfied by what happens to him. The agonizing question of possibility is no longer pertinent. Kill, fly faster, love to your heart’s content… Let yourself be carried along, events will not tolerate your interference. You are nameless. The ease of everything is priceless.”

Most physiologists and psychologists of the early 20th century would have agreed with him, right up until his last sentence. While the surrealists looked to dreams to reveal a mind beyond consciousness, scientists of the day considered them insignificant, because you can’t experiment on a dreamer, and you can’t repeat a dream.

Since then, others have joined the battle over the meaning – or lack of meaning – of our dreams. In 1977, Harvard psychiatrists John Allan Hobson and Robert McCarley proposed “activation-synthesis theory”, in a rebuff to the psychoanalysts and their claim that dreams had meanings only accessible via (surprise, surprise) psychoanalysis. Less an explanation, more an expression of exasperation, their theory held that certain parts of our brains concoct crazy fictions out of the random neural firings of the sleeping pons (a part of the brainstem).

It is not a bad theory. It might go some way to explaining the kind of hypnagogic imagery we experience when we doze, and that so delighted the surrealists. It might even bring us closer to actually reconstructing our dreams. For instance, we can capture the brain activity of a sleeper, using functional magnetic resonance imaging, hand that data to artificial intelligence software that’s been trained on about a million images, and the system will take a stab at what the dreamer is seeing in their dream. The Japanese neuroscientist Yukiyasu Kamitani made quite a name for himself when he tried this in 2012.

Six years later, at the Serpentine Gallery in London, artist Pierre Huyghe integrated some of this material into his show UUmwelt — and what an astonishing show it was, its wall screens full of bottles becoming elephants becoming screaming pigs becoming geese, skyscrapers, mixer taps, dogs, moles, bat’s wings…

But modelling an idea doesn’t make it true. Activation-synthesis theory has inspired some fantastic art, but it fails to explain one of the most important physiological characteristics of dreaming – the fact that dreams paralyse the dreamer.

***

Brains have an alarming tendency to treat dreams as absolutely real and to respond appropriately — to jump and punch when the dream says jump! and punch! Dreams, for the dreamer, can be very dangerous indeed.

The simplest evolutionary way to mitigate the risk of injury would have been to stop the dreamer from dreaming. Instead, we evolved a complex mechanism to paralyse ourselves while in the throes of our night-time adventures. 520 million years of brain evolution say that dreams are important and need protecting.

This, rather than the actual content of dreams, has driven research into the sleeping brain. We know now that dreaming involves many more brain areas than the pons alone, including the parietal lobes (involved in the representation of space) and the frontal lobes (responsible for decision-making, problem-solving, self-control, attention, speech production, language comprehension – oh, and working memory). Mice dream. Dogs dream. Platypuses, beluga whales and ostriches dream; so do penguins, chameleons, iguanas and cuttlefish.

We’re not sure about turtles. Octopuses? Marine biologist David Scheel caught his snoozing pet octopus Heidi on camera, and through careful interpretation of her dramatic colour-shifts he came to the ingenious conclusion that she was enjoying an imaginary crab supper. The clip, from PBS’s 2019 documentary Octopus: Making Contact, is on YouTube.

Heidi’s brain structure is nothing like our own. Still, we’re both dreamers. Studies of wildly different sleeping brains throw up startling convergences. Dreaming is just something that brains of all sorts have to do.

We’ve recently learned why.

The first clues emerged from sleep deprivation studies conducted in the late 1960s. Both Allan Rechtschaffen and William Dement showed that sleep deprivation leads to memory deficits in rodents. A generation later, researchers including the Brazilian neuroscientist Sidarta Ribeiro were spending the 1990s unpicking the genetic basis of memory function. Ribeiro himself found the first molecular evidence of Freud’s “day residue” hypothesis, which has it that the content of our dreams is often influenced by the events, thoughts, and feelings we experience during the day.

Ribeiro had his own fairly shocking first-hand experience of the utility of dreaming. In February 1995 he arrived in New York to start a doctorate at Rockefeller University. Shortly after arriving, he woke up unable to speak English. He fell in and out of a narcoleptic trance, and then, in April, woke refreshed and energised and able to speak English better than ever before. His work can’t absolutely confirm that his dreams saved him, but he and other researchers have most certainly established the link between dreams and memory. To cut a long story very short indeed: dreams are what memories get up to when there’s no waking self to arrange them.

Well, conscious thought alone is not fast enough or reliable enough to keep us safe in the blooming, buzzing confusion of the world. We also need fast, intuitive responses to critical situations, and we rehearse these responses, continually, when we dream. Collect dream narratives from around the world, and you will quickly discover (as literary scholar Jonathan Gottschall points out in his 2012 book The Storytelling Animal) that the commonest dreams have everything to do with life and death and have very little time for anything else. You’re being chased. You’re being attacked. You’re falling. You’re drowning. You’re lost, trapped, naked, hurt…

When lives were socially simple and threats immediate, the relevance of dreams was not just apparent; it was impelling. And let’s face it: a stopped clock is right at least twice a day. Was it really so surprising that Rome’s first emperor Augustus, living in a relatively simple social structure and afforded only a limited palette of dream materials to draw from, found his rise to power predicted by dreams (or so the historian Suetonius tells us)?

Even now, Malaysia’s indigenous Orang Asli people believe that by sharing their dreams, they are passing on healing communications from their ancestors. Recently the British artist Adam Chodzko used their practice as the foundation for a now web-based project called Dreamshare Seer, which uses generative AI to visualise and animate people’s descriptions of their dreams. (Predictably, his AI outputs are rather Dalí-like.)

But humanity’s mission to interpret dreams has been eroded by a revolution in our style of living. Our great-grandparents could remember a world without artificial light. Now we play on our phones until bedtime, then get up early, already focused on a day that is, when push comes to shove, more or less identical to yesterday. We neither plan our days before we sleep, nor do we interrogate our dreams when we wake. Is it any wonder, then, that our dreams are no longer able to inspire us?

Growing social complexity enriches our dream lives, but it also fragments them. Last night I dreamt of selecting desserts from a wedding buffet; later I cuddled a white chicken while negotiating for a plumbing contract. Dreams evolved to help us negotiate the big stuff. Having conquered the big stuff (humans have been apex predators for around 2 million years), it is possible that we have evolved past the point where dreaming is useful, but not past the point where dreaming is dangerous.

Here’s a film you won’t have seen. Petrov’s Flu, directed by Kirill Serebrennikov, was due for limited UK release in 2022, even as Vladimir Putin’s forces were bumbling towards Kiev.

The film opens on our hero Petrov (Semyon Serzin), riding a trolleybus home across a snowbound Yekaterinburg. He overhears a fellow passenger muttering to a neighbour that the rich in this town all deserve to be shot.

Seconds later the bus stops, Petrov is pulled off the bus and a rifle is pressed into his hands. Street executions follow, shocking him out of his febrile doze…

And Petrov’s back on the bus again.

Whatever the director’s intentions were here, I reckon this is a document for our times. You see, André Breton wrote his manifesto in the wreckage of a world that had turned its machine tools into weapons, the better to slaughter itself — and did all this under the flag of the Enlightenment and reason.

Today we’re manufacturing new kinds of machine tools, to serve a world that’s much more psychologically adept. Our digital devices, for example, exploit our capacity for focused attention (all too well, in many cases).

So what of those devices that exist to make our sleeping lives better, sounder, and more enjoyable?

SleepScore Labs is using electroencephalography data to analyse the content of dreams. BrainCo has a headband interface that influences dreams through auditory and visual cues. Researchers at MIT have used a sleep-tracking glove called Dormio to much the same end. iWinks’s headband increases the likelihood of lucid dreaming.

It’s hard to imagine light installations, ambient music and scented pillows ever being turned against us. Then again, we remember the world the Surrealists grew up in, laid waste by a war that had turned its ploughshares into swords. Is it so very outlandish to suggest that tomorrow, we will be weaponising our dreams?

What about vice?

Reading Rat City by Jon Adams & Edmund Ramsden and Dr. Calhoun’s Mousery by Lee Alan Dugatkin for the Spectator

The peculiar career of John Bumpass Calhoun (1917-1995, psychologist, philosopher, economist, mathematician, sociologist, nominated for the Nobel Peace Prize and subject of a glowing article in Good Housekeeping) comes accompanied by more than its fair share of red flags.

Calhoun studied how rodents adapted to different environments and, more specifically, how the density of a population affects an individual’s behaviour.

He collected reams of data, but published little, and rarely in mainstream scientific journals. He courted publicity, inviting journalists to draw, from his studies of rats and mice, apocalyptic conclusions about the future of urban humanity.

Calhoun wasn’t a “maverick” scientist (not an egoist, not a loner, not a shouter-at-clouds). Better to say that he was, well, odd. He had a knack for asking the counter-intuitive question, an eye for the unanticipated result. Charged in 1946 with reducing the rat population of Baltimore, he wondered what would happen to a community if he added more rats. So he did — and rodent numbers fell to 60 per cent of their original level. Who would have guessed?

The general assumption about population, lifted mostly from the 18th-century economist Thomas Malthus, is that species expand to consume whatever resources are available to them, then die off once they exceed the environment’s carrying capacity.

But Malthus himself knew that wasn’t the whole story. He said that there were two checks on population growth: misery and vice. Misery, in its various forms (predation, disease, famine…) has been well studied. But what, Calhoun asked, in a 1962 Scientific American article, of vice? In less loaded language: “what are the effects of the social behaviour of a species on population growth — and of population density on social behaviour?”

Among rodents, a rising population induces stress, and stress reduces the birth-rate. Push the overcrowding too far, though (further than would be likely to happen in nature), and stress starts to trigger all manner of weird and frightening effects. The rodents start to pack together, abandoning all sense of personal space. Violence and homosexuality skyrocket. Females cease to nurture and suckle their young; abandoned, these offspring become food for any passing male. The only way out of this hell is complete voluntary isolation. A generation of “beautiful ones” arises that knows only how to groom itself and avoid social contact. Without sex, the population collapses. The few Methuselahs who remain have no social skills to speak of. They’re not aggressive. They’re not anything. They barely exist.

What do you do with findings like that? Calhoun hardly needed to promote his work; the press came flocking to him. Der Spiegel. Johnny Carson. He achieved his greatest notoriety months before he shared the results of his most devastating experiment. The mice in an enclosure dubbed “Universe 25” were never allowed to get sick or run out of food. Once they reached a certain density, vice wiped them out.

Only publishing, a manufacturing industry run by arts graduates, could contrive to drop two excellent books about Dr Calhoun’s life and work into the same publishing cycle. No one but a reviewer or an obsessive is likely to find room for both on their autumn reading pile.

Historians Edmund Ramsden and Jon Adams have written the better book. Rat City puts Calhoun’s work in a rich historical and political context. Calhoun took a lot of flak for his glib anthropomorphic terminology: he once told a reporter from Japan’s oldest newspaper, Mainichi Shimbun, that the last rats of Universe 25 “represent the human being on the limited space called the earth.” But whether we behave exactly like rats in conditions of overcrowding and/or social isolation is not the point.

The point is that, given the sheer commonality between mammal species, something might happen to humans in like conditions, and it behoves us to find out what that something might be, before we foist any more hopeful urban planning on the proletariat. Calhoun, who got us to think seriously about how we design our cities, is Rat City’s visionary hero, to the point where I started to hear him. For instance, observing some gormless waifs, staring into their smartphones at the bottom of the escalator, I recalled his prediction that “we might one day see the human equivalent” of his mice, pathologically crammed together “in a sort of withdrawal — in which they would behave as if they were not aware of each other.”

Dr Calhoun’s Mousery is the simpler book of the two and, as Lee Dugatkin cheerfully concedes, it owes something to Adams and Ramsden’s years of prolific research. I prefer it. Its narrative is more straightforward, and Dugatkin gives greater weight to Calhoun’s later career.

The divided mouse communities of Universe 34, Calhoun’s last great experiment, had to learn to collaborate to obtain all their resources. As their culture of collaboration developed, their birth rate stabilised, and individuals grew healthier and lived longer.

So here’s a question worthy of the good doctor: did culture evolve to shield us from vice?

A robust sausage sandwich

Reading Alan Moore’s The Great When for the Telegraph

Londoners! There is another city behind your city, or above it, or within it. This other place, known as Long London, belongs to the Great When, a super-real realm that lurks behind your common-or-garden reality. Whenever there are shenanigans over there – caused by Jungian archetypes called Arcana, who jockey for esoteric advantage – it stirs mundane events over here. An artist-magician called Austin Spare puts it this way: “If this London is what they call the Smoke, then that place is the Fire, you follow me?” A bruiser called Jack “Spot” Comer is more forthright: “This other London, that’s the organ grinder, an’ our London’s just the fackin’ monkey.”

Sometimes one of these Arcana even stumbles from Long London into the real. In 1936, the Beauty of Riots – think Eugène Delacroix’s bare-breasted Liberty Leading the People, only 10 feet tall – finds herself picking through the battle of Cable Street. Sometimes a visit is arranged across the divide. In 1949, Harry Lud, “the red-handed soul of crime itself”, comes at Spot’s beckoning to sort out some bother with the gangster Billy Hill.

If Spare, Spot and the rest – in fact, everything I’ve written above – are unfamiliar to you, look the names up. Alan Moore obsessives, of whom there are many, will be used to how his wildest yarns emerge from the mouths of London’s more colourful historical figures. Yes, this is the oldest, cheapest trick in the arsenal of the London novelist – but, as The Great When proves, Moore keeps getting away with it.

The plot: a few years after the Second World War, underachieving Dennis Knuckleyard has been effectively adopted by Coffin Ada, the nastiest second-hand book-dealer in Shoreditch. Having accidentally purchased a book that doesn’t exist – at least, not in this realm – Dennis finds himself caught between worlds. Can he return the book to Long London and its severed, vitrine-dwelling City Heads, and maintain the wall between the realms? Will his adventures make a man of him? Will he win the favours of tart-with-a-heart Grace Shilling?

Moore’s not doing anything new here. Readers of urban fantasy – and I’m among them – have fallen or forced our way along so many Diagon Alleys over the years, have waited so very long for that bus to Viriconium, or Neverwhere, or Un Lun Dun, that it’s a wonder we have an appetite for (ahem) more. In this, the first of a promised series, what does Moore bring to what is by now a familiar itinerary? More Moore for a start, which is to say characters as if by Dickens, and set dressing as if by Iain Sinclair. It’s a heady brew. His esoteric Long London, when Dennis runs into it at last, is rendered in language so unhinged it teeters on the unreadable, so sure is Moore that the reader will be hooked. His maximalist prose isn’t for everyone. Will you join Dennis as he “glories in a robust sausage sandwich – slabs of fresh bread, soft and baked to flaking umber at the edges, soaking up the hot grease of the bangers”? I did not, though I admit that the “glossy chunks” of chip-shop cod “that slid apart like pages in a poorly stapled magazine” were nicely done.

More impressive is the control Moore has over his architecture. His straightforward story harbours a surprising degree of pathos: when the plot around the misplaced book resolves unexpectedly, halfway through, it leaves Dennis with a couple of hundred pages in which to make ordinary human mistakes, chiefly mistaking friends for enemies, and friendliness for sexual attraction. So our uncomplicated young hero grows up as kinked and keeled-over as the rest of us, while the war, long over, continues to unhinge the world, financing its criminal class and normalising its violence.

You can catch a glimpse of where Moore’s new series is going. His London is too complicated to explain simply. Simple reasons have become a thing of the past, “and we shan’t be seeing ’em again.” Magical thinking is not just possible for Moore’s characters; it’s reasonable. It’s inconceivable that the writer of the 1990s comic sensation From Hell won’t find himself wandering through Victorian Whitechapel at some point. Still, I would like to think that The Great When is edging away from Ripper territory into a wider and more generous vision of what London was, is, and may become.

What about the unknown knowns?

Reading Nate Silver’s On the Edge and The Art of Uncertainty by David Spiegelhalter for the Spectator

The Italian actuary Bruno de Finetti, writing in 1931, was explicit: “Probability,” he wrote, “does not exist.”

Probability, it’s true, is simply the measure of an observer’s uncertainty, and in The Art of Uncertainty British statistician David Spiegelhalter explains how his extraordinary and much-derided science has evolved to the point where it is even able to say useful things about why things have turned out the way they have, based purely on present evidence. Spiegelhalter was a member of the Statistical Expert Group of the 2018 UK Infected Blood Inquiry, and you know his book’s a winner the moment he tells you that between 650 and 3,320 people nationwide died from tainted transfusions. By this late point, along with the pity and the horror, you have a pretty good sense of the labour and ingenuity that went into those peculiarly specific, peculiarly wide-spread numbers.

At the heart of Spiegelhalter’s maze, of course, squats Donald Rumsfeld, once pilloried for his convoluted syntax at a 2002 Department of Defense news briefing, and now immortalised for what came out of it: the best ever description of what it’s like to act under conditions of uncertainty. Rumsfeld’s “unknown unknowns” weren’t the last word, however; Slovenian philosopher Slavoj Žižek (it had to be Žižek) pointed out that there are also “unknown knowns” — “all the unconscious beliefs and prejudices that determine how we perceive reality.”

In statistics, something called Cromwell’s Rule cautions us never to bed absolute certainties (probabilities of 0 or 1) into our models. Still, “unknown knowns” fly easily under the radar, usually in the form of natural language. Spiegelhalter tells how, in 1961, John F Kennedy authorised the invasion of the Bay of Pigs, quite unaware of the minatory statistics underpinning the phrase “fair chance” in an intelligence briefing.

From this, you could draw a questionable moral: that the more we quantify the world, the better our decisions will be. Nate Silver — poker player, political pundit and author of 2012’s The Signal and the Noise — finds much to value in this idea. On the Edge, though, is more about the unforeseen consequences that follow.

There is a sprawling social ecosystem out there that Silver dubs “the River”, which includes “everyone from low-stakes poker pros just trying to grind out a living to crypto kings and venture-capital billionaires.” On the Edge is, among many other things, a cracking piece of popular anthropology.

Riverians accept that it is very hard to be certain about anything; they abandon certainty for games of chance; and they end up treating everything as a market to be played.

Remember those chippy, cheeky chancers immortalised in films like 21 (2008: MIT’s Blackjack Team takes on Las Vegas) and Moneyball (2011: a young economist up-ends baseball)?

More than a decade has passed, and they’re not buccaneers any more. Today, says Silver, “the Riverian mindset is coming from inside the house.”

You don’t need to be a David Spiegelhalter to be a Riverian. All you need is the willingness to take bets on very long odds.

Professional gamblers learn when and how to do this, and this is why that subset of gamblers called Silicon Valley venture capitalists are willing to back wilful contrarians like Elon Musk (on a good day) and (on a bad day) Ponzi-scheming crypto-crooks like Sam Bankman-Fried.

Success as a Riverian isn’t guaranteed. As Silver points out, “a lot of the people who play poker for a living would be better off — at least financially — doing something else.” Then again, those who make it in the VC game expect to double their money every four years. And those who find they’ve backed a Google or a SpaceX can find themselves living in a very odd world indeed.

Recently the billionaire set has been taking an interest and investing in “effective altruism”, a hyper-utilitarian dish cooked up by Oxford philosopher Will MacAskill. “EA” promises to multiply the effectiveness of acts of charity by studying their long-term effects — an approach that naturally appeals to minds focused on quantification. Silver describes the current state of the movement as “stuck in the uncanny valley between being abstractly principled and ruthlessly pragmatic, with the sense that you can kind of make it up as you go along”. Who here didn’t see that one coming? Most of the original EA set now spend their time agonising over the apocalyptic potential of artificial intelligence.

The trick to Riverian thinking is to decouple things, in order to measure their value. Rather than say, “The Chick-fil-A CEO’s views on gay marriage have put me off my lunch,” you say, “the CEO’s views are off-putting, but this is a damn fine sandwich — I’ll invest.”

That such pragmatism might occasionally ding your reputation, we’ll take as read. But what happens when you do the opposite, glomming context after context onto every phenomenon in pursuit of some higher truth? Soon everything becomes morally equivalent to everything else and thinking becomes impossible.

Silver mentions a December 2023 congressional hearing in which the tone-deaf presidents of Harvard, Penn and MIT, in their sophomoric efforts to be right about all things all at once all the time, managed to argue their way into anti-Semitism. (It’s on YouTube if you haven’t seen it already. The only thing I can compare it to is how The Fast Show’s unlucky Alf used to totter invariably toward the street’s only open manhole.) No wonder that the left-leaning, non-Riverian establishment in politics and education is becoming, in Silver’s words, “a small island threatened by a rising tide of disapproval.”

We’d be foolish in the extreme to throw in our lot with the Riverians, though: people whose economic model reduces to: Bet long odds on the hobby-horses of contrarian asshats and never mind what gets broken in the process.

If we want a fairer, more equally apportioned world, these books should convince us to spend less time worrying about what people are thinking, and to concern ourselves more with how people are thinking.

We cannot afford to be ridden by unknown knowns.

 

“Fears about technology are fears about capitalism”

Reading How AI Will Change Your Life by Patrick Dixon and AI Snake Oil by Arvind Narayanan and Sayash Kapoor, for the Telegraph

According to Patrick Dixon, Arvind Narayanan and Sayash Kapoor, artificial intelligence will not bring about the end of the world. It isn’t even going to bring about the end of human civilisation. It’ll struggle even to take over our jobs. (If anything, signs point to a decrease in unemployment.)

Am I alone in feeling cheated here? In 2014, Stephen Hawking said we were doomed. A decade later, Elon Musk is saying much the same. Last year, Musk and other CEOs and scientists signed an open letter from the Future of Life Institute, demanding a pause on giant AI experiments.

But why listen to fiery warnings from the tech industry? Of 5,400 large IT projects (for instance, creating a large data warehouse for a bank) recorded by 2012 in a rolling database maintained by McKinsey, nearly half went over budget, and over half under-delivered. In How AI Will Change Your Life, author and business consultant Dixon remarks, “Such consistent failures on such a massive scale would never be tolerated in any other area of business.” Narayanan and Kapoor, both computer scientists, say that academics in this field are no better. “We probably shouldn’t care too much about what AI experts think about artificial general intelligence,” they write. “AI researchers have often spectacularly underestimated the difficulty of achieving AI milestones.”

These two very different books want you to see AI from inside the business. Dixon gives us plenty to think about: AI’s role in surveillance; AI’s role in intellectual freedom and copyright; AI’s role in warfare; AI’s role in human obsolescence – his exhaustive list runs to over two dozen chapters. Each of these debates matters, but we would be wrong to think that they are driven by, or are even about, technology at all. Again and again, they are issues of money: about how production gravitates towards automation to save labour costs; or about how AI tools are more often than not used to achieve imaginary efficiencies at the expense of the poor and the vulnerable. Why go to the trouble of policing poor neighbourhoods if the AI can simply round up the usual suspects? As the science-fiction writer Ted Chiang summed up in June 2023, “Fears about technology are fears about capitalism.”

As both books explain, there are three main flavours of artificial intelligence. Large language models power chatbots, of which GPT-4, Gemini and the like will be most familiar to readers. They are bullshitters, in the sense that they’re trained to produce plausible text, not accurate information, and so fall under philosopher Harry Frankfurt’s definition of bullshit as speech that is intended to persuade without regard for the truth. At the moment they work quite well, but wait a year or two: as the internet fills with AI-generated content, chatbots and their ilk will begin to regurgitate their own pabulum, and the human-facing internet will decouple from truth entirely.

Second, there are AI systems whose superior pattern-matching spots otherwise invisible correlations in large datasets. This ability is handy, going on miraculous, if you’re tackling significant, human problems. According to Dixon, for example, Klick Labs in Canada has developed a test that can diagnose Type 2 diabetes with over 85 per cent accuracy using just a few seconds of the patient’s voice. Such systems have proved less helpful, however, in Chicago. Narayanan and Kapoor report how, lured by promises of instant alerts to gun violence, the city poured nearly $49 million into ShotSpotter, a system whose effectiveness has been questioned after police fatally shot a 13-year-old boy in 2021.

Last of the three types is predictive AI: the least discussed, least successful, and – in the hands of the authors of AI Snake Oil – by some way the most interesting. So far, we’ve encountered problems with AI’s proper working that are fixable, at least in principle. With bigger, better datasets – this is the promise – we can train AI to do better. Predictive AI systems are different. These are the ones that promise to find you the best new hires, flag students for dismissal before they start to flounder, and identify criminals before they commit criminal acts.

They won’t, however, because they can’t. Drawing broad conclusions about general populations is often the stuff of social science, and social science datasets tend to be small. But were you to have a big dataset about a group of people, would AI’s ability to say things about the group let it predict the behaviour of one of its individuals? The short answer is no. Individuals are chaotic in the same way as earthquakes are. It doesn’t matter how much you know about earthquakes; the one thing you’ll never know is where and when the next one will hit.

How AI Will Change Your Life is not so much a book as a digest of bullet points for a PowerPoint presentation. Business types will enjoy Dixon’s meticulous lists and his willingness to argue both sides against the middle. If you need to acquire instant AI mastery in time for your next board meeting, Dixon’s your man. Being a dilettante, I will stick with Narayanan and Kapoor, if only for this one-liner, which neatly captures our confused enthusiasm for little black boxes that promise the world. “It is,” they say, “as if everyone in the world has been given the equivalent of a free buzzsaw.”

 

 

Not even wrong

Reading Yuval Noah Harari’s Nexus for the Telegraph

In his memoirs, the German-British physicist Rudolf Peierls recalls the sighing response his colleague Wolfgang Pauli once gave to a scientific paper: “It is not even wrong.”

Some ideas are so incomplete, or so vague, that they can’t even be judged. Yuval Noah Harari’s books are notoriously full of such ideas. But then, given what Harari is trying to do, this may not matter very much.

Take this latest offering: a “brief history” that still finds room for viruses and Neanderthals, the Talmud and Elon Musk’s Neuralink and the Thirty Years’ War. Has Harari found a single rubric under which to combine all human wisdom and not a little of its folly? Many a pub bore has entertained the same conceit. And Harari is tireless: “To appreciate the political ramifications of the mind–body problem,” he writes, “let’s briefly revisit the history of Christianity.” He is a writer who’s never off-topic, but only because his topic is everything.

Root your criticism of Harari in this, and you’ve missed the point, which is that he’s writing this way on purpose. His single goal is to give you a taste of the links between things, without worrying too much about the things themselves. Any reader old enough to remember James Burke’s idiosyncratic BBC series Connections will recognise the formula, and know how much sheer joy and exhilaration it can bring to an audience that isn’t otherwise spending every waking hour grazing the “smart thinking” shelf at Waterstone’s.

Well-read people don’t need Harari.

Nexus’s argument goes like this: civilisations are (among other things) information networks. Totalitarian states centralise their information, which grows stale as a consequence. Democracies distribute their information, with checks and balances to keep the information fresh.

Harari’s key point here is that in neither case does the information have to be true. A great deal of it is not true. At best it’s intersubjectively true (Santa Claus, human rights and money are real by consensus: they have no basis in the material world). Quite a lot of our information is fiction, and a fraction of that fiction is downright malicious falsehood.

It doesn’t matter to the network, which uses that information more or less agnostically, to establish order. Nor is this necessarily a problem, though an order based on truth is likely to be a lot more resilient and pleasant to live under than an order based on cultish blather.

This typology gives Harari the chance to wax lyrical over various social and cultural arrangements, historical and contemporary. Marxism and populism both get short shrift, in passages that are memorable, pithy, and, dare I say it, wise.

In the second half of the book, Harari invites us to stare like rabbits into the lights of the on-coming AI juggernaut. Artificial intelligence changes everything, Harari says, because just as humans create intersubjective realities, computers create inter-computer realities. Pokémon Go is an example of an inter-computer reality. So — rather more concerningly — are the money markets.

Humans disagree with each other all the time, and we’ve had millennia to practise thinking our way into other heads. The problem is that computers don’t have any heads. Their intelligence is quite unlike our own. We don’t know what They’re thinking because, by any reasonable measure, “thinking” does not describe what They are doing.

Even this might not be a problem, if only They would stop pretending to be human. Harari cites a 2022 study showing that the 5 per cent of Twitter users that are bots are generating between 20 and 30 per cent of the site’s content.

Harari quotes Daniel Dennett’s blindingly obvious point that, in a society where information is the new currency, we should ban fake humans the way we once banned fake coins.

And that is that, aside from the shouting — and there’s a fair bit of that in the last pages, futurology being a sinecure for people who are not even wrong.

Harari’s iconoclastic intellectual reputation is wholly undeserved, not because he does a bad job, but because he does such a superb job of being the opposite of an iconoclast. Harari sticks the world together in a gleaming shape that inspires and excites. If it holds only for as long as it takes to read the book, still, dazzled readers should feel themselves well served.

Just which bits of the world feel human to you?

Reading Animals, Robots, Gods by Webb Keane for New Scientist

No society we know of ever lived without morals. Roughly the same ethical ideas arise, again and again, in the most diverse societies. Where do these ideas of right and wrong come from? Might there be one ideal way to live?

Michigan-based anthropologist Webb Keane argues that morality does not arise from universal principles, but from the human imagination. Moral ideas are sparked in the friction between objectivity, when we think about the world as if it were a story, and subjectivity, in which we’re in some sort of conversation with the world.

A classic trolley problem elucidates Keane’s point. If you saw an out-of-control trolley (tram car) hurtling towards five people, and could pull a switch that sent the trolley down a different track, killing only one innocent bystander, you would more than likely choose to pull the lever. If, on the other hand, you could save five people by pushing an innocent bystander into the path of the trolley (using him, in Keane’s delicious phrase, “as an ad hoc trolley brake”), you’d more than likely choose not to interfere. The difference in your reaction turns on whether you are looking at the situation objectively, at some mechanical remove, or whether you subjectively imagine yourself in the thick of the action.

What moral attitude we adopt to situations depends on how socially charged we think they are. I’d happily kick a stone down the road; I’d never kick a dog. Where, though, are the boundaries of this social world? If you can have a social relationship with your pet dog, can you have one with your decorticate child? Your cancer tumour? Your god?

Keane says that it’s only by asking such questions that we acquire morals in the first place. And we are constantly trying to tell the difference between the social and the non-social, testing connections and experimenting with boundaries, because the question “just what is a human being, anyway?” lies at the heart of all morality.

Readers of Animals, Robots, Gods will encounter a wide range of non-humans, from sacrificial horses to chatbots, with whom they might conceivably establish a social relationship. Frankly, it’s too much content for so short a book. Readers interested in the ethics of artificial intelligence, for instance, won’t find much new insight here. On the other hand, I found Keane’s distillation of fieldwork into the ethics of hunting and animal sacrifice both gripping and provoking.

We also meet humans enhanced and maintained by technology. Keane reports a study by anthropologist Cheryl Mattingly in which devout Los Angeles-based Christians Andrew and Darlene refuse to turn off the machines keeping their brain-dead daughter alive. The doctors believe that, in the effort to save her, their science has at last cyborgised the girl to the point at which she is no longer a person. The parents believe that, medically maintained or not, cognizant or not, their child’s being alive is significant, and sufficient to make her a person. This is hardly some simplistic “battle between religion and science”. Rather, it’s an argument about where we set the boundaries within which we apply moral imperatives like the one telling us not to kill.

Morals don’t just guide lived experience: they arise from lived experience. There can be no trolley problems without trolleys. This, Keane argues, is why morality and ethics are best approached from an anthropological perspective. “We cannot make sense of ethics, or expect them of others, without understanding what makes them inhabitable, possible ways to live,” he writes. “And we should neither expect, nor, I think, hope that the diversity of ways of life will somehow converge onto one ‘best’ way of living.”

We communicate best with strangers when we accept them as moral beings. A western animal rights activist would never hunt an animal. A Chewong hunter from Malaysia wouldn’t dream of laughing at one. And if these strangers really want to get the measure of each other, they should each ask the same, devastatingly simple question:

Just which bits of the world feel human to you?

Life trying to understand itself

Reading Life As No One Knows It: The Physics of Life’s Emergence by Sara Imari Walker and The Secret Life of the Universe by Nathalie A Cabrol, for the Telegraph

How likely is it that we’re not alone in the universe? The idea goes in and out of fashion. In 1600 the philosopher Giordano Bruno was burned at the stake for this and other heterodox beliefs. Exactly 300 years later the French Académie des sciences announced a prize for establishing communication with life anywhere but on Earth or Mars — since people already assumed that Martians did exist.

The problem — and it’s the speck of grit around which these two wildly different books accrete — is that we’re the only life we know of. “We are both the observer and the observation,” says Nathalie Cabrol, chief scientist at the SETI Institute in California and author of The Secret Life of the Universe, already a bestseller in her native France: “we are life trying to understand itself and its origin.”

Cabrol reckons this may be only a temporary problem, and there are two strings to her optimistic argument.

First, the universe seems a lot more amenable to life than it used to. Not long ago, and well within living memory, we didn’t know whether stars other than our sun had planets of their own, never mind planets capable of sustaining life. The Kepler Space Telescope, launched in March 2009, changed all that. Among the wonders we’ve detected since — planets where it rains molten iron, or molten glass, or diamonds, or metals, or liquid rubies or sapphires — are a number of rocky planets, sitting in the habitable zones of their stars, and quite capable of hosting oceans on their surface. Well over half of all sun-like stars boast such planets. We haven’t even begun to quantify the possibility of life around other kinds of star. Unassuming, plentiful and very long-lived M-dwarf stars might be even more life-friendly.

Then there are the ice-covered oceans of Jupiter’s moon Europa, and Saturn’s moon Enceladus, and the hydrocarbon lakes and oceans of Saturn’s Titan, and Pluto’s suggestive ice volcanoes, and — well, read Cabrol if you want a vivid, fiercely intelligent tour of what may turn out to be our teeming, life-filled solar system.

The second string to Cabrol’s argument is less obvious, but more winning. We talk about life on Earth as if it’s a single family of things, with one point of origin. But it isn’t. Cabrol has spent her career hunting down extremophiles (ask her about volcano diving in the Andes) and has found life “everywhere we looked, from the highest mountain to the deepest abyss, in the most acidic or basic environments, the hottest and coldest regions, in places devoid of oxygen, within rocks — sometimes under kilometers of them — within salts, in arid deserts, exposed to radiation or under pressure”.

Several of these extremophiles would have no problem colonising Mars, and it’s quite possible that a more-Earth-like Mars once seeded Earth with life.

Our hunt for Earth-like life — “life like ours” — always had a nasty circularity about it. By searching for an exact mirror of ourselves, what other possibilities were we missing? In The Secret Life Cabrol argues that we now know enough about life to hunt for radically strange lifeforms, in wildly exotic environments.

Sara Imari Walker agrees. In Life As No One Knows It, the American theoretical physicist does more than ask how strange life may get; she wonders whether we have any handle at all on what life actually is. All these words of ours — living, lifelike, animate, inanimate — may turn out to be hopelessly parochial as we attempt to conceptualise the possibilities for complexity and purpose in the universe. (Cabrol makes a similar point: “Defining Life by describing it,” she fears, “is the same as saying that we can define the atmosphere by describing a bird flying in the sky.”)

Walker, a physicist, is painfully aware that among the phenomena that current physics can’t explain are physicists — and, indeed, life in general. (Physics, which purports to uncover an underlying order to reality, is really a sort of hyper-intellectual game of whack-a-mole in which, to explain one phenomenon, you quite often have to abandon your old understanding of another.) Life processes don’t contradict physics. But physics can’t explain them, either. It can’t distinguish between, say, a hurricane and the city of New York, seeing both as examples of “states of organisation maintained far from equilibrium”.

But if physics can’t see the difference, physicists certainly can, and Walker is a fiercely articulate member of that generation of scientists and philosophers — physicists David Deutsch and Chiara Marletto and the chemist Leroy Cronin are others — who are out to “choose life”, transforming physics in the light of evolution.

We’re used to thinking that living things are the product of selection. Walker wants us to imagine that every object in the universe, whether living or not, is the product of selection. She wants us to think of the evolutionary history of things as a property, as fundamental to objects as charge and mass are to atoms.

Walker’s defence of her “assembly theory” is a virtuoso intellectual performance: she’s like the young Daniel Dennett, full of wit, mischief and bursts of insolent brevity which for newcomers to this territory are like oases in the desert.

But to drag this back to where we started: the search for extraterrestrial life — did you know that there isn’t enough stuff in the universe to make all the small molecules that could perform a function in our biology? Even before life gets going, the chemistry from which it is built has to have been massively selected — and we know blind chance isn’t responsible, because we already know what undifferentiated masses of small organic molecules look like; we call this stuff tar.

In short, Walker shows us that what we call “life” is but an infinitesimal fraction of all the kinds of life which may arise out of any number of wholly unfamiliar chemistries.

“When we can run origin-of-life experiments at scale, they will allow us to predict how much variation we should expect in different geochemical environments,” Walker writes. So once again, we have to wait, even more piqued and anxious than before, to meet aliens even stranger than we have imagined or maybe can imagine.

Cabrol, in her own book, makes life even more excruciating for those of us who just want to shake hands with E.T.: imagine, she says, “a shadow biome” of living things so strange, they could be all around us here, on Earth — and we would never know.

Benignant?

Reading The Watermark by Sam Mills for the Times

“Every time I encounter someone,” celebrity novelist Augustus Fate reveals, near the start of Sam Mills’s new novel The Watermark, “I feel a nagging urge to put them in one of my books.”

He speaks nothing less than the literal truth. Journalist and music-industry type Jaime Lancia and his almost-girlfriend, a suicidally inclined artist called Rachel Levy, have both succumbed to Fate’s drugged tea, and while their barely-alive bodies are wasting away in the attic of his Welsh cottage, their spirits are being consigned to a curious half-life as fictional characters. It takes a while for them to awake to their plight, trapped in Thomas Turridge, Fate’s unfinished (and probably unfinishable) Victorianate new novel. The malignant appetites of this paperback Prospero have swallowed rival novelists, too, making Thomas Turridge only the first of several ur-fictional rabbit holes down which Jaime and Rachel must tumble.

Over the not inconsiderable span of The Watermark, we find our star-crossed lovers evading asylum orders in Victorian Oxford, resisting the blandishments of a fictional present-day Manchester, surviving spiritual extinction in a pre-Soviet hell-hole evocatively dubbed “Carpathia”, and coming domestically unstuck in a care-robot-infested near-future London.

Meta-fictions are having a moment. The other day I saw Bertrand Bonello’s new science fiction film The Beast, which has Léa Seydoux and George MacKay playing multiple versions of themselves in a tale that spans generations and which ends, yes, in a care-robot-infested future. Perhaps this coincidence is no more than a sign of the coming-to-maturity of a generation who (finally!) understand science fiction.

In 1957 Philip K. Dick wrote a short, sweet novel called Eye in the Sky, which drove its cast through eight different subjective realities, each one “ruled” by a different character. While Mills’s The Watermark is no mere homage to that or any other book, it’s obvious she knows how to tap, here and there, into Dick’s madcap energy, in pursuit of her own game.

The Watermark is told variously from Jaime’s and Rachel’s points of view. In some worlds, Jaime wakes up to their plight and must disenchant Rachel. In other worlds, Rachel is the knower, Jaime the amnesiac. Being fictional characters as well as real-life kidnap victims, they must constantly contend with the spurious backstories each fiction lumbers them with. These aren’t always easy to throw over. In one fiction, Jaime and Rachel have a son. Are they really going to abandon him, just so they can save their real lives?

Jaime, over the course of his many transmogrifications, is inclined to fight for his freedom. Rachel is inclined to bed down in fictional worlds that, while existentially unfree, are an improvement on real life — from which she’s already tried to escape by suicide.

The point of all this is to show how we hedge our lives around with stories, not because they are comforting (although they often are) but because stories are necessary: without them, we wouldn’t understand anything about ourselves or each other. Stories are thinking. By far the strongest fictional environment here is 1920s-era Carpathia. Here, a totalitarian regime grinds the star-crossed couple’s necessary fictions to dust, until at last they take psychic refuge in the bodies of wolves and birds.

The Watermark never quite coheres. It takes a conceit best suited to a 1950s-era science-fiction novelette (will our heroes make it back to the real world?), couples it to a psychological thriller (what’s up with Rachel?), and runs this curious algorithm through the fictive mill not once but five times, by which time the reader may well have had a surfeit of “variations on a theme”. Rightly, for a novel of this scope and ambition, Mills serves up a number of false endings on the way to her denouement, and the one that rings most psychologically true is also the most bathetic: “We were supposed to be having our grand love story, married and happy ever after,” Rachel observes, from the perspective of a fictional year 2049, “but we ended up like every other screwed-up middle-aged couple.”

It would be easy to write off The Watermark as a literary trifle. But I like trifle, and I especially appreciate how Mills’s protagonists treat their absurd bind with absolute seriousness. Farce on the outside, tragedy within: this book is full of horrid laughter.

But Mills is not a natural pasticheur, and unfortunately it’s in her opening story, set in Oxford in 1861, that her ventriloquism comes badly unstuck. A young woman “in possession of chestnut hair”? A vicar who “tugs at his ebullient mutton-chops, before resuming his impassioned tirade”? On page 49, the word “benignant”? This is less pastiche, more tin-eared tosh.

Against this serious failing, what defences can we muster? Quite a few. A pair of likeable protagonists who stand up surprisingly well to their repeated eviscerations. A plot that takes storytelling seriously, and would rather serve the reader’s appetites than sneer at them. Last but not least, some excellent incidental invention: to wit, a long-imprisoned writer’s idea of what the 1980s must look like (“They will drink too much ale and be in possession of magical machines”) and, elsewhere, a mother’s choice of bedtime reading material (“The Humanist Book of Classic Fairy Tales, retold by minor, marginalised characters”).

But it’s as Kurt Vonnegut said: “If you open a window and make love to the world, so to speak, your story will get pneumonia.” To put it less kindly: nothing kills the novel faster than aspiration. The Watermark, which wanted to be a very big book about everything, becomes, in the end, something else: a long, involved, self-alienating exploration of itself.