Where millipedes grow more than six feet long

Reading Riley Black’s When the Earth Was Green for New Scientist, 26 February 2025

Plants are boring. Their behaviours are invisible to the naked eye, they operate on timescales our imaginations cannot entertain (however much we strain them), and they run roughshod over familiar categories of self, other and community.

Wandering among (or is it through?) a 14,000-year-old aspen clone (or should that be “a stand of aspen trees”?), palaeontologist Riley Black wonders, “how many living things have alighted on, chewed up, dwelled within, pushed over, and otherwise had a brush with a tree so enduring it probably understands the nature of time better than I ever will?”

When the Earth Was Green is a paean to plants. It’s a series of vignettes, each separated from its neighbours by gaps of millions, tens of millions, sometimes hundreds of millions of years. It’s an account of how vegetable and animal life co-evolved. It’s not as immediately startling as Black’s last book, 2022’s The Last Days of the Dinosaurs, but it’s a worthy successor: as I wrote of Last Days, “this is palaeontology written with the immediacy of natural history”.

If you winced just now at the twee idea of a tree “understanding time”, you may want to hurry past Black’s last chapter — a virtue-signalling hymnal to the queerness of trees. This crabbed reviewer comes across many such passages, and reckons they’re getting increasingly formulaic. Black, seemingly unaware of the irony, pokes gentle fun at an earlier rhetoric that imagined, say, tideline plants “colonising” and “invading” the land. Maybe all writers who attempt to engage with plants suffer this fate: the rhetorical tools they stretch for will date far faster than their science.

Black excels at conveying life’s precarity. Life does not “recover” or “regenerate” after extinction events. It reinvents itself. Early on — 425 million years ago, to be exact — we find life flourishing in strange lands, under skies so short of oxygen that fires can only smoulder and dead plants cannot decompose. When oxygen levels rise, existing insect species grow gigantic in a desperate (and, ultimately, losing) battle to elude its toxic effects. When an asteroid brings the Cretaceous Period to a fiery end, 66 million years ago, we find surviving plant species innovating unexpected relationships with their surviving pollinators. 15,000 years ago the planet grew so verdant that some plant species could afford to abandon photosynthesis entirely, and simply parasitise their neighbours.

Adaptation is a two-edged sword in such a changeable world. It allows you to take full advantage of today’s ecosystems, but how will you cope with tomorrow’s? Remaining unspecialised has allowed the Ginkgo tree to survive the world’s worst catastrophe and persist for millions and millions of years.

Black allows her imagination full rein. Wandering through a dense, warm, humid, 300-million-year-old forest in Ohio, where “millipedes grow more than six feet long and alligator-size amphibians silently watch the shoreline for unwary insects,” the reader may wonder where the science stops and the speculation begins. Black’s extensive endnotes explain the limits of our current knowledge and the logic behind her rare fancies. These passages are integral to the text and include some of her most insightful writing.

Above all, this is a book about how animals and plants shape each other. When animals large enough to knock over trees disappeared, forests grew more dense, with a continuous overstory that gave even large animals a third dimension to explore. Thick forests forced surviving mammals and surviving dinosaurs into novel shapes and, even more important, novel behaviours. Both classes learned to spend more time with their young. And, if we’re prepared to cherry-pick our mammalian examples, we can just about say that both learned to fly.

When the Earth Was Green may be too cutesy for some. The sight of a couple of sabercats rolling about in a patch of catnip will either enchant you or, well, it won’t. And I still think plants are boring. I’d happily pulp the lot of them to make books as fascinating as this one.

There will never be an Iris

Watching Companion, directed by Drew Hancock, for New Scientist, 19 February 2025

Iris (Sophie Thatcher) is not at all confident of her reception at Sergey’s house in the country. Sergey (Rupert Friend, eating the screen as usual) is leery; his wife Kat is unwelcoming. (Later she admits it’s not Iris she dislikes, it’s “the idea” of her; Iris makes her feel redundant.)

Iris’s boyfriend Josh (Jack Quaid) is patient and encouraging, but in the end even he finds Iris’s shyness and clinginess hard to bear. “Go to sleep, Iris,” he says, and Iris’s eyes roll up inside her head as she shuts down.

Maybe Josh shouldn’t have set her intelligence at 40 per cent. At that level, Iris makes a faithful bedmate but not much else. But Josh hasn’t purchased Iris for company. He’s bought her so as to jailbreak her firmware, and use her for dark ends of his own.

Companion, a romantic horror-comedy and Drew Hancock’s debut feature, neatly (if predictably) alternates between two classic approaches to robots. Some scenes, with a nod to the Terminator franchise, scare us with what robots might do to us. Other scenes horrify us with what we might do to our robots. Josh’s fellow guest Eli (Harvey Guillén) actually manages to fall in love with his male robot companion, but he’s a bit of an outlier in a movie that’s out to deconstruct (sharply at first, but then with dismaying ham-fistedness) men’s objectification of women.

Are Iris’s struggles to be free of her owner-boyfriend Josh a stirring feminist fable, or a tiresome bit of man-bashing? Well, your personal experience will probably dictate which side of this fence you fall on. There’s not a lot of mileage to be had in me saying the abuse Iris suffers at Josh’s hands in the second half of the movie is tasteless — not in a world that has men like Dominique Pelicot in it. I’d feel more comfortable, though, if the script hadn’t had its own intelligence halved, just as it makes this turn towards the issue of domestic violence. Quaid’s a decent comic actor who’s more than capable of letting the smile drop and going dead behind the eyes when required. Companion, though, requires him to turn on a penny, from doting boyfriend to snivelling incel, and without much justification from an increasingly generic plot. He does what he can, while Sophie Thatcher, as Iris, brings a vulnerability to her role that, in what’s ostensibly a comedy, is occasionally shocking.

Peeling away from the sexual politics of the piece, I found myself thinking far too much about plot logic. In the first half, one little illegal tweak to Iris’s firmware sets off a cascade of farcical and bloody accidents that by-the-by ask us worthwhile questions about what we actually want robots for. Surrounded by dull, bland, easy-going robot companions, will we come to expect less of other people? Assisted, cared for, and seduced by machines, will we lower our expectations around concepts like “conversation”, “care”, “companionship” and “love”?

Alas, the robot lore built up in the first half of the movie is more or less jettisoned in the second: anyone who wants to play “plot-hole bingo” had better bring a spare card.

It’s a pity. There was much to play for here, and over eighty years of entertaining fiction to draw from (Isaac Asimov’s “Liar!” was published in 1942). But perhaps I’m taking things too literally.

After all, there will never be an Iris.

The robot as we commonly conceive of it — the do-everything “omnibot” — is impossible. And I don’t mean technically difficult. I mean inconceivable. Anything with the cognitive ability to tackle multiple variable tasks will be able to find something better to do — at which point, incidentally, it will cease to be a drudge and will have become a person. Iris was very clearly a person from the first scene, which makes the film’s robot technology a non-starter from the beginning. This isn’t some dystopia that’s embraced slavery.

Whichever way you look at it — as a film about robots, or as a film about people — Companion seems determined to chase after straw men.

“This is the story of Donald Trump’s life”

Reading The Sirens’ Call by Chris Hayes for the Telegraph, 15 February 2025

It seems to me, and might seem to you, as though headlines have always ticked across the bottom of our TV screens during news broadcasts. Strange, how quickly technological innovations lose their novelty. In fact, this one is only 23-and-a-half years old: the “ticker” was reserved for sports scores until the day in 2001 when two hijacked passenger jets were flown into New York’s World Trade Center. Fox News gave its ticker over to the news service that day, and MSNBC and CNN quickly followed. Cable channels, you might say, quickly and seamlessly went from addressing their viewers’ anxieties to stoking them.

That’s Chris Hayes’s view, and he should know: the political commentator and TV news anchor hosts a weekday current affairs show on MSNBC. The Sirens’ Call, his new book, is first of all an insider’s take on the persuasion game. Hayes is a hard worker, and a bit of a showman. When he started, he imagined his regular TV appearances would bring him some acclaim. “And so,” he writes, “the Star seeks recognition and gets, instead, attention.” This experience is now common. Thanks to the black mirrors in our pockets, we’re now all stars of our own reality TV show.

To explain how he and the rest of smartphone-wielding humanity ended up in this peculiar pickle – “akin to life in a failed state, a society that had some governing regime that has disintegrated and fallen into a kind of attentional warlordism” – Hayes sketches out three kinds of attention. There’s the conscious attention we bring to something: to a book, say, or a film, or a painting. Then there’s the involuntary attention we pay to environmental novelties (a passing wasp, a sudden breeze, an unexpected puddle). The more vigilant we are, the more easily even minor stimuli can snare our attention.

This second kind is the governing principle of advertising, an industry that over the last two decades has metastasised into something vast and insidious: call it “the attention economy”. Everything is an advertisement now, especially the news. The ticker and its evolved cousins, the infinitely scrolling feed (think X) and the autoplaying video-stream (think TikTok), exist to maintain your hypervigilance. You can, like Hayes, write a book so engaging that it earns the user’s conscious focus over several hours. If you want to make money, though – with due respect to Scribe’s sales department – you’re better off snaring the user’s involuntary attention over and over again with a procession of conspiracy theories and cat videos.

The third form of attention in Hayes’s typology is social attention: that capacity for involuntary attention that we reserve for events relating specifically to ourselves. Psychologists dub this the “cocktail-party effect”, from our unerring ability to catch the sound of our own name uttered from across a crowded and noisy room. Social attention is extraordinarily pregnant with meaning. Indeed, without a steady diet of social attention, we suffer both mentally and physically. Why do we post anything on social media? Because we want others to see us. “But,” says Hayes, “there’s a catch… we want to be recognised as human by another human, as a subject by another subject, in order for it to truly be recognition. But We Who Post can never quite achieve that.”

In feeding ourselves with the social attention of strangers, we have been creating synthetic versions of our most fundamental desire, and perfecting machines for the manufacture of empty calories. “This is the story of Donald Trump’s life,” Hayes explains, by way of example: “wanting recognition, instead getting attention, and then becoming addicted to attention itself, because he can’t quite understand the difference, even though deep in his psyche there’s a howling vortex that fame can never fill.” Elon Musk gets even harsher treatment. “What does the world’s richest man want that he cannot have?” Hayes wonders. “What will he pay the biggest premium for? He can buy whatever he desires. There is no luxury past his grasp.” The answer, as Musk’s financially disastrous purchase of Twitter demonstrates all too clearly, and “to a pathological degree, with an unsteady obsessiveness that’s thrown his fortune into question, is recognition. He wants to be recognised, to be seen in a deep and human sense. Musk spent $44 billion to buy himself what poor pathetic Willy Loman [in Arthur Miller’s play Death of a Salesman] couldn’t have. Yet it can’t be purchased at any sum.”

We’re not short of books about how our digital helpmates are ushering in the End of Days. German psychologist Gerd Gigerenzer’s How to Stay Smart in a Smart World (2022) gets under the hood of systems that ape human wisdom just well enough to disarm us, but not nearly well enough to deliver happiness or social justice. The US social psychologist Jonathan Haidt took some flak for over-egging his arguments in The Anxious Generation (2024), but the studies he cites are solid enough and their statistics amount to a litany of depression, self-harm and suicide among young (and predominantly female) users of social media. In Unwired (2023), Gaia Bernstein, a law professor at Seton Hall University in New Jersey, explains how we can (and should) sue GAMA (Google, Amazon, Meta, Apple) for our children’s lost childhood.

In a crowded field, Hayes singles out Johann Hari’s 2022 book Stolen Focus for praise, though this doesn’t reflect well on Hayes himself, whose solutions to our digital predicament are weak beer compared to Hari’s. Hari, like Gigerenzer and Bernstein, had bold ideas about civil resistance. He used his final pages to construct a bare-bones social protest movement.

Hayes, by contrast, “fervently hopes” that the markets will somehow self-correct, so that newspapers, in particular, will win back their market share, ushering in an analogue, pre-attention-age means of directing attention in place of the current attention-age version. “I think (and fervently hope) we will see increasing growth in businesses, technologies, and models of consumption that seek to evade or upend the punishing and exhausting reality of the endless attention commodification we’re living through,” Hayes says. But what evidence has he that such a surprising reversal in our cultural fortunes is imminent? The spread of farmers’ markets in US cities and the resurgence of vinyl in record stores. I’d love to believe him, but if I were an investor I’d show him the door.

With so many other writers making analogous points with a near-revolutionary force, The Sirens’ Call says more about Hayes than it does about our crisis. He’s the very picture of an intelligent, engaged liberal, and I came away admiring him. I also worried that history will be no kinder to his type than it was to the Russian liberals of 1917.

 

Anything but a safe bet

Reading The Gambling Animal: Humanity’s evolutionary winning streak—and how we risk it all by Glenn Harrison and Don Ross. For New Scientist, 29 January 2025

Insights into animal evolution used to come from studying a creature’s evolutionary relationships to its closest relatives. To lampoon the idea slightly: we once saw human beings as a kind of chimp.

Our perspectives have widened: looking across entire ecosystems, we begin to see what drives animals who share the same environment toward similar survival solutions. This is convergent evolution — the process by which, say, if you’re a vertebrate living in an aqueous medium, you’re almost certainly going to end up looking like a fish.

Economists Glenn Harrison and Don Ross look at this process from an even further remove: they study evolution in terms of risks to a species’ survival, and trace the ways animals evolve to mitigate those risks. From this distance, it makes more sense to talk about communities and societies than about individuals.

We used to understand social behaviour as the expression of intelligence, and that intelligence was rather simplistically conceived. Social animals thought at least a little bit “like us”. Of course this was never more than hand-waving in the absence of good data. Now Harrison and Ross arrive with good news from their research station amid the grasslands of South Africa: they’ve worked out how elephants think, why they never forget (the old saw is true), and why Pleistocene elephants and humans both acquired such huge and peculiar brains. Their encephalisation suggests they co-evolved a neurological solution to the climate’s growing unpredictability. Faced with a landscape that was rapidly drying out, they both learned how to gamble on the likely location of future resources.

But while humans developed an overgrown frontal cortex, and learned to imagine, elephants overgrew their cerebellum, and learned to remember. For most of evolutionary history, the elephants were more successful than the hominins. Only recently has our borderline-delusional thinking allowed us to outcompete the once ubiquitous elephant.

Harrison and Ross are out to write a dense, complex, closely argued exposition of their risk-and-reward experiments with humans and elephants, and to discuss the evolutionary implications of this work. They are not writing a work of literature. It may take a chapter or two for the casual reader to settle to their meticulous style. Treats lie in store for those who stay patient. Not the least of them is a mischievously conceived “science fiction”, laying out exactly what elephant scientists in some wildly alternate Earth might make of those desperately challenged and almost-extinct humans, struggling out there in the veldt. The point is not merely to have fun (although the authors’ intellectual exuberance is clear); the authors are out to describe the workings of a complex but fundamentally non-human intelligence: a mind that weighs probabilities far more easily than it dreams up might-bes and nice-to-haves.

How does a mind that can’t remember more than seven numbers for more than five minutes still arrive at a decent scientific understanding of the world? The authors cheerfully admit that, having worked for so long with elephants, they find humans ever more baffling.

Tracing the way human societies evolved to manage risk, from the savannah to Wall Street, the authors note that while human individuals are mildly risk-averse, they innovate behavioural norms — and from those norms, institutions — that collectivise risk with astonishing effectiveness. The (possibly terminal) flowering of this ability may be the concept of limited liability, pioneered in New York State in 1811, which has turbocharged the species’ runaway growth “across multiple dimensions, particularly of population and per-capita wealth.” However much you and I might fear the future, the institutions we have built are free to take the most horrible chances — not least, in recent decades, with the climate.

Human-style thinking is an unbelievably high-risk strategy that has paid off only because humans have enjoyed a quite incredible evolutionary winning streak. But past performance is no guarantee of future returns, and the authors are far from optimistic about our prospects: “The history of humans,” they suggest, “is not a record of safe bets.”

A burgeoning technology you wouldn’t be seen dead with

For the Telegraph on 26 January 2025, inspired by Hyper Functional, Ultra Healthy at Somerset House, London

Long-distance relationships are hard to do, but my goodness they’re fun: all that flitting about between mutually inconvenient cities, Muscat to Odessa, Dubai to Istanbul…
A good 90 per cent of the time, though, we were alone together — witness the huge message chain preserved on my smartphone.

The thing about the WhatsApp messaging service is that it’s happy by design, beautifully geared to meme-sharing and goofing-off. Even if you’re not in the mood, you’re only ever a couple of clicks away from sharing an exploding unicorn head or a river of balloons or a video of someone’s pet cat nailing middle-C.

As I cast a bleak eye over our last messages, I see that my girlfriend and I weren’t really spending time together at all; we were just toying with the app.

New technological applications are even now shaping the future of sex, intimacy, friendship and desire. This, anyway, is the hypothesis underpinning a series of talks and screenings starting soon at Somerset House Studios in London. “Hyper Functional, Ultra Healthy” is the programme’s umbrella title, the strong implication being that technology will, at best, save us from our less-healthy impulses; while at worst it will persuade us to sacrifice our humanity on the altar of productivity.

I think the future could be altogether more wild and enjoyable. I think intimacy technologies of various sorts are going to be good for us sometimes, and a lot of fun in any case — just so long as we get over our angst-ridden, future-shocked selves and embrace — literally and figuratively — what we have made.

In Spike Jonze’s 2013 romantic comedy Her, Theodore Twombly (Joaquin Phoenix) falls in love with Samantha (Scarlett Johansson), an artificially intelligent operating system. Because Samantha is at least as conscious as Theodore, the film is a rather charming red herring. The film we needed, in the year sales of smartphones surpassed feature-phone sales for the first time, was one in which Theodore falls in love with an entry-level smartphone assistant like Alexa, or Siri — a being that is patently not conscious, though it puts on a good show.

That really would have got under our skin.

We want our lovers to really love us. But what if they could keep us just as happy by behaving as if they loved us? Then we wouldn’t even have to build better and better technology to satisfy our needs and desires; we could just lower our expectations of what it is to be human.
If we’re so easily debased, there’s not a lot left to say: only that we deserved our fate. But why should things turn out so badly? I reckon we could learn to live quite happily in a world full of non-human agents while being, like Red Riding Hood, on constant guard against those who try to pass themselves off as “one of us”.

Between here and there lie three obstacles.

First, we’ll have to accept that we can and should seek solace from non-human agents. If books and plays have a thing or two to tell us about the world and how to live in it, then why not GPT-5 or Gemini?

In 2019, an international survey of psychiatrists (which sounds like the start of a joke, but never mind) found that half of them believed AI would significantly change their profession.

That half was right. The NHS is evaluating the use of conversational agents in talking to users about their mental health. Systems like Leora, spun out of the Australian disability care sector to provide support for mild symptoms of anxiety and depression, have gone a long way to prove the concept. Other systems are still more advanced: why tie up a human therapist when Stanford University’s Woebot shows all the signs of delivering cognitive-behavioural therapy with equal efficacy over your smartphone?

Next, we’ll have to get comfortable around robots and digital assistants who behave as if they love us. This should not be too difficult: cats have been faking affection for us for about ten thousand years, so we’ve had plenty of exposure.

Ah, but how will our machines love us? This is where, like it or not, the conversation turns to (yawn now) sex robots.

In the current climate, we’re allowed two responses to sex robots.

Following the lead of TV series like Westworld and films like Ex Machina (and don’t tell me that wasn’t a sex robot), we fear what they might do to us. Also, we fear what sort of people we might become when we’re with a sex robot. This is very much an argument about means and ends. If I mistreat a robot today, will I find it easier to mistreat a fellow human tomorrow? This is an excellent point; also an old one and not really limited to robots. (People who mistreat animals score highly on the Hare Psychopathy Checklist.)

What we’re absolutely not supposed to do is use a sex robot, although many people do. The global market for this gear was valued at approximately $30 billion in 2023 and is projected to reach over $100 billion by 2032. Machines designed specifically for women are worth $23 billion, and while this market’s expanding more slowly, by 2032 it’s still expected to top $54 billion. That’s a lot of cash being thrown at consumer durables people wouldn’t be seen dead with.

And this, neatly enough, brings us to the third and most difficult hurdle: we’re going to finally have to decouple sex and intimacy.

*

It’s not as though these two were ever comfortable bedfellows, whatever the sentimentalists might claim. In the 11,000 years that separate the birth of sedentary agriculture and the bumper harvests brought in by the agricultural revolution in the 18th century, the regular production of children was an activity essential for people’s economic survival. Farms needed hands to work them. A woman’s value lay in her sexuality. It was an economic good and came with a price — a very high one, most of the time.

For all that time we craved adult intimacy, but we needed children. Reconciling ourselves to this miserable state of affairs was a job of work, but we managed it, not once, but many times, by inventing marriage. This charitable fiction convinced us that the world was backwards – that while we needed adult intimacy, what we really craved was children.

In the West the Enlightenment eventually put paid to the lie, ushering in a doctrine of reasoned sexual self-interest under whose influence, wrote Claire Clairmont, stepsister of Mary Shelley, “Lord B[yron] became a human tyger slaking his thirst for inflicting pain upon defenceless women who under the influence of free love… loved him.”

From Byron to Weinstein, the permissive society has undermined religious strictures around sex and replaced them with a free-for-all that has often left women in a worse state. Of her would-be male seducers, the 18th-century writer Lady Mary Wortley Montagu had this to say: “‘Tis play to you, ’tis but death to us.”

Better birth control offered a partial fix, but what we really need to do is decouple sex and intimacy; then we might be able to jettison coercion and childbearing in one go. What’s not to like about that?

I know, I know, this is a terrible thing to say. But look at the numbers. Wherever and whenever living standards rise, the birth rate falls. A 2020 study in the Lancet projected that 23 countries, including Spain and Japan, could see their populations halve by 2100 due to low fertility rates. The total fertility rate in England and Wales has fallen to 1.44 children per woman, its lowest level on record. The United Nations projects that over half of the world’s population growth by 2100 will be concentrated in just eight countries.

There are all kinds of reasons: more processed food, better education for women, a more atomised working environment. Actual infertility aside (a growing and mysterious problem we can’t get into here), all these are aspects of the same unmentionable truth: the more time we make for ourselves, the less time we invest in child-rearing.

It’s not that we don’t want sex. We just don’t want it with each other. Now that market forces are finally prising sex out of the bedroom and into the public gaze, it turns out that there are many more enjoyable ways to have sex. Not all involve technology directly. Most sex clubs are run on a shoestring by enthusiasts; they’re certainly not splashing out on robots. Still, they use social media to bring cohorts together in numbers sufficient to get by. And if the club’s too far away, you could always show off on OnlyFans: heck, that site pays you. Now that sex toys are part of the internet of things – networked, remotely controlled, and even self-controlled to some degree – sex ceases to be a purely private affair and becomes a civic act.

All right, all right, let me offer an olive branch here. Love is real; pair-bonding is real; in many of us, the desire for children is real; and, yes, humans fall in love all the time.

But if we maintain the food supply and continue to chisel away at poverty then, as a whole, fewer women will have fewer children, and they will have them later in life. And this leaves us casting around, trying to work out what sex is for, now that procreation has been knocked off its 11,000-year-old pedestal.

Technology holds out two incompatible answers to this question. One set of technologies comforts us, but doesn’t really work. The other set works a treat, but it will have even the most hardened roué weeping for humanity.

Digital comfort-blankets even now provide solace to an increasingly atomised society. For platonic cuddling services, visit Cuddle Sanctuary or Cuddlist (now offering online cuddles). If you want to text back and forth with an AI companion, sign up with Replika or its more blokey kin, Soulfun AI and DreamGF. VRChat and Somnium Space are your gateways to the metaverse where you’ll most likely run into people just like you (good luck with that).

Many of these apps and websites are in dire need of updating. My guess is they’re not doing wildly well. And no wonder: they’re not playing to the strengths of their own medium. They’re trying to sell human intimacy through a piece of tempered glass, which is daft.

These services want you to buy a packet of commoditised human experience, rather than take action for yourself. In the same way, people in the early 1900s used to sell pianola rolls door to door to families who could no longer be bothered to play their own pianos.

Well, the piano is one thing; your life is surely something else. It’s not that hard to make friends. Go to church! Volunteer at a food bank!

The other set of technologies does work and boy, does it earn its market share. Porn is a much more effective form of digital address because it plays to digital strengths: glamour, glossiness, hardness, mechanical repetition. And it’s an aesthetic you can translate wholesale into the real world very easily. Profitably, too: is that branch of Coco de Mer an unfailingly friendly place to shop for well-made leather goods, or an actor in the hidden war to pornocratise the culture? Can’t it be both?

The prigs and prudes among us fight their frantic rearguard actions. In the motley of sexual radicalism they preach the virtues of ethical and consensual non-monogamy, polyamory and compersion. But thumb through Feeld (a non-traditional dating app) and #Open (a marginally raunchier competitor) at your peril: anyone who’s earned their scars will tell you of the coercion and abuse these lifestyles spawn.

Don’t live in the past. Say hello to the circus and the sideshow and Final Cut Pro, to the smartphone and the ring-light and the tripod, to doll-makers, to latex-cutters, to sculptors in silicone and thermoplastic elastomer. Even now, designers besotted with perfect curves are laying before you their smooth, glossy path to a burlesque world where sex is a hybrid thing, half-real, half-digital. Goodbye, marriage and its rubbishy “alternatives”. Goodbye, love, and every enlightened impulse.

Or do what you need to do, you hopeless sentimentalists: no-one’s out to stop you being happy together. Intimacy will tick by and that’s all one can really say about it.

Sex, though – now there’s a gift that will only keep on giving.

“Starvation… starvation… starvation… died at front…”

Reading The Forbidden Garden: The Botanists of Besieged Leningrad and Their Impossible Choice by Simon Parkin. For Nature, 14 January 2025

Past Simon Parkin’s account of the siege of Leningrad, and the fate there of the world’s first proper seed bank, past his postscript and his afterword, there are eight pages which — for people who know this story already — will be worth the rest of the book combined.

It’s the staff roll-call, meticulously assembled from Institute records and other sources, of what Parkin calls simply the Plant Institute. That is, more fully, the Leningrad hub of the Bureau of Applied Botany, the Vsesoyuzny Institut Rastenievodstva, founded in the nineteenth century by German horticulturalist and botanist Eduard August von Regel and vastly expanded by Russian Soviet agronomist Nikolai Vavilov.

It does not make for easy reading.

“Starvation… starvation… starvation… died at front…” Between 8 September 1941 and 27 January 1944, while German forces besieged the city, the staff of the Institute in St Isaac’s Square sacrificed themselves, one by one, to protect a collection whose whole raison d’être was to one day save humanity from starvation.

While, just around the corner, Leningrad’s Hermitage art museum’s two million artefacts were squirreled away for safety, the Plant Institute faced problems of a different order. Its 2,500 species — hundreds of thousands of seeds, rhizomes and tubers — were alive and needed to be kept a degree or two above freezing. And among those, 380,000 examples of potato, rye and other crops would only survive if planted annually. This in a city that was being shelled for up to eighteen hours at a time and where the temperature could — and in February 1942, did — fall to around -40°C.

Iogan Eikhfeld, the institute’s director following Vavilov’s disappearance (his arrest and secret imprisonment, in fact), was evacuated to the town of Krasnoufimsk in the Ural mountains. A train containing a large part of the collection was to follow, but never made it. Eikhfeld eventually got word to the Institute, begging his staff to eat the collection and save themselves. But they had lost the collection to hunger once before, in the dreadful winter of 1921-1922; they weren’t going to again.

January and February 1942 were the worst months. In the dark, freezing building of the Institute, workers prepared seeds for long-term preservation. They divided the collection into several duplicate parts, while bombs burst around them.

The Germans never did succeed in overrunning Leningrad. The rats did. That first winter, hordes of vermin swarmed the building. No effort to protect the collection proved rat-proof: they’d break into the ventilated metal boxes to devour the seeds. Still, of the Institute’s quarter of a million accessions, only 40,000 were consumed by vermin or failed to germinate.

The collection survived, after a fashion. The plantsman and Stalinist poster-child Trofim Lysenko — Vavilov’s inveterate opponent — maintained that the whole enterprise was disordered and for a long time, until the 1970s, it was allowed to deteriorate.

Contributions from abroad helped sustain it. It once received potatoes from Tucumán University in Argentina, thanks to a chance meeting between its director Peter Zhukovsky and a German plant collector, Heinz Brücher. In the 1990s it emerged that during the war Brücher had been an officer in the SS Nazi paramilitary group, leading a special commando unit charged with raiding Soviet agricultural experimental stations. So Brücher hadn’t really been donating valuable varieties of potato after all: he had been returning them.

The fortunes of war
The Forbidden Garden of Leningrad is a generous and desperately sad account of human generosity and sacrifice. If it falls short anywhere, it’s at exactly the place Parkin himself identifies. In this city laid to waste, among the bodies of the fallen, the frozen — in some hideous cases, the half-eaten — starving people make for rotten witnesses of their own condition. The author only had scraps to go on.

And you can’t research and produce at the pace Parkin does without some loss of finesse; his last book, The Island of Extraordinary Captives, about the plight of foreign nationals interned by the British on the Isle of Man, only came out in 2022. Parkin tends to turn incidental details into emblems of things he hasn’t got time to discuss. The passing mention that Vavilov’s calloused hands are “an intimate sign of his deep and enduring connection to the earth”, for example, leaves the reader wanting more.

Sensation will carry your account so far, and Parkin’s horrors are few and carefully chosen. “Some ate joiner’s glue,” he writes, “made from the bones and hooves of slaughtered animals, just about edible when boiled with bay leaves and mixed with vinegar and mustard.” A nurse is arrested “on suspicion of scavenging amputated limbs from the operating room”. At the Institute, biochemist Nikolai Rodionovich Ivanov prepares some raw-hide harnesses, “cut into tagliatelle-like strips and boiled for eight hours”, for a dinner party.

But hunger hollows out more than the belly. Soon enough, it hollows out the personality. In the relatively few interviews Parkin was able to source, he tells us, survivors from the Institute “spoke in broadly emotionless terms of how the moral, mortal dilemma they faced was, in fact, no dilemma at all”. Their argument was that, in the end, purpose sustained them better than a few extra calories. Vadim Stepanovich Lekhnovich, curator of the tuber collection, can speak for all here: “It was impossible to eat up [the collection], for what was involved was the cause of your life, the cause of your comrades’ lives.”

Parkin applies skill and intelligence to the (rather thankless) business of recasting familiar stories in a fresh light and has a reputation for winkling out obscure but important episodes of wartime history. It is reasonable, then, that he should cut to the chase and condense the science. Two 2008 books on Vavilov’s arrest amidst scientific disagreements with Lysenko do a better job on that front: Peter Pringle’s The Murder of Nikolai Vavilov and Gary Paul Nabhan’s brilliant though boringly titled Where Our Food Comes From. For example, Parkin dubs Lysenko’s theory of developmental plasticity an “outlier theory”, even though it wasn’t: Vavilov had wanted an Institute report containing a surprisingly positive chapter about Lysenko’s ideas translated into English.

Parkin does get the complicated relationship between the two agronomists, though. What perhaps caused the most friction between the two biologists was Lysenko’s ineptitude as an experimentalist. Parkin, to his credit, nails the human and political context with a few adept and well-timed asides.

And he broadens his account to depict what, to a modern audience, is a very strange world indeed — a pre-‘green revolution’ world in which even the richest nations lived under the threat of starvation, even in times of peace; and a world which, when it went to war, wielded famine as a weapon.

The Forbidden Garden of Leningrad is a greatly enjoyable book. Parkin’s chief accomplishment, though, has been to unshackle an important story from its many and complex ties to botany, genetics and developmental science, and lend it a real edge of human desperation.

‘Engineers of Human Souls’ at the Oxford Literary Festival


Maurice Barrès, who first wielded the politics of identity; Gabriele D’Annunzio, whose poetry became a blueprint for fascism; Maxim Gorky, the dramatist of the working class and Stalin’s cheerleader; and the Maoist Ding Ling, whose stories exculpated the regime that kept her imprisoned: all four had extravagant visions and believed themselves vital to their realisation. When writers and rulers find a use for each other, the consequences can be shattering for everyone.

Come hear me speak on Wednesday, 2 April 2025 at 4:00pm at the Department for Continuing Education Lecture Theatre

Tickets £8 – £15

You can read more about the book here.

The strange, the off-kilter and the not-quite-right

The release of Mufasa, Disney’s photorealistic prequel to The Lion King, occasioned this essay for the Telegraph on the biota of Uncanny Valley

In 1994 Disney brought Shakespeare’s Hamlet, or something like it, to the big screen. In turning the gloomy Dane into an adorable lion cub, and his usurping uncle into Scar (arguably their most terrifying villain ever), the company created the highest-grossing movie of the year. Animators sat up and marvelled at the way the film combined hand-drawn characters with a digitally rendered environment and thousands of CGI animals. This new technology could aid free expression, after all!

Well, be careful what you wish for.

When, in 2019, Disney remade its beloved The Lion King (1994), it swapped the original’s lush hand-drawn animation for naturalistic computer-generated imagery. The 2019 reboot had a budget of $260 million (£200 million) and took more than $1.5 billion (£1.1 billion) at the box office, making it one of the most expensive, and highest-grossing, films of all time – and the focus of a small but significant artistic backlash. Some critics voiced discomfort with the fact that it looked more like an episode of Planet Earth than a high-key musical fantasy. Its prequel Mufasa: The Lion King (directed by Moonlight’s Barry Jenkins), released this month, deepens the trend. For Disney, it’s a show of power, I suppose: “Look at our animation, so powerful you’ll mistake it for the world itself!” In time, though, the paying public may well regret Disney’s loss of faith in traditional animation.

What animator would want to merely reflect the world through an imaginary camera? The point of the artform, surely, is to give emotion a visual form. But while a character drawn in two dimensions can express pretty much anything (Felix the Cat, Wile E Coyote and Popeye the Sailor are not so much bodies as containers for gestures) drawing expressively in 3D is genuinely hard to do. Any artist with Pixar on their resume will tell you that. All that volumetric precision gets in the way. Adding photorealism to the mix makes the job plain impossible.

Disney’s live-action remake of The Jungle Book (2016) at least used elements of motion capture to match the animals’ faces to the spoken dialogue. In 2024, even that’s not considered “realistic” enough. Mufasa, Simba, Rafiki the mandrill and the rest simply chew on air while dialogue arrives from out of space, in the manner of Italian neorealist cinema (which suggests, incidentally, that, along with the circle of life, there’s also a circle of cinema).
Once you get to this point, animation is a distant memory; you’ve become a puppeteer. And you confront a problem that plagues not only Hollywood films, but the latest advances in robotic engineering and AI: “the uncanny valley”.

The uncanny valley describes how the closer things come to resembling real life, the more on guard we are against being fooled or taken in by them. The more difficult they are to spot as artificial, the stronger our self-preserving hostility towards them. It is the point in the development of humanoid robots when their almost-credible faces might send us screaming and running out of the workshop. Or, on a more relatable level, it describes the uneasiness some of us feel when interacting with virtual assistants such as Apple’s Siri and Amazon’s Alexa.

The term was invented by the Japanese roboticist Masahiro Mori in 1970 – when real anthropomorphic robots didn’t even exist – who warned designers that the more their inventions came to resemble real life-forms, the creepier they would look.

Neurologists seized on Mori’s idea because it suggested an easy and engaging way of studying how our brains see faces and recognise people. Positron emission tomography arrived in clinics in the 1970s, and magnetic resonance imaging about twenty years later. Researchers now had a way of studying the living human brain as it saw, heard, smelled and thought. The uncanny valley concept got caught up in a flurry of very earnest, very technical work about human perception, to the point where it was held up as a profound, scientifically-arrived-at insight into the human condition.

Mori was more guarded about all the fuss. Asked to comment on some studies using slightly “off” faces and PET scans, he remarked: “I think that the brain waves act that way because we feel eerie. It still doesn’t explain why we feel eerie to begin with.” And these days the scientific community is divided on how far to push the uncanny valley concept – or even whether such a “valley” (which implies a happy land beyond it, one in which we would feel perfectly at ease with lifelike technology) exists at all.

Nevertheless, the uncanny valley does suggest a problem with the idea that in order to make something lifelike, you just need to ensure that it looks like a particular kind of living thing – a flaw that is often cited in critical reviews of Disney’s latest photorealist animations. Don’t they realise that the mind and the eye are much more attuned to behaviour than they are to physical form? Appearances are the least realistic parts of us. It’s by our behaviour that you will recognise us. So long as you animate their behaviour, whatever you draw will come alive. In 1944 psychologists Fritz Heider and Marianne Simmel made a charming 90-second animation, full of romance and adventure, using two triangles, a circle and a rectangle with a door in it.

There are other ways to give objects the gift of life. A few years ago, I met the Tokyo designer Yamanaka Shunji, who creates one-piece walking machines from 3D vinyl-powder printers. One, called Apostroph (a collaboration with Manfred Hild in Paris), is a hinged body made up of several curving frames. Leave it alone, and it will respond to gravity, and try to stand. Sometimes it expands into a broad, bridge-like arch; at other times it slides one part of itself through another, curls up and rolls away.

Engineers, by associating life with surface appearances, are forever developing robots that are horrible. “They’re making zombies!” Yamanaka complained. Artists on the other hand know how to sketch. They know how to reduce, and abstract. “From ancient times, art has been about the right line, the right gesture. Abstraction gets at reality, not by mimicking it, but by purifying it. By spotting and exploring what’s essential.”

This, I think, gets to the heart of the uncanny valley phenomenon: we tend to associate life with particular outward forms, and when we reproduce those things, we’re invariably disappointed and unnerved, wondering what sucked the life out of them. We’re looking for life in all the wrong places. Yamanaka Shunji’s Apostroph is alive in a way Mufasa will never be.

***

We’re constantly trying to differentiate between the living and the non-living. And as AI and other technologies blur the lines between living things and artefacts, we will grapple with the challenge of working out what our moral obligations are towards entities — chatbots, robots, and the like — that lack a clear social status. In that context, the “uncanny valley” can be a genuinely useful metaphor.

The thing to keep in mind is that the uncanny is not a new problem. It’s an evolutionary problem.

Decades ago I came across a letter to New Scientist magazine in which a reader recalled taking a party of blind schoolchildren to London Zoo. He wanted the children to feel and cuddle the baby chimps, learning about their hair, hands, toes and so on, by touch. The experiment, however, proved to be a disaster. “As soon as the tiny chimps saw the blind children they stared at their eyes… and immediately went into typical chimpanzee attack postures, their hair standing upright all over their bodies, their huge mobile lips pouting and grimacing, while they jumped up and down on all fours uttering screams and barks.”
Even a small shift in behaviour — having your eyes closed, say, or not responding to another’s gaze — was enough to trigger the chimpanzee’s fight-or-flight response. Primates, it seems, have their own idea of the uncanny.

Working out what things are is not a straightforward business. When I was a boy I found a hedgehog trying to mate with a scrubbing brush. Dolphins regularly copulate with dead sharks (though that might just be dolphins being dolphins). Mimicry compounds the problem: beware the orchid mantis that pretends to be a flower, or the mimic octopus that’ll shape-shift into just about anything you put in front of it.

In social species like our own, it’s especially important to recognise the people you know.
In a damaged brain, this ability can be lost, and then our nearest and our dearest, our fathers, mothers, sons, daughters, spouses, best friends and pets become no more in our sight than malevolent simulacra. For instance, Capgras syndrome is a psychiatric disorder that occurs when the internal portion of our representation of someone we know becomes damaged or inaccessible. This produces the impression of someone who looks right on the outside, but seems different on the inside – you believe that your loved one has been taken over by an imposter.

Will Mufasa trigger Capgras-like responses from movie-goers? Will they scream and bark at the screen, unnerved and ready to attack?

Hopefully not. With each manifestation of the digital uncanny comes the learning necessary for us not to be freaked out by it. That man is not really on fire. That alien hasn’t really vanished down the actor’s throat. After all, the rise of deepfakes and chatbots shows no sign of slowing. But is this a good thing?

I’m not sure.

When push comes to shove, the problem with photorealist animation is really just a special case of the problem with blockbuster films in general: the closer it comes to the real, the more it advertises its own imposture.

Cinema is, and always has been, a game of sunk costs. The effort grows exponentially, to satisfy the appetites of viewers who have become exponentially more jaded.

And this raises a more troubling thought – that beyond the uncanny valley’s lairs of the strange, the off-kilter and the not-quite-right is a barren land marked, simply, “Indifference”.

The uncanny valley seemed deep enough, in the 1970s, to inspire scientific study, but we’ve had half a century to acclimatise to not-quite-human agents. And not just acclimatise to them: Hanson Robotics’ wobbly-faced Sophia generated more scorn than terror when the Saudi government unveiled her in 2017. The wonderfully named Abyss Creations of Las Vegas turned out their first sexbot in 1996. RealDoll now has global competition, especially from east Asia.

Perhaps we’ve simply grown in sophistication. I hope so. The alternative is not pretty: that we’re steadily lowering the bar on what we think is a person.

 

I’d sooner gamble

Speculation around the 2024 US election prompted this article for the Telegraph, about the dark arts of prediction

On July 21, the day Joe Biden stood down, I bet £50 that Gretchen Whitmer, the governor of Michigan, would end up winning the 2024 US presidential election. My wife remembers Whitmer from their student days, and reckons she’s a star in the making. My £50 would have earned me £2500 had she actually stood for president and won. But history makes fools of us all, and my bet bought me barely a day of that warm, Walter-Mittyish feeling that comes when you stake a claim in other people’s business.

The polls this election cycle indicated a tight race – underestimating Trump’s reach. But cast your mind back to 2016, when the statistician and election forecaster Nate Silver said Donald Trump stood a 29 per cent chance of winning the US presidency. The betting market, on the eve of that election, put Trump on an even lower 18 per cent chance. Gamblers eyed up the difference, took a punt, and did very well. And everyone else called Silver an idiot for not spotting Trump’s eventual win.

Their mistake was to think that Silver was a fortune-teller.

Divination is a 6,000-year-old practice that promises to sate our human hunger for certainty. On the other hand, gambling on future events – as the commercial operation we know today – began only a few hundred years ago in the casinos of Italy. Gambling promises nothing, and it only really works if you understand the mathematics.

The assumption that the world is inherently unpredictable – so that every action has an upside and a downside – got its first formal expression in Jacob Bernoulli’s 1713 treatise Ars Conjectandi (“The Art of Conjecturing”), and many of us still can’t wrap our heads around it. We’d sooner embrace certainties, however specious, than take risks, however measurable.
We’re risk-averse by nature, because the answer to the question “Well, what’s the worst that could happen?” has, over the course of evolution, been very bad indeed. You could fall. You could be bitten. You could have your arm ripped off. (Surprise a cat with a cucumber and it’ll jump out of its skin, because it’s still afraid of the snakes that stalked its ancestors.)

Thousands of years ago, you might have thrown dice to see who buys the next round, but you’d head to the Oracle to learn about events that could really change your life. A forthcoming exhibition at the Bodleian Library in Oxford, Oracles, Omens and Answers, takes a historical look at our attempts to divine the future. You might assume those Chinese oracle bones are curios from a distant and more innocent time – except that, turning a corner, you come across a book by Joan Quigley, who was in-house astrologer to US president Ronald Reagan. Our relationship to the future hasn’t changed very much, after all. (Nancy Reagan reached out to Quigley after a would-be assassin’s bullet tore through her husband’s lung. What crutch would I reach for, I wonder, at a moment like that?)

The problem with divination is that it doesn’t work. It’s patently falsifiable. But this wasn’t always the case. In a world radically simpler than our own, there are fewer things that can happen, and more likelihood of one of them happening in accordance with a prediction. This turned omens into powerful political weapons. No wonder, then, that in 11 AD, Augustus banned predictions pertaining to the date of someone’s death, while at the same time the Roman emperor made his own horoscope public. At a stroke, he turned astrology from an existential threat into a branch of his own PR machine.

The Bamoun state of western Cameroon had an even surer method for governing divination – in effect until the early 20th century. If you asked a diviner whether someone important would live or die, and the diviner said they’d live, but actually they died, then they’d put you, rather than the diviner, to death.

It used to be that you could throw a sheep’s shoulder blade on the flames and tell the future from the cracks that the fire made in the bone. Now that life is more complicated, anything but the most complicated forms of divination seems fatuous.

The daddy of them all is astrology: “the ancient world’s most ambitious applied mathematics problem”, according to the science historian Alexander Boxer. There’s a passage in Boxer’s book A Scheme of Heaven describing how a particularly fine observation, made by Hipparchus in 130 BC, depended on his going back over records that must have been many hundreds of years old. Astronomical diaries from the Assyrian library at Nineveh stretch from 652 BC to 61 BC, making them (as far as we know) the longest continuous research project ever undertaken.

You don’t go to that amount of effort pursuing claims that are clearly false. You do it in pursuit of cosmological regularities that, if you could only isolate them, would bring order and peace to your world. Today’s evangelists for artificial intelligence should take note of Boxer, who writes: “Those of us who are enthusiastic about the promise of numerical data to unlock the secrets of ourselves and our world would do well simply to acknowledge that others have come this way before.”

Astrology has proved adaptable. Classical astrology assumed that we lived in a deterministic world – one in which all events are causally decided by preceding events. You can trace the first cracks in this fixed view of the world all the way back to the medieval Christian church and its pesky insistence on free will (without which one cannot sin).

In spite of powerful Church opposition, astrology clung on in its old form until the Black Death, when its conspicuous failure to predict the death of a third of Europe called time on (nearly) everyone’s credulity. All of a sudden, and with what fleet footwork one can only imagine, horoscopists decided that your fate depended, not just upon your birth date, but also upon when you visited the horoscopist. This muddied the waters wonderfully, and made today’s playful, me-friendly astrologers – particularly popular on TikTok – possible.

***

The problem with trying to relate events to the movement of the planets is not that you won’t find any correlations. The problem is that there are correlations everywhere you look.
And these days, of course, we don’t even have to look: modern machine-learning algorithms are correlation monsters; they can make pretty much any signal correlate with any other. In their recent book AI Snake Oil, computer scientists Arvind Narayanan and Sayash Kapoor spend a good many pages dissecting the promise of predictive artificial intelligence (for instance, statistical software that claims to identify crimes before they have happened). If it fails, it will fail for exactly the same reasons astrology fails – because it’s churning through an ultimately meaningless data set. The authors conclude that immediate dangers from AI “largely stem from… our desperate and untutored keenness for prediction.”

The promise of such mechanical prediction is essentially astrological. We absolutely can use it to predict the future, but only if the world turns out, underneath all that roiling complexity, to be deterministic.

There are some areas in which our predictive powers have improved. The European Centre for Medium-Range Weather Forecasts opened in Reading in 1979. It was able to see three days into the future. Six years later, it could see five days ahead. In 2012 it could see eight days ahead and predicted Hurricane Sandy. By next year it expects to be able to predict high-impact events a fortnight before they happen.

Drunk on achievements in understanding atmospheric physics, some enthusiasts expect to predict human weather using much the same methods. They’re encouraged by numerical analyses that throw up glancing insights into corners of human behaviour. Purchasing trends can predict the ebb and flow of conflict because everyone rushes out to buy supplies in advance of the bombings. Trading algorithms predicted the post-Covid recovery of financial markets weeks before it happened.

Nonetheless, it is a classic error to mistake reality for the analogy you just used to describe it. Political weather is not remotely the same as weather. Still, the dream persists among statistics-savvy self-styled “superforecasters”, who regularly peddle ideas such as “mirror worlds” and “policy flight simulators”, to help us navigate the future of complex economic and social systems.

The danger with such prophecies is not that they are wrong; the danger is that they have the power to make themselves come true. Take election polling. Calling the election before it happens heartens leaders, disheartens laggards, and encourages everyone to alter their campaigns to address the anxieties and fears of the moment. Indeed, the easiest, most sure-fire way of predicting the future is to get an iron grip on the present – something the Soviets knew all too well. Then the future becomes, quite literally, what you make it.

There are other dangers, as we increasingly trust predictive technology with our lives. For instance, GPS uses a predictive algorithm in combination with satellite signals to plot our trajectory. And in December last year, a driver followed his satnav through Essex, down a little lane in Great Dunmow called Flitch Way, and straight into the River Chelmer.

We should not assume, just because the oracle is mechanical, that it’s infallible. There’s a story Isaac Asimov wrote in 1955 called Franchise, about a computer that, by chugging through the buzzing confusion of the world, can pinpoint the one individual whose galvanic skin response to random questions reveals which political candidate would be (and therefore is) the winner in any given election.

Because he wants to talk about correlation, computation, and big data, Asimov skates over the obvious point here – that a system like that can never know if it’s broken. And if that’s what certainty looks like, well, I’d sooner gamble.

You’re being chased. You’re being attacked. You’re falling. You’re drowning

To mark the centenary of Surrealism, this article appeared in the Telegraph

A hundred years ago, a 28-year-old French poet, art collector and contrarian called André Breton published a manifesto that called time on reason.

Eight years before, in 1916, Breton was a medical trainee stationed at a neuro-psychiatric army clinic in Saint-Dizier. He cared for soldiers who were shell-shocked, psychotic, hysterical and worse, and fell in love with the mind, and the lengths to which it would go to survive the impossible present.

Breton’s Manifesto of Surrealism was, then, an inquiry into how, “under the pretense of civilization and progress, we have managed to banish from the mind everything that may rightly or wrongly be termed superstition, or fancy.”

For Breton, surrealism’s sincerest experiments involved a sort of “psychic automatism” – using the processes of dreaming to express “the actual functioning of thought… in the absence of any control exercised by reason, exempt from any aesthetic or moral concern.” He asked: “Can’t the dream also be used in solving the fundamental questions of life?”

Many strange pictures appeared over the following century, as Breton’s fellow surrealists answered his challenge, and plumbed the depths of the unconscious mind. Their efforts – part of a long history of humans’ attempts to document and decode the dream world – can be seen in a raft of new exhibitions marking surrealism’s centenary, from the hybrid beasts of Leonora Carrington (on view at the Hepworth Wakefield’s Forbidden Territories), to the astral fantasies of Remedios Varo (included in the Centre Pompidou’s blockbuster Surrealism show).
Yet, just as often, such images illustrate the gap between the dreamer’s experience and their later interpretation of it. Some of the most popular surrealist pictures – Dalí’s melting clocks, say, or Magritte’s apple-headed businessman – are not remotely dreamlike. Looking at such easy-to-read canvases is like having a dream explained, and that’s not at all the same thing.
The chief characteristic of dreams is that they don’t surprise or shock or alienate the person who’s dreaming – the dreamer, on the contrary, feels that their dream is inevitable. “The mind of the man who dreams,” Breton writes, “is fully satisfied by what happens to him. The agonizing question of possibility is no longer pertinent. Kill, fly faster, love to your heart’s content… Let yourself be carried along, events will not tolerate your interference. You are nameless. The ease of everything is priceless.”

Most physiologists and psychologists of the early 20th century would have agreed with him, right up until his last sentence. While the surrealists looked to dreams to reveal a mind beyond consciousness, scientists of the day considered them insignificant, because you can’t experiment on a dreamer, and you can’t repeat a dream.

Since then, others have joined the battle over the meaning – or lack of meaning – of our dreams. In 1977, Harvard psychiatrists John Allan Hobson and Robert McCarley proposed their ‘activation-synthesis theory’, in a rebuff to the psychoanalysts and their claim that dreams had meanings only accessible via (surprise, surprise) psychoanalysis. Less an explanation, more an expression of exasperation, their theory held that certain parts of our brains concoct crazy fictions out of the random neural firings of the sleeping pons (a part of the brainstem).

It is not a bad theory. It might go some way to explaining the kind of hypnagogic imagery we experience when we doze, and that so delighted the surrealists. It might even bring us closer to actually reconstructing our dreams. For instance, we can capture the brain activity of a sleeper, using functional magnetic resonance imaging, hand that data to artificial intelligence software that’s been trained on about a million images, and the system will take a stab at what the dreamer is seeing in their dream. The Japanese neuroscientist Yukiyasu Kamitani made quite a name for himself when he tried this in 2012.

Six years later, at the Serpentine Gallery in London, artist Pierre Huyghe integrated some of this material into his show UUmwelt — and what an astonishing show it was, its wall screens full of bottles becoming elephants becoming screaming pigs becoming geese, skyscrapers, mixer taps, dogs, moles, bats’ wings…

But modelling an idea doesn’t make it true. Activation-synthesis theory has inspired some fantastic art, but it fails to explain one of the most important physiological characteristics of dreaming – the fact that dreams paralyse the dreamer.

***

Brains have an alarming tendency to treat dreams as absolutely real and to respond appropriately — to jump and punch when the dream says jump! and punch! Dreams, for the dreamer, can be very dangerous indeed.

The simplest evolutionary way to mitigate the risk of injury would have been to stop the dreamer from dreaming. Instead, we evolved a complex mechanism to paralyse ourselves while in the throes of our night-time adventures. Some 520 million years of brain evolution say that dreams are important and need protecting.

This, rather than the actual content of dreams, has driven research into the sleeping brain. We know now that dreaming involves many more brain areas than the pons alone, including the parietal lobes (involved in the representation of space) and the frontal lobes (responsible for decision-making, problem-solving, self-control, attention, speech production, language comprehension – oh, and working memory). Mice dream. Dogs dream. Platypuses, beluga whales and ostriches dream; so do penguins, chameleons, iguanas and cuttlefish.

We’re not sure about turtles. Octopuses? Marine biologist David Scheel caught his snoozing pet octopus Heidi on camera, and through careful interpretation of her dramatic colour-shifts he came to the ingenious conclusion that she was enjoying an imaginary crab supper. The clip, from PBS’s 2019 documentary Octopus: Making Contact, is on YouTube.

Heidi’s brain structure is nothing like our own. Still, we’re both dreamers. Studies of wildly different sleeping brains throw up startling convergences. Dreaming is just something that brains of all sorts have to do.

We’ve recently learned why.

The first clues emerged from sleep deprivation studies conducted in the late 1960s. Both Allan Rechtschaffen and William Dement showed that sleep deprivation leads to memory deficits in rodents. A generation later, researchers including the Brazilian neuroscientist Sidarta Ribeiro spent the 1990s unpicking the genetic basis of memory function. Ribeiro himself found the first molecular evidence of Freud’s “day residue” hypothesis, which has it that the content of our dreams is often influenced by the events, thoughts, and feelings we experience during the day.

Ribeiro had his own fairly shocking first-hand experience of the utility of dreaming. In February 1995 he arrived in New York to start a doctorate at Rockefeller University. Shortly after arriving, he woke up unable to speak English. He fell in and out of a narcoleptic trance, and then, in April, woke refreshed and energised and able to speak English better than ever before. His work can’t absolutely confirm that his dreams saved him, but he and other researchers have most certainly established the link between dreams and memory. To cut a long story very short indeed: dreams are what memories get up to when there’s no waking self to arrange them.

Well, conscious thought alone is not fast enough or reliable enough to keep us safe in the blooming, buzzing confusion of the world. We also need fast, intuitive responses to critical situations, and we rehearse these responses, continually, when we dream. Collect dream narratives from around the world, and you will quickly discover (as literary scholar Jonathan Gottschall points out in his 2012 book The Storytelling Animal) that the commonest dreams have everything to do with life and death and have very little time for anything else. You’re being chased. You’re being attacked. You’re falling. You’re drowning. You’re lost, trapped, naked, hurt…

When lives were socially simple and threats immediate, the relevance of dreams was not just apparent; it was impelling. And let’s face it: a stopped clock is right at least twice a day. For a man living in a relatively simple social structure, with only a limited palette of dream materials to draw from, is it really so surprising that (according to the historian Suetonius) Rome’s first emperor Augustus found his rise to power predicted by dreams?

Even now, Malaysia’s indigenous Orang Asli people believe that by sharing their dreams, they are passing on healing communications from their ancestors. Recently the British artist Adam Chodzko used their practice as the foundation for a web-based project called Dreamshare Seer, which uses generative AI to visualise and animate people’s descriptions of their dreams. (Predictably, his AI outputs are rather Dalí-like.)

But humanity’s mission to interpret dreams has been eroded by a revolution in our style of living. Our great-grandparents could remember a world without artificial light. Now we play on our phones until bedtime, then get up early, already focused on a day that is, when push comes to shove, more or less identical to yesterday. We neither plan our days before we sleep, nor do we interrogate our dreams when we wake. Is it any wonder, then, that our dreams are no longer able to inspire us?

Growing social complexity enriches our dream lives, but it also fragments them. Last night I dreamt of selecting desserts from a wedding buffet; later I cuddled a white chicken while negotiating for a plumbing contract. Dreams evolved to help us negotiate the big stuff. Now that the big stuff is conquered (humans have been apex predators for around 2 million years), it is possible that we have evolved past the point where dreaming is useful, but not past the point where dreaming is dangerous.

Here’s a film you won’t have seen. Petrov’s Flu, directed by Kirill Serebrennikov, was due for limited UK release in 2022, even as Vladimir Putin’s forces were bumbling towards Kiev.

The film opens on our hero Petrov (Semyon Serzin), riding a trolleybus home across a snowbound Yekaterinburg. He overhears a fellow passenger muttering to a neighbour that the rich in this town all deserve to be shot.

Seconds later the bus stops; Petrov is pulled off and a rifle is pressed into his hands. Street executions follow, shocking him out of his febrile doze…

And Petrov’s back on the bus again.

Whatever the director’s intentions were here, I reckon this is a document for our times. You see, André Breton wrote his manifesto in the wreckage of a world that had turned its machine tools into weapons, the better to slaughter itself — and did all this under the flag of the Enlightenment and reason.

Today we’re manufacturing new kinds of machine tools, to serve a world that’s much more psychologically adept. Our digital devices, for example, exploit our capacity for focused attention (all too well, in many cases).

So what of those devices that exist to make our sleeping lives better, sounder, and more enjoyable?

SleepScore Labs is using electroencephalography data to analyse the content of dreams. BrainCo has a headband interface that influences dreams through auditory and visual cues. Researchers at MIT have used a sleep-tracking glove called Dormio to much the same end. iWinks’s headband increases the likelihood of lucid dreaming.

It’s hard to imagine light installations, ambient music and scented pillows ever being turned against us. Then again, we remember the world the Surrealists grew up in, laid waste by a war that had turned its ploughshares into swords. Is it so very outlandish to suggest that tomorrow, we will be weaponising our dreams?