Two hundred years of electro-foolery come good

Reading We Are Electric by Sally Adee for the Times, 28 January 2023

In an attempt to elucidate the role of electricity in biology, German polymath Alexander von Humboldt once stuck a charged wire up his bum and found that “a bright light appears before both eyes”.

Why the study of biological electricity should prove so irremediably smutty — so that serious “electricians” (as the early researchers called themselves) steered well clear of bodies for well over a century — is a mystery science journalist Sally Adee would rather not have to re-hash, though her by-the-by account of “two hundred years of electro-foolery”, during which quacks peddled any number of cockeyed devices to treat everything from cancer to excessive masturbation, is highly entertaining.

And while this history of electricity’s role in the body begins, conventionally enough, with Volta and Galvani, with spasming frog’s legs and other fairly gruesome experiments, this is really just necessary groundwork, so that Adee can better explain recent findings that are transforming our understanding of how bodies grow and develop, heal and regenerate.

Why bodies turn out the way they do has proved a vexing puzzle for the longest while. Genetics offers no answer, as DNA contains no spatial information. There are genes for, say, eye colour, but no genes for “grow two eyes”, and no genes for “stick two eyes in front of your head”.

So if genes don’t tell us the shape we should take as we grow, what does? The clue is in the title: we are, indeed, electric.

Adee explains that the forty trillion or so cells in our bodies are in constant electrical communication with each other. This chatter generates a field that dictates the form we take. For every structure in the body there is a specific membrane voltage range, and our cells specialise to perform different functions in line with the electrical cues they pick up from their neighbours. Which is (by way of arresting illustration) how in 2011 a grad student by the name of Sherry Aw managed, by manipulating electrical fields, to grow eyes on a developing frog’s belly.

The wonder is that this news will come as such a shock to so many readers (including, I dare say, many jobbing scientists). That our cells communicate electrically with each other without the mediation of nerves, and that the nervous system is only one of at least two (and probably many more) electrical communications systems — all this will come as a disconcerting surprise to many. Did you know you only have to put skin, bone, blood, nerve — indeed, any biological cell — into a petri dish and apply an electric field, and you will find all the cells will crawl to the same end of the dish? It took decades before anyone thought to unpick the enormous implications of that fact.

Now we have begun to understand the importance of electrical fields in biology, we can begin to manipulate them. We’ve restored some function after severe spinal injury (in humans), regrown whole limbs (in mice), and even turned cancerous tumours back into healthy tissue (in petri dishes).

Has bio-electricity — once the precinct of quacks and contrarians — at last come into its own? Has it matured? Has it grown up?

Well, yes and no. Adee would like to deliver a clear, single message about bioelectricity, but the field itself is still massively divided. On the one hand there is ground-breaking research being conducted into development, regeneration and healing. On the other, there are those who think electricity in the body is mostly to do with nerves and brains, and their project — to hack people’s minds through their central nervous systems and usher in some sort of psychoelectric utopia — shows no sign of faltering.

In the 1960s the American neurophysiologist Warren McCulloch worked on the assumption that the way neurons fire is a kind of biological binary code. This led to a new school of thought, called cybernetics — a science of communications and automatic control systems, both living and mechanical. The idea was we should be able to drive an animal like a robot by simply activating specific circuits, an idea “so compelling” says Adee, “there wasn’t much point bothering with whether it was based in fact.”

Very many other researchers Adee writes about are just as wedded to the idea of the body as a meat machine.

This book arose from an article Adee wrote for the magazine New Scientist about her experiences playing DARWARS Ambush!, a military training simulation conducted in a Californian defence lab that (maybe) amped up her response times and (maybe) increased her focus — all by means of a headset that magnetically tickled precise regions in her brain.

Within days of the article’s publication in early 2012, Adee had become a sort of Joan of Arc figure for the online posthumanist community, and even turns up in Yuval Noah Harari’s book, where she serves as an Awful Warning about men becoming gods.

Adee finally admits that she would “love to take this whole idea of the body as an inferior meat puppet to be augmented with metal and absolutely launch it into the sun.” Coming clean at last, she says she is much more interested in the basic research going on into the communications within and between individual cells — a field where the more we know, the more we realise just how much we don’t understand.

Adee’s enthusiasm is infectious, and she conveys well the jaw-dropping scale and complexity of this newly discovered “electrome”. This is more than medicine. “The real excitement of the field,” she writes, “hews closer to the excitement around cosmology.”

A Gigeresque melange

Reading Cold People by Tom Rob Smith for the Times, 14 January 2023

Harvard medical student Liza is on holiday in Lisbon with her parents and younger sister when gigantic alien fish-shapes descend from the sky and order all humans to vacate the habitable bits of their planet for Antarctica, the only continent humans have never been able to settle.

Twenty years on, in a ramshackle, endlessly retrofitted settlement on the Antarctic Peninsula called Hope Town, Liza — one of very few survivors — gives birth to Echo, a genetically engineered daughter whose modifications allow her to withstand the bitter cold. Echo is an early prototype of a future human being designed in McMurdo City (the ramshackle, ice-bound, over-serious new capital of humanity) by the heroically unprincipled geneticist Song Fu, aided and abetted by her assistant Yotam Penzak, the book’s splendidly drawn antagonist. (The author of Child 44 knows how to tell a story; you know you’re in safe hands when your villain is motivated by love.)

Yotam, who attended her birth, thinks Echo and her posthuman kind are a worthy end in themselves: powerful and humane, capable of nurturing unengineered humanity in their impossible new environment, even as they succeed them over evolutionary time.

His boss disagrees. The remains of humanity will die out in not much more than a century, says Song Fu. A more radical succession is required if humans are to survive in any form.

Yotam’s unlucky love life leaves him vulnerable to browbeating by his boss, and then to seduction by Song Fu’s posthumous final creation, a Giger-esque melange of human, alligator and shark.

In this wasteland, “Eitan” and his kind are by far the dominant species — or will be, if Yotam lets them out of their cave.
Much as Roald Amundsen and his party consumed the husky dogs that had got them to the Pole in 1911, they will consume their human creators, not out of hate or revenge, but simply because they have no other use for them.

Can Yotam’s convictions be shaken? Can Eitan be stopped?

Cold People does not explore ideas; it animates them. Plot is king. Smith’s characters aren’t so much pretend people as they are admirable, animated types. The result is a page-turner that, without offering much by way of ordinary human feeling, reveals Tom Rob Smith’s view of the human condition: what he thinks about the plight of thinking, would-be ethical beings who still need to consume and burn and exploit in order to survive. In Smith’s vision, humanity’s reach so far exceeds its grasp that its downfall at its own hand seems more or less assured.

These are chewy and worthwhile themes, and Cold People cleverly distils them to the point where they play out, and reach a satisfying climax, at ordinary human scale. If Echo can protect her human family, there’s hope for humanity at large. If not, we’re all for the chum bucket.

Cold People will entertain and impress readers who enjoy novels that are containers for ideas. The rest of us may regret that Smith did not linger longer among the Polynesian navigators, seal hunters and stir-crazy researchers populating his largely irrelevant but wonderfully evocative prologue. Slow down, Smith! You were so set on your destination, you missed the scenery.

Pulling a Steerpike

Reading Michael Moorcock’s The Citadel of Forgotten Myths for the Times (who spiked it)

Leaving his defeated rival Yyrkoon on the throne as regent, Elric, titular emperor of decayed, decadent, dragon-blooded Melniboné “pulls a Steerpike” (as Mervyn Peake, Elric’s chief influence, might say) and goes wandering off the edge of the Earth (literally) in search of Answers (no time: don’t ask).

Sword and sorcery began, not with sui generis Tolkien, but with the Elric canon. This sort-of-prequel tucks itself neatly away between the very first Elric tales. It’s a delight.

There are three stories. Two of them were published in the late 2000s. In the first, Elric gets caught up in a family dispute. (“You are my sister’s son. Your sentient acid blood demands you help me!” exclaims Elric’s dragon-scaled aunt.) In the second, he battles the noibuluscus, a bone-chomping, gut-sucking succulent tended by dwarfish cannibals. The call-backs in the last, longest story (an original, and a worthy addition to the evergreen “man-into-bee” subgenre) bind with its companions to create what the genre calls a “fix-up”. Who needs proper novels when you can have this much fun?

Moorcock began the saga of Elric of Melniboné in 1961, largely to support New Worlds, the science fiction magazine that, over a single cash-strapped four-year span, introduced us to J G Ballard, Pamela Zoline, John Sladek, M John Harrison — oh, too many to mention.

The first thing to say about Elric — pale loiterer, kin-slayer, absentee emperor of Melniboné — is that he makes no physical or psychological sense whatsoever. One moment he’s chewing the furniture, the next he’s sprawled across a chaise longue. If a scene demands that he be vulpine, hear him howl! If an emotional outpouring is required, feel the floodgates tremble! Decency? No problem. Indecency? Have at it. Elric is his saga, as surely as Gilgamesh and Ulysses are theirs, not because these people are meticulously rendered but because they aren’t. Elric is not heroic or anti-heroic. He is simply whatever his story needs him to be in that moment.

Considered as beings that occupy a span of time, such protean protagonists are impossibly shallow. But that’s to misread them. Like pre-school children, they each occupy their eternal present, radically committed to an ever-shifting now. Elric, a curse to his friends and a bane to his lovers (supposedly), vampirically dependent upon his ravenous soul-hungry sword Stormbringer (when convenient) and constitutionally unable to bring happiness to the world (really?), is never properly melancholic. He can be as solemn as an owl, but his adventures are a hoot. Yet when he weeps (which is often, and never for long) it’s with a rare and captivating intensity.

To write quickly — and Moorcock has always been a fast worker — the language has to get under the reader’s skin (and the heightened diction on display here is uncut cocaine). Repetition is your friend (so long as it’s the right repetition; Stormbringer’s muffled grumblings are as welcome as that cowbell riff in “Don’t Fear the Reaper”). Stock characters add the illusion of texture (and Elric’s sidekick Moonglum, surprisingly accomplished for a Sancho Panza stand-in, is one of the genre’s best). Above all, turn everyone’s appetite up to eleven (for food, for wine, for cheer, for sex).

That some if not all human appetites have become culturally “problematic” is hardly Elric’s fault. He is like one of those incorrigible elder relatives whose arrival has the politically correct neighbours clutching their pearls. He needs to be given things to do that are slightly beneath him, just so he doesn’t let slip anything untoward. Quick, somebody: give him a giant plant to battle (in Book Two), or a big blue bee (in Book Three)!

Moorcock is too canny an operator to have let the years tarnish his most lucrative creation, and these days he keeps poor Elric locked out of the ladies’ bedrooms. The effect is not so much to make Elric grow up as to infantilise him. This is a very minor matter, but it’s what you get for creating so long-lived a character. The world will grind them down.

Perhaps Moorcock still writes Elric at speed. It’s just as likely that he’s learned, from long practice, how to simulate the effect. This increasingly rare technique is not one that garners much critical approval, let alone appreciation. Our current ability to revise texts electronically ad nauseam places a premium on an author’s nuance and erudition, insight and (God help us) wisdom. Even a friendly critic finds little to say about a book’s grip and speed and visceral impact, though these will always be the biggest drivers of sales.

Now that even James Bond has succumbed to nuance and insight, Elric may, by my reckoning, be the last towering 1960s kaiju left alive.

The past in light materials

Reading Georgi Gospodinov’s Time Shelter for The Times, 30 April 2022 

Bulgaria’s best known contemporary novelist gets into a tremendous historical tangle in Time Shelter, the tale of how a fictional Georgi Gospodinov (let’s call him GG) helps create the world’s first “clinic for the past”. Here, past ages (1980s Soviet Sofia, for example) are recreated to relieve an elderly clientele from the symptoms of senile dementia.

The bald premise here isn’t as fanciful as it might sound. I assume that while writing, Gospodinov was all over news stories about the Alexa nursing home in Dresden, which in 2017 recreated spaces from communist-era East Germany as a form of therapy.

From this shred of clinical fact, GG’s mind, like Stephen Leacock’s Lord Ronald, rides off in all directions.

GG’s boss at the clinic is his lugubrious time-jumping alter-ego Gaustine (who’s cropped up before, most memorably in Gospodinov’s 2011 novel The Physics of Sorrow and in an eponymous story in his 2007 collection And Other Stories). Gaustine hires GG to run the clinic; GG’s own father becomes a client.

Soon, carers and hangers-on are hankering to stay at the clinic, and Gaustine dreams up grand plans indeed — to build time clinics in every town; to build whole towns set in the past; ultimately, to induce whole nations to reenact their favourite historical eras! “The more a society forgets,” Gaustine observes, “the more someone produces, sells, and fills the freed-up niches with ersatz-memory… The past made from light materials, plastic memory as if spit out by a 3-D printer.”

This is a book about memory: how it fades, and how it is restored, even reinvented, in the imaginations of addled individuals, and in the civic discourse of fractious states.

As the clinic’s grandest schemes bear fruit, there’s political satire of the slapstick kind, as when “one day the president of a Central European country went to work in the national costume. Leather boots, tight pants, an embroidered vest, a small black bow above a white shirt, and a black bowler hat with a red geranium.” The scene in which a three-square-kilometre Bulgarian flag is dropped over the crowds in Sofia’s oldest park, the Borisova Gradina, is a fine piece of comic invention.

As the dream of European unity frays, and each European country embraces what it imagines (and votes) to be its best self, Gospodinov’s notes on national character and historical determinism threaten to swallow the book. But in a development that the reader will welcome (though it’s bad news all the way for GG) our narrator flees time-torn Bulgaria (torn between complacent Soviet nerds and keen reenactors of an unsuccessful national uprising in 1876), finds himself a cheap cell in a Franciscan monastery outside Zurich, and comes face to face with his own burgeoning dementia. “The great leaving is upon you,” GG announces, sliding from first person into second, from second into third, as his mind comes apart.

Gospodinov chillingly describes the process of mental ageing: “Long, lonely manoeuvres, waiting, more like trench warfare, lying in wait, hiding out, quick sorties, prowling the battlefield ‘between the clock and the bed,’ as one of the elderly Munch’s final self-portraits is called.”

Of course, this passage would have been ten times more chilling without that artistic reference tacked on the end. So what, exactly, is Gospodinov trying to do?

His story is strong enough — the tale of an innocent caught up in a compelling acquaintance’s hare-brained scheme. But Gospodinov is one of those writers who thinks novels can, and perhaps should, contain more than just a story. Notes, for example. Political observations. Passages of philosophy. Diary entries. Quotations.

GG comes back again and again to Thomas Mann’s polyphonic novel The Magic Mountain, but he could just as easily have cited Robert Musil, or James Joyce, or indeed Milan Kundera, whose mash-ups of story, essay and memoir (sometimes mashed even further by poor translation) bowled readers over in the 1980s.

Can novels really hold so much? Gospodinov risks a mischievous line or two about what a really brave, true, “inconsolable” novel would look like: “one in which all stories, the happened and the unhappened, float around us in the primordial chaos, shouting and whispering, begging and sniggering, meeting and passing one another by in the darkness.”

Not like a novel at all, then.

The risk with a project like this is that it slips fiction’s tracks and becomes nothing more than an overlong London Review of Books article, a boutique window displaying Gospodinov’s cultural capital: “Ooh! Look! Edvard Munch! And over there — Primo Levi!” A trove for quotation-hunters.

Happily for the book — not at all happily for Europe — Vladimir Putin’s rape of Ukraine has saved Time Shelter from this hostile reading. In its garish light, Gospodinov’s fanciful and rambling meditation on midlife crisis, crumbling memory and historical reenactment is proving psychologically astute and shockingly prescient.

Gospodinov’s Europe — complacent, sentimental and underconfident — is pretty much exactly the Europe Putin imagines he’s gone to war with. Motley, cacophonous, and speciously postmodern, it’s also the false future from which — and with a terrible urgency — we know we must awake.


Don’t stick your butter-knife in the toaster

Reading The End of Astronauts by Donald Goldsmith and Martin Rees for the Times, 26 March 2022

NASA’s Space Launch System, the most powerful rocket ever built, is now sitting on the launch pad. It’s the super-heavy lift vehicle for Artemis, NASA’s international programme to establish a settlement on the Moon. The Artemis consortium includes everyone with an interest in space, from the UK to the UAE to Ukraine, but there are a few significant exceptions: India, Russia, and China. Russia and China already run a joint project to place their own base on the Moon.

Any fool can see where this is going. The conflict, when it comes, will arise over control of the moon’s south pole, where permanently sunlit pinnacles provide ideal locations for solar collectors. These will power the extraction of ice from permanently night-filled craters nearby. And the ice? That will be used for rocket fuel.

The closer we get to putting humans in space, the more familiar the picture of our future becomes. You can get depressed about that hard-scrabble, piratical future, or exhilarated by it, but you surely can’t be surprised by it.

What makes this part of the human story different is not the exotic locations. It’s the fact that wherever we want to go, our machines will have to go there first. (In this sense, it’s the *lack* of strangeness and glamour that will distinguish our space-borne future — our lives spent inside a chain of radiation-hardened Amazon fulfilment centres.)

So why go at all? The argument for “boots on the ground” is more strategic than scientific. Consider the achievements of NASA’s still-young Perseverance rover, lowered to the surface of Mars in February 2021, and with it a lightweight proof-of-concept helicopter called Ingenuity. Through these machines, researchers around the world are already combing our neighbour planet for signs of past and present life.

What more can we do? Specifically, what (beyond dying, and most likely in horrible, drawn-out ways) can astronauts do that space robots cannot? And if robots do need time to develop valuable “human” skills — the ability to spot geographical anomalies, for instance (though this is a bad example, because machines are getting good at this already) — doesn’t it make sense to hold off on that human mission, and give the robots a chance to catch up?

The argument to put humans into space is as old as NASA’s missions to the moon, and to this day it is driven by many of that era’s assumptions.

One was the belief (or at any rate the hope) that we might make the whole business cheap and easy by using nuclear-powered launch vehicles within the Earth’s atmosphere. Alas, radiological studies nipped that brave scheme in the bud.

Other Apollo-era assumptions have a longer shelf-life but are, at heart, more stupid. Dumbest of all is the notion — first dreamt up by Nikolai Fyodorov, a late-nineteenth century Russian librarian — that exploring outer space is the next stage in our species’ evolution. This stirring blandishment isn’t challenged nearly as often as it ought to be, and it collapses under the most cursory anthropological or historical interrogation.

That the authors of this minatory little volume — the UK’s Astronomer Royal and an award-winning space sciences communicator — beat Fyodorov’s ideas to death with sticks is welcome, to a degree. “The desire to explore is not our destiny,” they point out, “nor in our DNA, nor innate in human cultures.”

The trouble begins when the poor disenchanted reader asks, somewhat querulously, Then why bother with outer space at all?

Their blood lust yet unslaked, our heroes take a firmer grip on their cudgels. No, the moon is not “rich” in helium-3, harvesting it would be a nightmare, and the technology we’d need to use it for nuclear fusion remains hypothetical. No, we are never going to be able to flit from planet to planet at will. Journey times to the outer planets are always going to be measured in years. Very few asteroids are going to be worth mining, and the risks of doing so probably outweigh the benefits. And no, we are not going to terraform Mars, the strongest argument against it being “the fact that we are doing a poor job of terraforming Earth.” In all these cases it’s not the technology that’s against us, so much as the mathematics — the sheer scale.

For anyone seriously interested in space exploration, this slaughter of the impractical innocents is actually quite welcome. Actual space sciences have for years been struggling to breathe in an atmosphere saturated with hype and science fiction. The superannuated blarney spouted by Messrs Musk and Bezos (who basically just want to get into the mining business) isn’t helping.

But for the rest of us, who just want to see some cool shit — will no crumb of romantic comfort be left to us?

In the long run, our destiny may very well lie in outer space — but not until and unless our machines overtake us. Given the harshness and scale of the world beyond Earth, there is very little that humans can do there for themselves. More likely, we will one day be carried to the stars as pets by vast, sentimental machine intelligences. This was the vision behind the Culture novels of the late great Iain Banks. And there — so long as they got over the idea they were the most important things in the universe — humans did rather well for themselves.

Rees and Goldsmith, not being science fiction writers, can only tip their hat to such notions. But spacefaring futures that do not involve other powers and intelligences are beginning to look decidedly gimcrack. Take, for example, the vast rotating space colonies dreamt up by physicist Gerard O’Neill in the 1970s. They’re designed so 20th-century vintage humans can survive among the stars. And this, as the authors show, makes such environments impossibly expensive, not to mention absurdly elaborate and unstable.

The conditions of outer space are not, after all, something to be got around with technology. To survive in any numbers, for any length of time, humans will have to adapt, biologically and psychologically, beyond their current form.

The authors concede that for now, this is a truth best explored in science fiction. Here, they write about immediate realities, and the likely role of humans in space up to about 2040.

The big problem with outer space is time. Space exploration is a species of pot-watching. Find a launch window. Plot your course. Wait. The journey to Mars is a seven-month curve covering more than ten times the distance between Mars and Earth at their closest approach — and the journey can only be made once every twenty-six months.

Gadding about the solar system isn’t an option, because it would require fuel your spacecraft hasn’t got. Fuel is great for hauling things and people out of Earth’s gravity well. In space, though, it becomes bulky, heavy and expensive.

This is why mission planners organise their flights so meticulously, years in advance, and rely on geometry, gravity, time and patience to see their plans fulfilled. “The energy required to send a laboratory toward Mars,” the authors explain, “is almost enough to carry it to an asteroid more than twice as far away. While the trip to the asteroid may well take more than twice as long, this hardly matters for… inanimate matter.”

This last point is the clincher. Machines are much less sensitive to time than we are. They do not age as we do. They do not need feeding and watering in the same way. And they are much more difficult to fry. Though capable of limited self-repair, humans are ill-suited to the rigours of space exploration, and perform poorly when asked to sit on their hands for years on end.

No wonder, then, that automated missions to explore the solar system have been NASA’s staple since the 1970s, while astronauts have been restricted to maintenance roles in low earth orbit. Even here they’re arguably more trouble than they’re worth. The Hubble Space Telescope was repaired and refitted by astronauts five times during its 40-year lifetime — but at a total cost that would have paid for seven replacement telescopes.

Reading The End of Astronauts is like being told by an elderly parent, again and again, not to stick your butter-knife in the toaster. You had no intention of sticking your knife in the toaster. You know perfectly well not to stick your knife in the toaster. They only have to open their mouths, though, and you’re stabbing the toaster to death.

How to prevent the future

Reading Gerd Gigerenzer’s How to Stay Smart in a Smart World for the Times, 26 February 2022

Some writers are like Moses. They see further than everybody else, have a clear sense of direction, and are natural leaders besides. These geniuses write books that show us, clearly and simply, what to do if we want to make a better world.

Then there are books like this one — more likeable, and more honest — in which the author stumbles upon a bottomless hole, sees his society approaching it, and spends 250-odd pages scampering about the edge of the hole yelling at the top of his lungs — though he knows, and we know, that society is a machine without brakes, and all this shouting comes far, far too late.

Gerd Gigerenzer is a German psychologist who has spent his career studying how the human mind comprehends and assesses risk. We wouldn’t have lasted even this long as a species if we didn’t negotiate day-to-day risks with elegance and efficiency. We know, too, that evolution will have forced us to formulate the quickest, cheapest, most economical strategies for solving our problems. We call these strategies “heuristics”.

Heuristics are rules of thumb, developed by extemporising upon past experiences. They rely on our apprehension of, and constant engagement in, the world beyond our heads. We can write down these strategies; share them; even formalise them in a few lines of light-weight computer code.

Here’s an example from Gigerenzer’s own work: Is there more than one person in that speeding vehicle? Is it slowing down as ordered? Is the occupant posing any additional threat?

Abiding by the rules of engagement set by this tiny decision tree reduces civilian casualties at military checkpoints by more than sixty per cent.
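Those three questions work as a sequential decision tree: each answer either settles the matter or passes you down to the next cue. The review doesn’t spell out the tree’s actual structure, so the ordering, branch outcomes and response levels below are illustrative assumptions only — but a sketch in Python shows how little machinery such a heuristic needs:

```python
# A hypothetical encoding of a three-question checkpoint heuristic.
# The questions come from the review above; the branch logic and the
# graded responses ("observe" / "warn" / "engage") are assumptions,
# not Gigerenzer's actual rules of engagement.

def checkpoint_response(multiple_occupants: bool,
                        slowing_down: bool,
                        additional_threat: bool) -> str:
    """Check three yes/no cues in order; each can end the decision early."""
    if not multiple_occupants:
        # A lone occupant: assume no cause for escalation.
        return "observe"
    if slowing_down:
        # The vehicle is complying with orders.
        return "observe"
    if not additional_threat:
        # Non-compliant but otherwise unthreatening: signal again.
        return "warn"
    # All three cues answered the dangerous way: last resort.
    return "engage"
```

The point of such “fast and frugal” trees is that they need no data set at all: each cue is checked once, in a fixed order, and most decisions terminate after the first question.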

We can apply heuristics to every circumstance we are likely to encounter, regardless of the amount of data available. The complex algorithms that power machine learning, on the other hand, “work best in well-defined, stable situations where large amounts of data are available”.

What happens if we decide to hurl 200,000 years of heuristics down the toilet, and kneel instead at the altar of occult computation and incomprehensibly big data?

Nothing good, says Gigerenzer.

How to Stay Smart is a number of books in one, none of which, on its own, is entirely satisfactory.

It is a digital detox manual, telling us how our social media are currently weaponised, designed to erode our cognition (but we can fill whole shelves with such books).

It punctures many a rhetorical bubble around much-vaunted “artificial intelligence”, pointing out how easy it is to, say, get a young man of colour denied bail using proprietary risk-assessment software. (In some notorious cases the software had been trained on, and so was liable to perpetuate, historical injustices.) Or would you prefer to force an autonomous car to crash by wearing a certain kind of T-shirt? (Simple, easily generated pixel patterns cause whole classes of networks to make bizarre errors of inference about the movement of surrounding objects.) This is enlightening stuff, or it would be, were the stories not quite so old.

One very valuable section explains why forecasts derived from large data sets become less reliable, the more data they are given. In the real world, problems are unbounded; the amount of data relevant to any problem is infinite. This is why past information is a poor guide to future performance, and why the future always wins. Filling a system with even more data about what used to happen will only bake in the false assumptions that are already in your system. Gigerenzer goes on to show how vested interests hide this awkward fact behind some highly specious definitions of what a forecast is.

But the most impassioned and successful of these books-within-a-book is the one that exposes the hunger for autocratic power, the political naivety, and the commercial chicanery that lie behind the rise of “AI”. (Healthcare AI is a particular bugbear: the story of how the Dutch Cancer Society was suckered into funding big data research, at the expense of cancer prevention campaigns that were shown to work, is especially upsetting).

Threaded through this diverse material is an argument Gigerenzer maybe should have made at the beginning: that we are entering a new patriarchal age, in which we are obliged to defer, neither to spiritual authority, nor to the glitter of wealth, but to unliving, unconscious, unconscionable systems that direct human action by aping human wisdom just well enough to convince us, but not nearly well enough to deliver happiness or social justice.

Gigerenzer does his best to educate and energise us against this future. He explains the historical accidents that led us to muddle cognition with computation in the first place. He tells us what actually goes on, computationally speaking, behind the chromed wall of machine-learning blarney. He explains why, no matter how often we swipe right, we never get a decent date; he explains how to spot fake news; and he suggests how we might claw our minds free of our mobile phones.

But it’s a hopeless effort, and the book’s most powerful passages explain exactly why it is hopeless.

“To improve the performance of AI,” Gigerenzer explains, “one needs to make the physical environment more stable and people’s behaviour more predictable.”

In China, the surveillance this entails comes wrapped in Confucian motley: under its social credit score system, sincerity, harmony and wealth creation trump free speech. In the West the self-same system, stripped of any ethic, is well advanced thanks to the efforts of the credit-scoring industry. One company, Acxiom, claims to have collected data from 700 million people worldwide, and up to 3000 data points for each individual (and quite a few are wrong).

That this bumper data harvest is an encouragement to autocratic governance hardly needs rehearsing, or so you would think.

And yet, in a 2021 study of 3,446 digital natives, 96 per cent “do not know how to check the trustworthiness of sites and posts.” I think Gigerenzer is pulling his punches here. What if, as seems more likely, 96 per cent of digital natives can’t be bothered to check the trustworthiness of sites and posts?

Asked by the author in a 2019 study how much they would be willing to spend each month on ad-free social media — that is, social media not weaponised against the user — 75 per cent of respondents said they would not pay a cent.

Have we become so trivial, selfish, short-sighted and penny-pinching that we deserve our coming subjection? Have we always been servile at heart, for all our talk of rights and freedoms; desperate for some grown-up to come tug at our leash and bring us to heel?

You may very well think so. Gigerenzer could not possibly comment. He does, though, remark that operant conditioning (the kind of learning explored in the 1940s by behaviourist B F Skinner, that occurs through rewards and punishments) has never enjoyed such political currency, and that “Skinner’s dream of a society where the behaviour of each member is strictly controlled by reward has become reality.”

How to Stay Smart in a Smart World is an optimistic title indeed for a book that maps, with passion and precision, a hole down which we are already plummeting.

Where the law of preposterousness trumps all

Reading Pieter Waterdrinker’s The Long Song of Tchaikovsky Street: A Russian Adventure for the Times, 29 January 2022

On 16 June 1936 the author and Bolshevik sympathiser André Gide left France for a nine-week trip to the Soviet Union. In Soviet Russia, he was offered every comfort — an experience he found extremely unsettling. “Are these really the men who made the Revolution?” he asked, in his book Afterthoughts. “No; they are the men who profit by it. They may be members of the Party — there is nothing communist in their hearts.”

Parisian intellectuals immediately piled in on this turncoat, this viper: Romain Rolland called Gide’s reporting “astonishingly poor, superficial, puerile and contradictory”.

It is possible to misread The Long Song, Pieter Waterdrinker’s memoir of Russia and its revolutions, in the same way, and lay the same charges at his door. How do you write about a place like St Petersburg (where “although the law of chance may be predominant as a rule, the law of preposterousness trumps all”), how do you anatomise the superficiality, puerility and contradiction of Russian civic culture, without exhibiting the same qualities yourself? How do you explore a sewer without getting covered in…? Well, you get the idea.

Waterdrinker is a novelist best known for the farcical and exuberant The German Wedding (2009). Poubelle, published in 2016, is a dizzying state-of-nations novel rooted in the war in east Ukraine. Waterdrinker’s gift for savage comedy, and his war correspondent’s eye, have few contemporary equivalents. Reading Paul Evans’s impressively brutal translation of The Long Song, I was put in mind, not of any contemporary, but of Wyndham Lewis, a between-the-wars writer so contrarian and violent and hilarious, English letters have spent the 60-odd years since his death trying to bury him.

Waterdrinker complains that he’s been receiving similar mistreatment from the cognoscenti in his native Netherlands. And let’s be frank: there’s nothing more inconvenient, nothing more irritating, than a leftist who calls out socialism.

Be that as it may, The Long Song has already sold over 100,000 copies across mainland Europe. After twenty-odd years of trying, Waterdrinker is an overnight success.

What is this book, exactly? A synthesis of Waterdrinker’s irascible personality and colourful career? A non-fiction novel? A deconstructed political memoir?

Pieter Waterdrinker, who calls a spade a bloody shovel, calls it “… a personal book about the Russian Revolution of 1917. You buffed up your own life with a little patina, borrowed an abundance of what others had written, with liberal citations, made up a bit if need be, and mixed it all together like the ingredients of a thick, hearty soup, et voilà: it was as if the book had written itself.”

Waterdrinker interleaves his early biography (sucked into, and unceremoniously spat out of, the goldrush accompanying the collapse of the Soviet Union in the 1990s) with the history of revolutions in Russia. He concentrates particularly on the people (including Vladimir Lenin, “the bastard that started it all”) who either resided or worked on Tchaikovsky Street (named after the revolutionary, not the composer) where Waterdrinker and his wife Julia and their three cats once lived.

“One of our neighbours… was standing on the landing in a blue-and-white striped sailor’s top, hacking up an antique sideboard with an axe,” the author reminisces. “‘No, not Mama’s dresser…’ the man imitated his wife’s voice out of key. ‘But why not, you slut!’ The axe-head fell again, the splinters and brass fittings flying every which way.”

And this, bear in mind, is the couple’s isle of calm; the place from which Waterdrinker looks back on his early life, before he became a writer. It’s a tale dominated by a series of increasingly dubious business dealings, starting in 1988 with a scheme to smuggle bibles into Leningrad and ending in 1990 when he was strongly urged to transport a container of French wine to Kazan each month “in exchange for an unlimited supply of tender Tatar beauties to work as dancers in the Amsterdam nightlife circuit”. After a spell in the Netherlands, the couple returned to Russia in 1996.

There are moments of sybaritic delight, as when the young would-be writer bathes with his wife-to-be (a teacher who has lived in poverty and squalor for years) in a bathtub of Soviet champagne. There are moments of horror, as when the author’s business associates are hung from trees to freeze to death; or are, more straightforwardly, shot. There are unforgettable grotesques: the half-mad elderly Madam Pokrovskaya, who has eluded the tragedy of a life spent in St Petersburg by entirely abandoning her sense of time; young Waterdrinker’s grinning business partner Swindleman, so hollow, he rattles. In the end (but not so soon as to spoil the book) a sort of tinnitus sets in. The apartment on Tchaikovsky Street is itself lost to redevelopers in the end, and the book ends in clouds of plaster dust and the thudding of drills.

The Long Song draws parallels between the revolution of 1917 and the collapse of the Soviet Union in 1991. It is, to Waterdrinker’s mind, the same revolution, which is to say, the same orgy of resentment, hate and nihilism, fomented by psychopaths, and barely contained — drab decades at a time — by self-serving bureaucrats and secret policemen.

Waterdrinker sees a continuum of moral annihilation stretching from the czars to the present. He concludes that Russian political culture runs on hatred, and its revolutions are, far from being attempts at treatment, merely symptoms of an ineradicable malaise. Waterdrinker prefers witness over analysis, because he’s a sometime war correspondent, and eye-witness is his metier; and anyway, how are you supposed to “analyse” moments like the one recorded by Philip Jordan, a Missouri-born African-American and assistant to the US ambassador, when “in a house not far from the embassy, [the Red Guards] murdered a little girl, twelve bayonets stuck into her body”? The Long Song’s abiding emblem is a description, not of the taking of the Winter Palace, but of the taking of the Winter Palace’s wine cellar, some eight months later: “scenes of tableaux worthy of Dante, in which men up to their ankles in wine shot at each other, the blood of the dead and the wounded mixing with the alcohol.”

The Long Song contributes to a tradition that’s recognised for its literary merit (think Bunin, think Zamyatin) but which tends to get saddled with the “contrarian” label — not least because much of the Left establishment still pays lip-service to the Bolshevik idea. (Consider how Orwell was treated by his contemporaries — or Christopher Hitchens, for that matter.)

Waterdrinker is too much the literary werewolf to change many made-up minds. But, given Russia’s current expansionist posturings, we’d do well to give him an audience. Listen, if not to him, then to the diarist who once shared his street, Zinaida Hippius, who watched this horrorshow the first time around: “If a country can exist in Europe in the twentieth century where there’s such phenomenal and previously unwitnessed slavery, and Europe doesn’t understand that or else accepts it, then Europe must meet its downfall.”

 

How to live an extra life

Reading Sidarta Ribeiro’s The Oracle of Night: The History and Science of Dreams for the Times, 2 January 2022

Early in January 1995 Sidarta Ribeiro, a Brazilian student of neuroscience, arrived in New York City to study for his doctorate at Rockefeller University. He rushed enthusiastically into his first meeting — only to discover he could not understand a word people were saying. He had, in that minute, completely forgotten the English language.

It did not return. He would turn up for work, struggle to make sense of what was going on, and wake up, hours later, on his supervisor’s couch. The colder and snowier the season became, the more impossible life got until, “when February came around, in the deep silence of the snow, I gave in completely and was swallowed up into the world of Morpheus.”

Ribeiro struggled into lectures so he didn’t get kicked out; otherwise he spent the entire winter in bed, sleeping; dozing; above all, dreaming.

April brought a sudden and extraordinary recovery. Ribeiro woke up understanding English again, and found he could speak it more fluently than ever before. He befriended colleagues easily, drove research, and, in time, announced the first molecular evidence of Freud’s “day residue” hypothesis, in which dreams exist to process memories of the previous day.

Ribeiro’s rich dream life that winter convinced him that it was the dreams themselves — and not just the napping — that had wrought a cognitive transformation in him. Yet dreams, it turned out, had fallen almost entirely off the scientific radar.

The last dream researcher to enter public consciousness was probably Sigmund Freud. Freud at least seemed to draw coherent meaning from dreams — dreams that had been focused to a fine point by fin de siècle Vienna’s intense milieu of sexual repression.

But Freud’s “royal road to the unconscious” has been eroded since by a revolution in our style of living. Our great-grandparents could remember a world without artificial light. Now we play on our phones until bedtime, then get up early, already focused on a day that is, when push comes to shove, more or less identical to yesterday. We neither plan our days before we sleep, nor do we interrogate our dreams when we wake. Is it any wonder, then, that our dreams are no longer able to inspire us? When US philosopher Owen Flanagan says that “dreams are the spandrels of sleep”, he speaks for almost all of us.

Ribeiro’s distillation of his life’s work offers a fascinating corrective to this reductionist view. His experiments have made Freudian dream analysis and other elements of psychoanalytic theory definitively testable for the first time — and the results are astonishing. There is material evidence, now, for the connection Freud made between dreaming and desire: both involve the selective release of the brain chemical dopamine.

The middle chapters of The Oracle of Night focus on the neuroscience, capturing, with rare candour, all the frustrations, controversies, alliances, ambiguities and accidents that make up a working scientist’s life.

To study dreams, Ribeiro explains, is to study memories: how they are received in the hippocampus, then migrate out through surrounding cortical tissue, “burrowing further and further in as life goes on, ever more extensive and resistant to disturbances”. This is why some memories can survive, even for more than a hundred years, in a brain radically altered by age.

Ribeiro is an excellent communicator of detail, and this is important, given the size and significance of his claims. “At their best,” he writes, “dreams are the actual source of our future. The unconscious is the sum of all our memories and of all their possible combinations. It comprises, therefore, much more than what we have been — it comprises all that we can be.”

To make such a large statement stick, Ribeiro is going to need more than laboratory evidence, and so his scientific account is generously bookended with well-evidenced anthropological and archaeological speculation. Dinosaurs enjoyed REM sleep, apparently — a delightfully fiendish piece of deduction. And was the Bronze Age Collapse, around 1200 BC, triggered by a qualitative shift in how we interpreted dreams?

These are sizeable bread slices around an already generous Christmas-lunch sandwich. On page 114, when Ribeiro declares that “determining a point of departure for sleep requires that we go back 4.5 billion years and imagine the conditions in which the first self-replicating molecules appeared,” the poor reader’s heart may quail and their courage falter.

A more serious obstacle — and one quite out of Ribeiro’s control — is that friend (we all have one) who, feet up on the couch and both hands wrapped around their tea, baffs on about what their dreams are telling them. How do you talk about a phenomenon that’s become the preserve of people one would happily emigrate to avoid?

And yet, by taking dreams seriously, Ribeiro must also talk seriously about shamanism, oracles, prediction and mysticism. This is only reasonable, if you think about it: dreams were the source of shamanism (one of humanity’s first social specialisations), and shamanism in its turn gave us medicine, philosophy and religion.

When lives were socially simple and threats immediate, the relevance of dreams was not just apparent; it was impelling. Even a stopped watch is correct twice a day. With a limited palette of dream materials to draw from, was it really so surprising that Rome’s first emperor Augustus found his rise to power predicted by dreams — at least according to his biographer Suetonius? “By simulating objects of desire and aversion,” Ribeiro argues, “the dream occasionally came to represent what would in fact happen”.

Growing social complexity enriches dream life, but it also fragments it (which may explain the complaints that the gods have fallen silent, found in texts dated between 1200 and 800 BC). The dreams typical of our time, says Ribeiro, are “a blend of meanings, a kaleidoscope of wants, fragmented by the multiplicity of desires of our age”.

The trouble with a book of this size and scale is that the reader, feeling somewhat punch-drunk, can’t help but wish that two or three better books had been spun from the same material. Why naps are good for us, why sleep improves our creativity, how we handle grief — these are instrumentalist concerns that might, under separate covers, have greatly entertained us. In the end, though, I reckon Ribeiro made the right choice. Such books give us narrow, discrete glimpses into the power of dreams, but leave us ignorant of their real nature. Ribeiro’s brick of a book shatters our complacency entirely, and for good.

Dreaming is a kind of thinking. Treating dreams as spandrels — as so much psychic “junk code” — is not only culturally illiterate — it runs against everything current science is telling us. You are a dreaming animal, says Ribeiro, for whom “dreams are like stars: they are always there, but we can only see them at night”.

Keep a dream diary, Ribeiro insists. So I did. And as I write this, a fortnight on, I am living an extra life.

“A moist and feminine sucking”

Reading Susan Wedlich’s Slime: A natural history for the Times, 6 November 2021

For over two thousand years, says science writer Susan Wedlich, quoting German historian Richard Hennig, maritime history has been haunted by mention of a “congealed sea”. Ships, it is said, have been caught fast and even foundered in waters turned to slime.

Slime stalks the febrile dreams of landlubbers, too: Jean-Paul Sartre succumbed to its “soft, yielding action, a moist and feminine sucking”, in a passage, lovingly quoted here, that had this reader instinctively scrabbling for the detergent.

We’ve learned to fear slime, in a way that would have seemed quite alien to the farmers of ancient Egypt, who supposed slime and mud were the base materials of life itself. So, funnily enough, did German zoologist Ernst Haeckel, a champion of Charles Darwin, who saw primordial potential in the gelid lumps being trawled from the sea floor by various oceanographic expeditions. (This turned out to be calcium sulphate, precipitated by the chemical reaction between deep-sea mud and alcohol used for the preservation of aquatic specimens. Haeckel never quite got over his disappointment.)

For Susan Wedlich, it is not enough that we should learn about slime; nor even that we should be entertained by it (though we jolly well are). Wedlich wants us to care deeply about slime, and musters all the rhetorical devices at her disposal to achieve her goal. “Does even the word ‘slime’ have to elicit gagging histrionics?” she exclaims, berating us for our phobia: “if we neither recognize nor truly know slime, how are we supposed to appreciate it or use it for our own ends?”

This is overdone. Nor do we necessarily know enough about slime to start shouting about it. To take one example, using slime to read our ecological future turns out to be a vexed business. There’s a scum of nutrients held together by slime floating on top of the oceans. A fraction of a millimetre thick, it’s called the “sea-surface micro-layer”. Global warming might be thinning it, or thickening it, and doing either might be increasing the chemical transport taking place between air and ocean — or retarding it — to unknown effect. So there: yet another thing to worry about.

For sure, slime holds the world together. Slimes, rather: there are any number of ways to stiffen water so that it acts as a lubricant, a glue, or a barrier. Whatever its origins, it is most conspicuous when it disappears — as when overtilling of America’s Great Plains caused the Dust Bowl in 1933, or when the gluey glycan coating of one’s blood vessels starts to mysteriously shear away during surgery.

There was a moment, in the 1920s, when slime shed its icky materiality and became almost cool. Artists both borrowed from and inspired Haeckel’s exquisite drawings of delicate maritime invertebrates. And biologists, looking for the mechanisms underpinning memory and heredity, would have liked nothing more than to find that the newly-identified protoplasm within our every cell was recording, like an Edison drum, the tremblings of a ubiquitous, information-rich aether. (Sounds crazy now, but the era was, after all, bathing in X-rays and other newly-discovered radiations.)

But slime’s moment of modishness passed. Now it’s the unlovely poster-child of environmental degradation: the stuff that will fill our soon-to-be-empty oceans, “home only to jellyfish, algae and microbial mats”, if we don’t do something sharpish to change our ecological ways.

Hand in hand with such millennial anxieties, of course, come the usual power fantasies: that we might harness all this unlovely slime — nothing more than water held in a cage of a few long-chain polymers — to transform our world, providing the base for new materials and soft robots, “transparent, stretchable, locomotive, biocompatible, remote-controlled, weavable, wearable, self-healing and shape-morphing, 3D-printed or improved by different ingredients”.

Wedlich’s enthusiasm is by no means misplaced. Slime is not just a largely untapped wonder material. It is also — really, truly — the source of life, and a key enabler of complex forms. We used to think the machinery of the first cells must have risen in clay hydrogels — a rather complicated and unlikely genesis — but it turns out that nucleic acids like DNA and RNA can sometimes form slimes on their own. Life, it turns out, does not need a substrate on which to arise. It is its own sticky home.

Slime’s effective barrier to pathogens may then have enabled complex tissues to differentiate and develop, slickly sequestered from a disease-ridden outside world. Wedlich’s tour of the human gut and its multiple slime layers (some lubricant, some gluey, and many armed with extraordinary electrostatic and molecular traps for one pathogen or another) is a tour de force of clear and gripping explanation.

Slime being, in essence, nothing more than stiffened water, there are more ways to make it than the poor reader could ever bear to hear about. So Wedlich very sensibly approaches her subject from the other direction, introducing slimes through their uses. Snails combine gluey and lubricating slimes to travel over dry ground one moment, cling to the underside of a leaf the next. Hagfish deter predators by jellifying the waters around them, shooting polymers from their skin like so many thousands of microscopic harpoons. Some squid, when threatened, add slime to their ink to create pseudomorphs — fake squidoids that hold together just long enough to distract a predator. Some squid pump out whole legions of such doppelgangers.

Wedlich’s own strategy, in writing Slime, is not dissimilar. She’s deliberately elusive. The reader never really feels they’ve got hold of the matter of her book; rather, they’re being provoked into punching through layer after dizzying layer, through masterpieces of fin de siècle glass-blowing into theories about the spontaneous generation of life, through the lifecycles of carnivorous plants into the tactics of Japanese balloon-bomb designers in the second world war, until, dizzy and gasping, they reach the end of Wedlich’s extraordinary mystery tour, not with a handle on slime exactly, but with an elemental and exultant new vision of what life may be: that which arises when the boundaries of earth, air and water are stirred in sunlight’s fire. It’s a vision that, for all its weight of well-marshalled modern detail, is one Aristotle would have recognised.

Life dies at the end

Reading Henry Gee’s A (Very) Short History of Life on Earth for the Times, 23 October 2021

The story of life on Earth is around 4.6 billion years long. We’re here to witness the most interesting bit (of course we are; our presence makes it interesting) and once we’re gone (wiped out in an eyeblink, or maybe, just maybe, speciated out of all recognition) the story will run on, and run down, for about another billion years, before the Sun incinerates the Earth.

It’s an epic story, and like most epic stories, it cries out for a good editor. In Henry Gee, a British palaeontologist and senior editor of the scientific journal Nature, it has found one. But Gee has his work cut out. The story doesn’t really get going until the end. The first two thirds are about slime. And once there are living things worth looking at, they keep keeling over. All the interesting species burn up and vanish like candles lit at both ends. Humans (the only animal we know of that’s even aware that this story exists) will last no time at all. And the five extinction events this planet has so far undergone might make you seriously wonder why life bothered in the first place.

We are told, for example, how two magma plumes in the late Permian killed this story just as it got going, wiping out nineteen out of every twenty species in the sea, and one out of every ten on land. It would take humans another 500 years of doing exactly what they’ve been doing since the Industrial Revolution to cause anything like that kind of damage.

A word about this: we have form in wiping things out and then regretting their loss (mammoths, dodos, passenger pigeons). And we really must stop mucking about with the chemistry of the air. But we’re not planet-killers. “It is not the Sixth Extinction,” Henry Gee reassures us. “At least, not yet.”

It’s perhaps a little bit belittling to cast Gee’s achievement here as mere “editing”. Gee’s a marvellously engaging writer, juggling humour, precision, polemic and poetry to enrich his impossibly telescoped account. His description of the lycopod forests that are the source of nearly all our coal — and whose trees grew only to reproduce, exploding into a crown of spore-bearing branches — brings to mind a battlefield of the First World War, a “craterscape of hollow stumps, filled with a refuse of water and death… rising from a mire of decay.” A little later a Lystrosaurus (a distant ancestor of mammals, and the most successful land animal ever) is sketched as having “the body of a pig, the uncompromising attitude toward food of a golden retriever, and the head of an electric can opener”.

Gee’s book is full of such dazzling walk-on parts, but most impressive are the elegant numbers he traces across evolutionary time. Here’s one: dinosaurs, unlike mammals, evolved a highly efficient one-way system for breathing that involved passing spent air through sacs distributed inside their bodies. They were air-cooled, which meant they could get very big without cooking themselves. They were lighter than they looked, literally full of hot air, and these advantages — lightweight structure, fast-running metabolism, air cooling — made their evolution into birds possible.

Here’s another tale: the make-up of our teeth — enamel over dentine over bone — is the same as you’d find in the armoured skin of the earliest fishes.

To braid such interconnected wonders into a book the size of a modest novel is essentially an exercise in precis, and a bravura demonstration of the editor’s art. Though the book (whose virtue is its brevity) is not illustrated, there are six timelines to guide us through the scalar shifts necessary to comprehend the staggering longueurs involved in bringing a planet to life. Life was entirely stationary and mostly slimy until only about 600 million years ago. Just ten million years ago, grasses evolved, and with them, grazing animals and their predators, some of whom, the primates, were on their way to making us. The earliest Sapiens appeared just over half a million years ago. Only when sea levels fell, around 120,000 years ago, did Sapiens get to migrate around the planet.

As one reads Gee’s “(very) short history”, one feels time slowing down and growing more granular. This deceleration gives Gee the space he needs to depict the burgeoning complexity of life as it spreads and evolves. It’s a scalar game that’s reminiscent of Charles and Ray Eames’s 1977 film *Powers of Ten*, which depicted the relative scale of the Universe by zooming in (through the atom) and out (through the cosmos) at logarithmic speed. It’s a dizzying and exhilarating technique which, for all that, makes clear sense out of very complex narratives.

Eventually — and long after we are gone — life will retreat beneath the earth as the swelling sun makes conditions on the planet’s surface impossible. The distinctions between things will fall away as life, struggling to live, becomes colossal, colonial and homogenous. Imagine vast subterranean figs, populated by evolved, worm-like insects…

Then, your mind reeling, try and work out what on earth people mean when they say that humans have conquered and/or despoiled the planet.

Our planet deserves our care, for sure, because we have to live here. But the planet has yet to register our existence, and probably never will. We are, Gee explains, just two and a half million years into a series of ice ages that will last for tens of millions of years more. Our species’ story extends not much beyond one of these hundreds of cycles. The human-induced injection of carbon dioxide “will set back the date of the next glacial advance” — and that is all. 250 million years hence, any future prospectors (and they won’t be human), armed with equipment “of the most refined sensitivity”, might — just might — be able to detect that, a short way through the Cenozoic Ice Age, *something happened*, “but they might be unable to say precisely what.”

It takes a long time to bring complex life to a planet, and complex life, once it runs out of wriggle room, collapses in an instant. Humans already labour under a considerable “extinction debt” since they have made their habitat (“nothing less than the entire Earth”) progressively less habitable. Almost everything that ever went extinct fell into the same trap. What makes our case tragic is that we’re conscious of what we’ve done; we’re trying to do something about it; and we know that, in the long run, it will never be enough.

Gee’s final masterstroke as editor is to make human sense, and real tragedy, from his unwieldy story’s glaring spoiler: that Life dies at the end.