The past in light materials

Reading Georgi Gospodinov’s Time Shelter for The Times, 30 April 2022 

Bulgaria’s best-known contemporary novelist gets into a tremendous historical tangle in Time Shelter, the tale of how a fictional Georgi Gospodinov (let’s call him GG) helps create the world’s first “clinic for the past”. Here, past ages (1980s Soviet Sofia, for example) are recreated to relieve an elderly clientele of the symptoms of senile dementia.

The bald premise here isn’t as fanciful as it might sound. I assume that while writing, Gospodinov was all over news stories about the Alexa nursing home in Dresden, which in 2017 recreated spaces from communist-era East Germany as a form of therapy.

From this shred of clinical fact, GG’s mind, like Stephen Leacock’s Lord Ronald, rides off in all directions.

GG’s boss at the clinic is his lugubrious time-jumping alter-ego Gaustine (who’s cropped up before, most memorably in Gospodinov’s 2011 novel The Physics of Sorrow and in an eponymous story in his 2007 collection And Other Stories). Gaustine hires GG to run the clinic; GG’s own father becomes a client.

Soon, carers and hangers-on are hankering to stay at the clinic, and Gaustine dreams up grand plans indeed — to build time clinics in every town; to build whole towns set in the past; ultimately, to induce whole nations to reenact their favourite historical eras! “The more a society forgets,” Gaustine observes, “the more someone produces, sells, and fills the freed-up niches with ersatz-memory… The past made from light materials, plastic memory as if spit out by a 3-D printer.”

This is a book about memory: how it fades, and how it is restored, even reinvented, in the imaginations of addled individuals, and in the civic discourse of fractious states.

As the clinic’s grandest schemes bear fruit, there’s political satire of the slapstick kind, as when “one day the president of a Central European country went to work in the national costume. Leather boots, tight pants, an embroidered vest, a small black bow above a white shirt, and a black bowler hat with a red geranium.” The scene in which a three-square-kilometre Bulgarian flag is dropped over the crowds in Sofia’s oldest park, the Borisova Gradina, is a fine piece of comic invention.

As the dream of European unity frays, and each European country embraces what it imagines (and votes) to be its best self, Gospodinov’s notes on national character and historical determinism threaten to swallow the book. But in a development that the reader will welcome (though it’s bad news all the way for GG) our narrator flees time-torn Bulgaria (torn between complacent Soviet nerds and keen reenactors of an unsuccessful national uprising in 1876), finds himself a cheap cell in a Franciscan monastery outside Zurich, and comes face to face with his own burgeoning dementia. “The great leaving is upon you,” GG announces, sliding from first person into second, from second into third, as his mind comes apart.

Gospodinov chillingly describes the process of mental ageing: “Long, lonely manoeuvres, waiting, more like trench warfare, lying in wait, hiding out, quick sorties, prowling the battlefield ‘between the clock and the bed,’ as one of the elderly Munch’s final self-portraits is called.”

Of course, this passage would have been ten times more chilling without that artistic reference tacked on the end. So what, exactly, is Gospodinov trying to do?

His story is strong enough — the tale of an innocent caught up in a compelling acquaintance’s hare-brained scheme. But Gospodinov is one of those writers who think novels can, and perhaps should, contain more than just a story. Notes, for example. Political observations. Passages of philosophy. Diary entries. Quotations.

GG comes back again and again to Thomas Mann’s polyphonic novel The Magic Mountain, but he could just as easily have cited Robert Musil, or James Joyce, or indeed Milan Kundera, whose mash-ups of story, essay and memoir (sometimes mashed even further by poor translation) bowled readers over in the 1980s.

Can novels really hold so much? Gospodinov risks a mischievous line or two about what a really brave, true, “inconsolable” novel would look like: “one in which all stories, the happened and the unhappened, float around us in the primordial chaos, shouting and whispering, begging and sniggering, meeting and passing one another by in the darkness.”

Not like a novel at all, then.

The risk with a project like this is that it slips fiction’s tracks and becomes nothing more than an overlong London Review of Books article, a boutique window displaying Gospodinov’s cultural capital: “Ooh! Look! Edvard Munch! And over there — Primo Levi!” A trove for quotation-hunters.

Happily for the book — not at all happily for Europe — Vladimir Putin’s rape of Ukraine has saved Time Shelter from this hostile reading. In its garish light, Gospodinov’s fanciful and rambling meditation on midlife crisis, crumbling memory and historical reenactment is proving psychologically astute and shockingly prescient.

Gospodinov’s Europe — complacent, sentimental and underconfident — is pretty much exactly the Europe Putin imagines he’s gone to war with. Motley, cacophonous, and speciously postmodern, it’s also the false future from which — and with a terrible urgency — we know we must awake.

 

Don’t stick your butter-knife in the toaster

Reading The End of Astronauts by Donald Goldsmith and Martin Rees for the Times, 26 March 2022

NASA’s Space Launch System, the most powerful rocket ever built, is now sitting on the launch pad. It’s the super-heavy-lift launch vehicle for Artemis, NASA’s international programme to establish a settlement on the Moon. The Artemis consortium includes everyone with an interest in space, from the UK to the UAE to Ukraine, but there are a few significant exceptions: India, Russia, and China. Russia and China already run a joint project to place their own base on the Moon.

Any fool can see where this is going. The conflict, when it comes, will arise over control of the Moon’s south pole, where permanently sunlit pinnacles provide ideal locations for solar collectors. These will power the extraction of ice from permanently night-filled craters nearby. And the ice? That will be used for rocket fuel.

The closer we get to putting humans in space, the more familiar the picture of our future becomes. You can get depressed about that hard-scrabble, piratical future, or exhilarated by it, but you surely can’t be surprised by it.

What makes this part of the human story different is not the exotic locations. It’s the fact that wherever we want to go, our machines will have to go there first. (In this sense, it’s the *lack* of strangeness and glamour that will distinguish our space-borne future — our lives spent inside a chain of radiation-hardened Amazon fulfilment centres.)

So why go at all? The argument for “boots on the ground” is more strategic than scientific. Consider the achievements of NASA’s still-young Perseverance rover, lowered to the surface of Mars in February 2021, and with it a lightweight proof-of-concept helicopter called Ingenuity. Through these machines, researchers around the world are already combing our neighbour planet for signs of past and present life.

What more can we do? Specifically, what (beyond dying, and most likely in horrible, drawn-out ways) can astronauts do that space robots cannot? And if robots do need time to develop valuable “human” skills — the ability to spot geological anomalies, for instance (though this is a bad example, because machines are getting good at this already) — doesn’t it make sense to hold off on that human mission, and give the robots a chance to catch up?

The argument to put humans into space is as old as NASA’s missions to the moon, and to this day it is driven by many of that era’s assumptions.

One was the belief (or at any rate the hope) that we might make the whole business cheap and easy by using nuclear-powered launch vehicles within the Earth’s atmosphere. Alas, radiological studies nipped that brave scheme in the bud.

Other Apollo-era assumptions have a longer shelf-life but are, at heart, more stupid. Dumbest of all is the notion — first dreamt up by Nikolai Fyodorov, a late-nineteenth century Russian librarian — that exploring outer space is the next stage in our species’ evolution. This stirring blandishment isn’t challenged nearly as often as it ought to be, and it collapses under the most cursory anthropological or historical interrogation.

That the authors of this minatory little volume — the UK’s Astronomer Royal and an award-winning space sciences communicator — beat Fyodorov’s ideas to death with sticks is welcome, to a degree. “The desire to explore is not our destiny,” they point out, “nor in our DNA, nor innate in human cultures.”

The trouble begins when the poor disenchanted reader asks, somewhat querulously, “Then why bother with outer space at all?”

Their blood lust yet unslaked, our heroes take a firmer grip on their cudgels. No, the Moon is not “rich” in helium-3, harvesting it would be a nightmare, and the technology we’d need to use it for nuclear fusion remains hypothetical. No, we are never going to be able to flit from planet to planet at will. Journey times to the outer planets are always going to be measured in years. Very few asteroids are going to be worth mining, and the risks of doing so probably outweigh the benefits. And no, we are not going to terraform Mars, the strongest argument against it being “the fact that we are doing a poor job of terraforming Earth.” In all these cases it’s not the technology that’s against us, so much as the mathematics — the sheer scale.

For anyone seriously interested in space exploration, this slaughter of the impractical innocents is actually quite welcome. The space sciences have for years been struggling to breathe in an atmosphere saturated with hype and science fiction. The superannuated blarney spouted by Messrs Musk and Bezos (who basically just want to get into the mining business) isn’t helping.

But for the rest of us, who just want to see some cool shit — will no crumb of romantic comfort be left to us?

In the long run, our destiny may very well lie in outer space — but not until and unless our machines overtake us. Given the harshness and scale of the world beyond Earth, there is very little that humans can do there for themselves. More likely, we will one day be carried to the stars as pets by vast, sentimental machine intelligences. This was the vision behind the Culture novels of the late great Iain Banks. And there — so long as they got over the idea they were the most important things in the universe — humans did rather well for themselves.

Rees and Goldsmith, not being science fiction writers, can only tip their hat to such notions. But spacefaring futures that do not involve other powers and intelligences are beginning to look decidedly gimcrack. Take, for example, the vast rotating space colonies dreamt up by physicist Gerard O’Neill in the 1970s. They’re designed so 20th-century vintage humans can survive among the stars. And this, as the authors show, makes such environments impossibly expensive, not to mention absurdly elaborate and unstable.

The conditions of outer space are not, after all, something to be got around with technology. To survive in any numbers, for any length of time, humans will have to adapt, biologically and psychologically, beyond their current form.

The authors concede that for now, this is a truth best explored in science fiction. Here, they write about immediate realities, and the likely role of humans in space up to about 2040.

The big problem with outer space is time. Space exploration is a species of pot-watching. Find a launch window. Plot your course. Wait. The journey to Mars is a seven-month curve covering more than ten times the distance between Mars and Earth at their closest approach — and the journey can only be made once every twenty-six months.

Gadding about the solar system isn’t an option, because it would require fuel your spacecraft hasn’t got. Fuel is great for hauling things and people out of Earth’s gravity well. In space, though, it becomes bulky, heavy and expensive.

This is why mission planners organise their flights so meticulously, years in advance, and rely on geometry, gravity, time and patience to see their plans fulfilled. “The energy required to send a laboratory toward Mars,” the authors explain, “is almost enough to carry it to an asteroid more than twice as far away. While the trip to the asteroid may well take more than twice as long, this hardly matters for… inanimate matter.”

This last point is the clincher. Machines are much less sensitive to time than we are. They do not age as we do. They do not need feeding and watering in the same way. And they are much more difficult to fry. Though capable of limited self-repair, humans are ill-suited to the rigours of space exploration, and perform poorly when asked to sit on their hands for years on end.

No wonder, then, that automated missions to explore the solar system have been NASA’s staple since the 1970s, while astronauts have been restricted to maintenance roles in low Earth orbit. Even here they’re arguably more trouble than they’re worth. The Hubble Space Telescope was repaired and refitted by astronauts five times during its 30-year lifetime — but at a total cost that would have paid for seven replacement telescopes.

Reading The End of Astronauts is like being told by an elderly parent, again and again, not to stick your butter-knife in the toaster. You had no intention of sticking your knife in the toaster. You know perfectly well not to stick your knife in the toaster. They only have to open their mouths, though, and you’re stabbing the toaster to death.

How to prevent the future

Reading Gerd Gigerenzer’s How to Stay Smart in a Smart World for the Times, 26 February 2022

Some writers are like Moses. They see further than everybody else, have a clear sense of direction, and are natural leaders besides. These geniuses write books that show us, clearly and simply, what to do if we want to make a better world.

Then there are books like this one — more likeable, and more honest — in which the author stumbles upon a bottomless hole, sees his society approaching it, and spends 250-odd pages scampering about the edge of the hole yelling at the top of his lungs — though he knows, and we know, that society is a machine without brakes, and all this shouting comes far, far too late.

Gerd Gigerenzer is a German psychologist who has spent his career studying how the human mind comprehends and assesses risk. We wouldn’t have lasted even this long as a species if we didn’t negotiate day-to-day risks with elegance and efficiency. We know, too, that evolution will have forced us to formulate the quickest, cheapest, most economical strategies for solving our problems. We call these strategies “heuristics”.

Heuristics are rules of thumb, developed by extemporising upon past experiences. They rely on our apprehension of, and constant engagement in, the world beyond our heads. We can write down these strategies; share them; even formalise them in a few lines of lightweight computer code.

Here’s an example from Gigerenzer’s own work: Is there more than one person in that speeding vehicle? Is it slowing down as ordered? Is the occupant posing any additional threat?

Abiding by the rules of engagement set by this tiny decision tree reduces civilian casualties at military checkpoints by more than sixty per cent.
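Gigerenzer’s claim that heuristics can be formalised in a few lines of computer code is almost literal. Here is a minimal sketch of such a fast-and-frugal tree: the three cues come from the checkpoint example above, but the branch outcomes are invented for illustration — they are not Gigerenzer’s published rules of engagement.

```python
# A fast-and-frugal tree: cues are checked one at a time, in a fixed
# order, and the first decisive answer ends the deliberation.
# Cues taken from the checkpoint example; outcomes are illustrative only.

def checkpoint_decision(multiple_occupants: bool,
                        slowing_as_ordered: bool,
                        additional_threat: bool) -> str:
    if slowing_as_ordered:
        return "hold fire"   # vehicle is complying with orders
    if not multiple_occupants:
        return "warn"        # a lone driver is most likely confused
    if additional_threat:
        return "engage"      # non-compliant and posing a further threat
    return "warn"            # non-compliant but otherwise unthreatening
```

The virtue of such a tree, as Gigerenzer stresses, is transparency: every branch can be read, questioned and audited — no big data required.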

We can apply heuristics to every circumstance we are likely to encounter, regardless of the amount of data available. The complex algorithms that power machine learning, on the other hand, “work best in well-defined, stable situations where large amounts of data are available”.

What happens if we decide to hurl 200,000 years of heuristics down the toilet, and kneel instead at the altar of occult computation and incomprehensibly big data?

Nothing good, says Gigerenzer.

How to Stay Smart is a number of books in one, none of which, on its own, is entirely satisfactory.

It is a digital detox manual, telling us how our social media are currently weaponised to erode our cognition (but we can fill whole shelves with such books).

It punctures many a rhetorical bubble around much-vaunted “artificial intelligence”, pointing out how easy it is to, say, get a young man of colour charged without bail using proprietary risk-assessment software. (In some notorious cases the software had been trained on, and so was liable to perpetuate, historical injustices.) Or would you prefer to force an autonomous car to crash by wearing a certain kind of T-shirt? (Simple, easily generated pixel patterns cause whole classes of networks to make bizarre inferential errors about the movement of surrounding objects.) This is enlightening stuff, or it would be, were the stories not quite so old.

One very valuable section explains why forecasts derived from large data sets become less reliable, the more data they are given. In the real world, problems are unbounded; the amount of data relevant to any problem is infinite. This is why past information is a poor guide to future performance, and why the future always wins. Filling a system with even more data about what used to happen will only bake in the false assumptions that are already in your system. Gigerenzer goes on to show how vested interests hide this awkward fact behind some highly specious definitions of what a forecast is.

But the most impassioned and successful of these books-within-a-book is the one that exposes the hunger for autocratic power, the political naivety, and the commercial chicanery that lie behind the rise of “AI”. (Healthcare AI is a particular bugbear: the story of how the Dutch Cancer Society was suckered into funding big data research, at the expense of cancer prevention campaigns that were shown to work, is especially upsetting).

Threaded through this diverse material is an argument Gigerenzer maybe should have made at the beginning: that we are entering a new patriarchal age, in which we are obliged to defer, neither to spiritual authority, nor to the glitter of wealth, but to unliving, unconscious, unconscionable systems that direct human action by aping human wisdom just well enough to convince us, but not nearly well enough to deliver happiness or social justice.

Gigerenzer does his best to educate and energise us against this future. He explains the historical accidents that led us to muddle cognition with computation in the first place. He tells us what actually goes on, computationally speaking, behind the chromed wall of machine-learning blarney. He explains why, no matter how often we swipe right, we never get a decent date; he explains how to spot fake news; and he suggests how we might claw our minds free of our mobile phones.

But it’s a hopeless effort, and the book’s most powerful passages explain exactly why it is hopeless.

“To improve the performance of AI,” Gigerenzer explains, “one needs to make the physical environment more stable and people’s behaviour more predictable.”

In China, the surveillance this entails comes wrapped in Confucian motley: under its social credit score system, sincerity, harmony and wealth creation trump free speech. In the West the self-same system, stripped of any ethic, is well advanced thanks to the efforts of the credit-scoring industry. One company, Acxiom, claims to have collected data from 700 million people worldwide, and up to 3000 data points for each individual (and quite a few are wrong).

That this bumper data harvest is an encouragement to autocratic governance hardly needs rehearsing, or so you would think.

And yet, in a 2021 study of 3,446 digital natives, 96 per cent “do not know how to check the trustworthiness of sites and posts.” I think Gigerenzer is pulling his punches here. What if, as seems more likely, 96 per cent of digital natives can’t be bothered to check the trustworthiness of sites and posts?

Asked by the author in a 2019 study how much they would be willing to spend each month on ad-free social media — that is, social media not weaponised against the user — 75 per cent of respondents said they would not pay a cent.

Have we become so trivial, selfish, short-sighted and penny-pinching that we deserve our coming subjection? Have we always been servile at heart, for all our talk of rights and freedoms; desperate for some grown-up to come tug at our leash, and bring us to heel?

You may very well think so. Gigerenzer could not possibly comment. He does, though, remark that operant conditioning (the kind of learning, explored in the 1940s by the behaviourist B F Skinner, that occurs through rewards and punishments) has never enjoyed such political currency, and that “Skinner’s dream of a society where the behaviour of each member is strictly controlled by reward has become reality.”

How to Stay Smart in a Smart World is an optimistic title indeed for a book that maps, with passion and precision, a hole down which we are already plummeting.

Where the law of preposterousness trumps all

Reading Pieter Waterdrinker’s The Long Song of Tchaikovsky Street: A Russian Adventure for the Times, 29 January 2022

On 16 June 1936 the author and Bolshevik sympathiser André Gide left France for a nine-week trip to the Soviet Union. In Soviet Russia, he was offered every comfort — an experience he found extremely unsettling. “Are these really the men who made the Revolution?” he asked, in his book Afterthoughts. “No; they are the men who profit by it. They may be members of the Party — there is nothing communist in their hearts.”

Parisian intellectuals immediately piled in on this turncoat, this viper: Romain Rolland called Gide’s reporting “astonishingly poor, superficial, puerile and contradictory”.

It is possible to misread The Long Song, Pieter Waterdrinker’s memoir of Russia and its revolutions, in the same way, and lay the same charges at his door. How do you write about a place like St Petersburg (where “although the law of chance may be predominant as a rule, the law of preposterousness trumps all”), how do you anatomise the superficiality, puerility and contradiction of Russian civic culture, without exhibiting the same qualities yourself? How do you explore a sewer without getting covered in…? Well, you get the idea.

Waterdrinker is a novelist best known for the farcical and exuberant The German Wedding (2009). Poubelle, published in 2016, is a dizzying state-of-nations novel rooted in the war in east Ukraine. Waterdrinker’s gift for savage comedy, and his war correspondent’s eye, have few contemporary equivalents. Reading Paul Evans’s impressively brutal translation of The Long Song, I was put in mind, not of any contemporary, but of Wyndham Lewis, a between-the-wars writer so contrarian and violent and hilarious, English letters have spent the 60-odd years since his death trying to bury him.

Waterdrinker complains that he’s been receiving similar mistreatment from the cognoscenti in his native Netherlands. And let’s be frank: there’s nothing more inconvenient, nothing more irritating, than a leftist who calls out socialism.

Be that as it may, The Long Song has already sold over 100,000 copies across mainland Europe. After twenty-odd years of trying, Waterdrinker is an overnight success.

What is this book, exactly? A synthesis of Waterdrinker’s irascible personality and colourful career? A non-fiction novel? A deconstructed political memoir?

Pieter Waterdrinker, who calls a spade a bloody shovel, calls it “… a personal book about the Russian Revolution of 1917. You buffed up your own life with a little patina, borrowed an abundance of what others had written, with liberal citations, made up a bit if need be, and mixed it all together like the ingredients of a thick, hearty soup, et voilà: it was as if the book had written itself.”

Waterdrinker interleaves his early biography (sucked into, and unceremoniously spat out of, the goldrush accompanying the collapse of the Soviet Union in the 1990s) with the history of revolutions in Russia. He concentrates particularly on the people (including Vladimir Lenin, “the bastard that started it all”) who either resided or worked on Tchaikovsky Street (named after the revolutionary, not the composer) where Waterdrinker and his wife Julia and their three cats once lived.

“One of our neighbours… was standing on the landing in a blue-and-white striped sailor’s top, hacking up an antique sideboard with an axe,” the author reminisces. “‘No, not Mama’s dresser…’ the man imitated his wife’s voice out of key. ‘But why not, you slut!’ The axe-head fell again, the splinters and brass fittings flying every which way.”

And this, bear in mind, is the couple’s isle of calm; the place from which Waterdrinker looks back on his early life, before he became a writer. It’s a tale dominated by a series of increasingly dubious business dealings, starting in 1988 with a scheme to smuggle bibles into Leningrad and ending in 1990 when he was strongly urged to transport a container of French wine to Kazan each month “in exchange for an unlimited supply of tender Tatar beauties to work as dancers in the Amsterdam nightlife circuit”. After a spell in the Netherlands, the couple returned to Russia in 1996.

There are moments of sybaritic delight, as when the young would-be writer bathes with his wife-to-be (a teacher who has lived in poverty and squalor for years) in a bathtub of Soviet champagne. There are moments of horror, as when the author’s business associates are hung from trees to freeze to death; or are, more straightforwardly, shot. There are unforgettable grotesques: the half-mad elderly Madam Pokrovskaya, who has eluded the tragedy of a life spent in St Petersburg by entirely abandoning her sense of time; young Waterdrinker’s grinning business partner Swindleman, so hollow, he rattles. In the end (but not so soon as to spoil the book) a sort of tinnitus sets in. The apartment on Tchaikovsky Street is itself lost to redevelopers, and the book closes in clouds of plaster dust and the thudding of drills.

The Long Song draws parallels between the revolution of 1917 and the collapse of the Soviet Union in 1991. It is, to Waterdrinker’s mind, the same revolution, which is to say, the same orgy of resentment, hate and nihilism, fomented by psychopaths, and barely contained — drab decades at a time — by self-serving bureaucrats and secret policemen.

Waterdrinker sees a continuum of moral annihilation stretching from the czars to the present. He concludes that Russian political culture runs on hatred, and its revolutions are, far from being attempts at treatment, merely symptoms of an ineradicable malaise. Waterdrinker prefers witness over analysis, because he’s a sometime war correspondent, and eye-witness is his metier; and anyway, how are you supposed to “analyse” moments like the one recorded by Philip Jordan, a Missouri-born African-American and assistant to the US ambassador, when “in a house not far from the embassy, [the Red Guards] murdered a little girl, twelve bayonets stuck into her body”? The Long Song’s abiding emblem is a description, not of the taking of the Winter Palace, but of the taking of the Winter Palace’s wine cellar, some eight months later: “scenes of tableaux worthy of Dante, in which men up to their ankles in wine shot at each other, the blood of the dead and the wounded mixing with the alcohol.”

The Long Song contributes to a tradition that’s recognised for its literary merit (think Bunin, think Zamyatin) but which tends to get saddled with the “contrarian” label — not least because much of the Left establishment still pays lip-service to the Bolshevik idea. (Consider how Orwell was treated by his contemporaries — or Christopher Hitchens, for that matter.)

Waterdrinker is too much the literary werewolf to change many made-up minds. But, given Russia’s current expansionist posturings, we’d do well to give him an audience. Listen, if not to him, then to the diarist who once shared his street, Zinaida Hippius, who watched this horrorshow the first time around: “If a country can exist in Europe in the twentieth century where there’s such phenomenal and previously unwitnessed slavery, and Europe doesn’t understand that or else accepts it, then Europe must meet its downfall.”

 

How to live an extra life

Reading Sidarta Ribeiro’s The Oracle of Night: The History and Science of Dreams for the Times, 2 January 2022

Early in January 1995 Sidarta Ribeiro, a Brazilian student of neuroscience, arrived in New York City to study for his doctorate at Rockefeller University. He rushed enthusiastically into his first meeting — only to discover he could not understand a word people were saying. He had, in that minute, completely forgotten the English language.

It did not return. He would turn up for work, struggle to make sense of what was going on, and wake up, hours later, on his supervisor’s couch. The colder and snowier the season became, the more impossible life got until, “when February came around, in the deep silence of the snow, I gave in completely and was swallowed up into the world of Morpheus.”

Ribeiro struggled into lectures so he didn’t get kicked out; otherwise he spent the entire winter in bed, sleeping; dozing; above all, dreaming.

April brought a sudden and extraordinary recovery. Ribeiro woke up understanding English again, and found he could speak it more fluently than ever before. He befriended colleagues easily, drove research, and, in time, announced the first molecular evidence of Freud’s “day residue” hypothesis, in which dreams exist to process memories of the previous day.

Ribeiro’s rich dream life that winter convinced him that it was the dreams themselves — and not just the napping — that had wrought a cognitive transformation in him. Yet dreams, it turned out, had fallen almost entirely off the scientific radar.

The last dream researcher to enter public consciousness was probably Sigmund Freud. Freud at least seemed to draw coherent meaning from dreams — dreams that had been focused to a fine point by fin de siècle Vienna’s intense milieu of sexual repression.

But Freud’s “royal road to the unconscious” has been eroded since by a revolution in our style of living. Our great-grandparents could remember a world without artificial light. Now we play on our phones until bedtime, then get up early, already focused on a day that is, when push comes to shove, more or less identical to yesterday. We neither plan our days before we sleep, nor do we interrogate our dreams when we wake. Is it any wonder, then, that our dreams are no longer able to inspire us? When US philosopher Owen Flanagan says that “dreams are the spandrels of sleep”, he speaks for almost all of us.

Ribeiro’s distillation of his life’s work offers a fascinating corrective to this reductionist view. His experiments have made Freudian dream analysis and other elements of psychoanalytic theory definitively testable for the first time — and the results are astonishing. There is material evidence, now, for the connection Freud made between dreaming and desire: both involve the selective release of the brain chemical dopamine.

The middle chapters of The Oracle of Night focus on the neuroscience, capturing, with rare candour, all the frustrations, controversies, alliances, ambiguities and accidents that make up a working scientist’s life.

To study dreams, Ribeiro explains, is to study memories: how they are received in the hippocampus, then migrate out through surrounding cortical tissue, “burrowing further and further in as life goes on, ever more extensive and resistant to disturbances”. This is why some memories can survive, even for more than a hundred years, in a brain radically altered by age.

Ribeiro is an excellent communicator of detail, and this is important, given the size and significance of his claims. “At their best,” he writes, “dreams are the actual source of our future. The unconscious is the sum of all our memories and of all their possible combinations. It comprises, therefore, much more than what we have been — it comprises all that we can be.”

To make such a large statement stick, Ribeiro is going to need more than laboratory evidence, and so his scientific account is generously bookended with well-evidenced anthropological and archaeological speculation. Dinosaurs enjoyed REM sleep, apparently — a delightfully fiendish piece of deduction. And was the Bronze Age Collapse, around 1200 BC, triggered by a qualitative shift in how we interpreted dreams?

These are sizeable bread slices around an already generous Christmas-lunch sandwich. On page 114, when Ribeiro declares that “determining a point of departure for sleep requires that we go back 4.5 billion years and imagine the conditions in which the first self-replicating molecules appeared,” the poor reader’s heart may quail and their courage falter.

A more serious obstacle — and one quite out of Ribeiro’s control — is that friend (we all have one) who, feet up on the couch and both hands wrapped around the tea, baffs on about what their dreams are telling them. How do you talk about a phenomenon that’s become the preserve of people one would happily emigrate to avoid?

And yet, by taking dreams seriously, Ribeiro must also talk seriously about shamanism, oracles, prediction and mysticism. This is only reasonable, if you think about it: dreams were the source of shamanism (one of humanity’s first social specialisations), and shamanism in its turn gave us medicine, philosophy and religion.

When lives were socially simple and threats immediate, the relevance of dreams was not just apparent; it was impelling. Even a stopped watch is correct twice a day. With a limited palette of dream materials to draw from, was it really so surprising that Rome’s first emperor Augustus found his rise to power predicted by dreams — at least according to his biographer Suetonius? “By simulating objects of desire and aversion,” Ribeiro argues, “the dream occasionally came to represent what would in fact happen”.

Growing social complexity enriches dream life, but it also fragments it (which may explain all those complaints that the gods have fallen silent, which we find in texts dated between 1200 and 800 BC). The dreams typical of our time, says Ribeiro, are “a blend of meanings, a kaleidoscope of wants, fragmented by the multiplicity of desires of our age”.

The trouble with a book of this size and scale is that the reader, feeling somewhat punch-drunk, can’t help but wish that two or three better books had been spun from the same material. Why naps are good for us, why sleep improves our creativity, how we handle grief — these are instrumentalist concerns that might, under separate covers, have greatly entertained us. In the end, though, I reckon Ribeiro made the right choice. Such books give us narrow, discrete glimpses into the power of dreams, but leave us ignorant of their real nature. Ribeiro’s brick of a book shatters our complacency entirely, and for good.

Dreaming is a kind of thinking. Treating dreams as spandrels — as so much psychic “junk code” — is not only culturally illiterate; it runs against everything current science is telling us. You are a dreaming animal, says Ribeiro, for whom “dreams are like stars: they are always there, but we can only see them at night”.

Keep a dream diary, Ribeiro insists. So I did. And as I write this, a fortnight on, I am living an extra life.

“A moist and feminine sucking”

Reading Susan Wedlich’s Slime: A natural history for the Times, 6 November 2021

For over two thousand years, says science writer Susan Wedlich, quoting German historian Richard Hennig, maritime history has been haunted by mention of a “congealed sea”. Ships, it is said, have been caught fast and even foundered in waters turned to slime.

Slime stalks the febrile dreams of landlubbers, too: Jean-Paul Sartre succumbed to its “soft, yielding action, a moist and feminine sucking”, in a passage, lovingly quoted here, that had this reader instinctively scrabbling for the detergent.

We’ve learned to fear slime, in a way that would have seemed quite alien to the farmers of ancient Egypt, who supposed slime and mud were the base materials of life itself. So, funnily enough, did German zoologist Ernst Haeckel, a champion of Charles Darwin, who saw primordial potential in the gelid lumps being trawled from the sea floor by various oceanographic expeditions. (This turned out to be calcium sulphate, precipitated by the chemical reaction between deep-sea mud and alcohol used for the preservation of aquatic specimens. Haeckel never quite got over his disappointment.)

For Susan Wedlich, it is not enough that we should learn about slime; nor even that we should be entertained by it (though we jolly well are). Wedlich wants us to care deeply about slime, and musters all the rhetorical tools at her disposal to achieve her goal. “Does even the word ‘slime’ have to elicit gagging histrionics?” she exclaims, berating us for our phobia: “if we neither recognize nor truly know slime, how are we supposed to appreciate it or use it for our own ends?”

This is overdone. Nor do we necessarily know enough about slime to start shouting about it. To take one example, using slime to read our ecological future turns out to be a vexed business. There’s a scum of nutrients held together by slime floating on top of the oceans. A fraction of a millimetre thick, it’s called the “sea-surface micro-layer”. Global warming might be thinning it, or thickening it, and doing either might be increasing the chemical transport taking place between air and ocean — or retarding it — to unknown effect. So there: yet another thing to worry about.

For sure, slime holds the world together. Slimes, rather: there are any number of ways to stiffen water so that it acts as a lubricant, a glue, or a barrier. Whatever its origins, it is most conspicuous when it disappears — as when overtilling of America’s Great Plains caused the Dust Bowl in 1933, or when the gluey glycan coating of one’s blood vessels starts to mysteriously shear away during surgery.

There was a moment, in the 1920s, when slime shed its icky materiality and became almost cool. Artists both borrowed from and inspired Haeckel’s exquisite drawings of delicate maritime invertebrates. And biologists, looking for the mechanisms underpinning memory and heredity, would have liked nothing more than to find that the newly-identified protoplasm within our every cell was recording, like an Edison drum, the tremblings of a ubiquitous, information-rich aether. (Sounds crazy now, but the era was, after all, bathing in X-rays and other newly-discovered radiations.)

But slime’s moment of modishness passed. Now it’s the unlovely poster-child of environmental degradation: the stuff that will fill our soon-to-be-empty oceans, “home only to jellyfish, algae and microbial mats”, if we don’t do something sharpish to change our ecological ways.

Hand in hand with such millennial anxieties, of course, come the usual power fantasies: that we might harness all this unlovely slime — nothing more than water held in a cage of a few long-chain polymers — to transform our world, providing the base for new materials and soft robots, “transparent, stretchable, locomotive, biocompatible, remote-controlled, weavable, wearable, self-healing and shape-morphing, 3D-printed or improved by different ingredients”.

Wedlich’s enthusiasm is by no means misplaced. Slime is not just a largely untapped wonder material. It is also — really, truly — the source of life, and a key enabler of complex forms. We used to think the machinery of the first cells must have arisen in clay hydrogels — a rather complicated and unlikely genesis — but it turns out that nucleic acids like DNA and RNA can sometimes form slimes on their own. Life, it turns out, does not need a substrate on which to arise. It is its own sticky home.

Slime’s effective barrier to pathogens may then have enabled complex tissues to differentiate and develop, slickly sequestered from a disease-ridden outside world. Wedlich’s tour of the human gut, and its multiple slime layers (some lubricant, some gluey, and many armed with extraordinary electrostatic and molecular traps for one pathogen or another), is a tour de force of clear and gripping explanation.

Slime being, in essence, nothing more than stiffened water, there are more ways to make it than the poor reader could ever bear to hear about. So Wedlich very sensibly approaches her subject from the other direction, introducing slimes through their uses. Snails combine gluey and lubricating slimes to travel over dry ground one moment, cling to the underside of a leaf the next. Hagfish deter predators by jellifying the waters around them, shooting polymers from their skin like so many thousands of microscopic harpoons. Some squid, when threatened, add slime to their ink to create pseudomorphs — fake squidoids that hold together just long enough to distract a predator. Some squid pump out whole legions of such doppelgangers.

Wedlich’s own strategy, in writing Slime, is not dissimilar. She’s deliberately elusive. The reader never really feels they’ve got hold of the matter of her book; rather, they’re being provoked into punching through layer after dizzying layer, through masterpieces of fin de siècle glass-blowing into theories about the spontaneous generation of life, through the lifecycles of carnivorous plants into the tactics of Japanese balloon-bomb designers in the second world war, until, dizzy and gasping, they reach the end of Wedlich’s extraordinary mystery tour, not with a handle on slime exactly, but with an elemental and exultant new vision of what life may be: that which arises when the boundaries of earth, air and water are stirred in sunlight’s fire. It’s a vision that, for all its weight of well-marshalled modern detail, is one Aristotle would have recognised.

Life dies at the end

Reading Henry Gee’s A (Very) Short History of Life on Earth for the Times, 23 October 2021

The story of life on Earth is around 4.6 billion years long. We’re here to witness the most interesting bit (of course we are; our presence makes it interesting) and once we’re gone (wiped out in an eyeblink, or maybe, just maybe, speciated out of all recognition) the story will run on, and run down, for about another billion years, before the Sun incinerates the Earth.

It’s an epic story, and like most epic stories, it cries out for a good editor. In Henry Gee, a British palaeontologist and senior editor of the scientific journal Nature, it has found one. But Gee has his work cut out. The story doesn’t really get going until the end. The first two thirds are about slime. And once there are living things worth looking at, they keep keeling over. All the interesting species burn up and vanish like candles lit at both ends. Humans (the only animal we know of that’s even aware that this story exists) will last no time at all. And the five extinction events this planet has so far undergone might make you seriously wonder why life bothered in the first place.

We are told, for example, how two magma plumes in the late Permian killed this story just as it got going, wiping out nineteen out of every twenty species in the sea, and one out of every ten on land. It would take humans another 500 years of doing exactly what they’ve been doing since the Industrial Revolution to cause anything like that kind of damage.

A word about this: we have form in wiping things out and then regretting their loss (mammoths, dodos, passenger pigeons). And we really must stop mucking about with the chemistry of the air. But we’re not planet-killers. “It is not the Sixth Extinction,” Henry Gee reassures us. “At least, not yet.”

It’s perhaps a little belittling to cast Gee’s achievement here as mere “editing”. Gee’s a marvellously engaging writer, juggling humour, precision, polemic and poetry to enrich his impossibly telescoped account. His description of the lycopod forests that are the source of nearly all our coal — and whose trees grew only to reproduce, exploding into a crown of spore-bearing branches — brings to mind a battlefield of the First World War, a “craterscape of hollow stumps, filled with a refuse of water and death… rising from a mire of decay.” A little later a Lystrosaurus (a distant ancestor of mammals, and the most successful land animal ever) is sketched as having “the body of a pig, the uncompromising attitude toward food of a golden retriever, and the head of an electric can opener”.

Gee’s book is full of such dazzling walk-on parts, but most impressive are the elegant numbers he traces across evolutionary time. Here’s one: dinosaurs, unlike mammals, evolved a highly efficient one-way system for breathing that involved passing spent air through sacs distributed inside their bodies. They were air-cooled, which meant they could get very big without cooking themselves. They were lighter than they looked, literally full of hot air, and these advantages — lightweight structure, fast-running metabolism, air cooling — made their evolution into birds possible.

Here’s another tale: the make-up of our teeth — enamel over dentine over bone — is the same as you’d find in the armoured skin of the earliest fishes.

To braid such interconnected wonders into a book the size of a modest novel is essentially an exercise in precis, and a bravura demonstration of the editor’s art. Though the book (whose virtue is its brevity) is not illustrated, there are six timelines to guide us through the scalar shifts necessary to comprehend the staggering longueurs involved in bringing a planet to life. Life was entirely stationary and mostly slimy until only about 600 million years ago. Just ten million years ago, grasses evolved, and with them, grazing animals and their predators, some of whom, the primates, were on their way to making us. The earliest Sapiens appeared just over half a million years ago. Only when sea levels fell, around 120,000 years ago, did Sapiens get to migrate around the planet.

As one reads Gee’s “(very) short history”, one feels time slowing down and growing more granular. This deceleration gives Gee the space he needs to depict the burgeoning complexity of life as it spreads and evolves. It’s a scalar game that’s reminiscent of Charles and Ray Eames’s 1977 film *Powers of Ten*, which depicted the relative scale of the Universe by zooming in (through the atom) and out (through the cosmos) at logarithmic speed. It’s a dizzying and exhilarating technique which, for all that, makes clear sense out of very complex narratives.

Eventually — and long after we are gone — life will retreat beneath the earth as the swelling sun makes conditions on the planet’s surface impossible. The distinctions between things will fall away as life, struggling to live, becomes colossal, colonial and homogeneous. Imagine vast subterranean figs, populated by evolved, worm-like insects…

Then, your mind reeling, try and work out what on earth people mean when they say that humans have conquered and/or despoiled the planet.

Our planet deserves our care, for sure, because we have to live here. But the planet has yet to register our existence, and probably never will. We are, Gee explains, just two and a half million years into a series of ice ages that will last for tens of millions of years more. Our species’ story extends not much beyond one of these hundreds of cycles. The human-induced injection of carbon dioxide “will set back the date of the next glacial advance” — and that is all. 250 million years hence, any future prospectors (and they won’t be human), armed with equipment “of the most refined sensitivity”, might — just might — be able to detect that, a short way through the Cenozoic Ice Age, *something happened*, “but they might be unable to say precisely what.”

It takes a long time to bring complex life to a planet, and complex life, once it runs out of wriggle room, collapses in an instant. Humans already labour under a considerable “extinction debt”, since they have made their habitat (“nothing less than the entire Earth”) progressively less habitable. Almost everything that ever went extinct fell into the same trap. What makes our case tragic is that we’re conscious of what we’ve done; we’re trying to do something about it; and we know that, in the long run, it will never be enough.

Gee’s final masterstroke as editor is to make human sense, and real tragedy, from his unwieldy story’s glaring spoiler: that Life dies at the end.

“Grotesque, awkward, and disagreeable”

Reading Stanislaw Lem’s Dialogues for the Times, 5 October 2021

Some writers follow you through life. Some writers follow you beyond the grave. I was seven when Andrei Tarkovsky filmed Lem’s satirical sci-fi novel Solaris, thirty-seven when Steven Soderbergh’s very different (and hugely underrated) Solaris came out, forty when Lem died. Since then, a whole other Stanislaw Lem has arisen, reflected in philosophical work that, while widely available elsewhere, had to wait half a century or more for an English translation. In life I have nursed many regrets: that I didn’t learn Polish is not the least of them.

The point about Lem is that he writes about the future, predicting the way humanity’s inveterate tinkering will enable, pervert and frustrate its ordinary wants and desires. This isn’t “the future of technology” or “the future of the western world” or “the future of the environment”. It’s neither “the future as the author would like it to be”, nor “the future if the present moment outstayed its welcome”. Lem knows a frightening amount of science, and even more about technology, but what really matters is what he knows about people. His writing is not just surprisingly prescient; it’s timeless.

Dialogues is about cybernetics, the science of systems. A system is any material arrangement that responds to environmental feedback. A steam engine is a mere mechanism, until you add the governor that controls its internal pressure. Then it becomes a system. When Lem was writing, systems thinking was meant to transform everything, conciliating between the physical sciences and the humanities to usher in a technocratic Utopia.

Enthusiastic as 1957-vintage Lem was, there is something deliciously levelling about how he introduces the cybernetic idea. We can bloviate all we like about using data and algorithms to create a better society; what drives Philonous and Hylas’s interest in these eight dialogues (modelled on Berkeley’s Three Dialogues of 1713) is Hylas’s desperate desire to elude Death. This new-fangled science of systems reimagines the world as information, and the thing about information is that it can be transmitted, stored and (best of all) copied. Why then can’t it transmit, store and copy poor Death-haunted Hylas?

Well, of course, that’s certainly do-able, Philonous agrees — though Hylas might find cybernetic immortality “grotesque, awkward, and disagreeable”. Sure enough, Hylas baulks at Philonous’s culminating vision of humanity immortalised in serried ranks of humming metal cabinets.

This image certainly was prescient: Cybernetics was supposed to be a philosophy, one that would profoundly change our understanding of the animate and inanimate world. The philosophy failed to catch on, but its insights created something utterly unexpected: the computer.

Dialogues is important now because it describes (or described, rather, more than half a century ago — you can almost hear Lem’s slow hand-clapping from the Beyond) all the ways we do not comprehend the world we have made.

Cybernetics teaches us that systems are animate. It doesn’t matter what a system is made from. Workers in an office, ones and zeroes clouding a chip, proteins folding and refolding in a living cell, string and pulleys in a playground: all are good building materials for systems, and once a system is up and running, it is no longer reducible to its parts. It’s a distinct, unified whole, shaped by its history and actively coexisting with its environment, and exhibiting behaviour that cannot be precisely predicted from its structure. “If you insist on calling this new system a mechanism,” Lem remarks, drily, “then you must apply that term to living beings as well.”

We’ve yet to grasp this nettle: that between the living and non-living worlds sits a world of systems, unalive yet animate. No wonder, lacking this insight, we spend half our lives sneering at the mechanisms we do understand (“Alexa, stop calling my Mum!”) and the other half on our knees, worshipping the mechanisms we don’t. (“It says here on Facebook…”) The very words we use — “artificial intelligence” indeed! — reveal the paucity of our understanding.

Lem understood, as no-one then or since has understood, how undeserving of worship are the systems (be they military, industrial or social) that are already strong enough to determine our fate. A couple of years ago, around the time Hong Kong protesters were destroying facial recognition towers, a London pedestrian was fined £90 for hiding his face from an experimental Met camera. The consumer credit reporting company Experian uses machine learning to decide the financial trustworthiness of over a billion people. China’s Social Credit System (actually the least digitised of China’s surveillance systems) operates under multiple, often contradictory legal codes.

The point about Lem is not that he was terrifyingly smart (though he was that); it’s that he had skin in the game. He was largely self-taught, because he had to quit university after writing satirical pieces about Soviet poster-boy Trofim Lysenko (who denied the existence of genes). Before that, he was dodging Nazis in Lviv (and mending their staff cars so that they would break down). In his essay “Applied Cybernetics: An Example from Sociology”, Lem uses the new-fangled science of systems to anatomise the Soviet thinking of his day, and from there, to explain how totalitarianism is conceived, spread and performed. Worth the price of the book in itself, this little essay is a tour de force of human sympathy and forensic fury, shorter than Solzhenitsyn, and much, much funnier than Hannah Arendt.

Peter Butko’s translations of the Dialogues, and the revisionist essays Lem added to the 1971 second edition, are as witty and playful as Lem’s allusive Polish prose demands. His endnotes are practically a book in themselves (and an entertaining one, too).

Translated so well, Lem needs no explanation, no contextualisation, no excuse-making. Lem’s expertise lay in technology, but his loyalty lay with people, in all their maddening tolerance for bad systems. “There is nothing easier than to create a state in which everyone claims to be completely satisfied,” he wrote; “being stretched on the bed, people would still insist — with sincerity — that their life is perfectly fine, and if there was any discomfort, the fault lay in their own bodies or in their nearest neighbor.”

 

“It’s wonderful what a kid can do with an Erector Set”

Reading Across the Airless Wilds by Earl Swift for the Times, 7 August 2021

There’s something about the moon that encourages, not just romance, not just fancy, but also a certain silliness. It was there in spades at the conference organised by the American Rocket Society in Manhattan in 1961. Time Magazine delighted in this “astonishing exhibition of the phony and the competent, the trivial and the magnificent.” (“It’s wonderful what a kid can do with an Erector Set”, one visiting engineer remarked.)

But the designs on show there were hardly any more bizarre than those put forward by the great minds of the era. The German rocket pioneer Hermann Oberth wrote an entire book advocating a moon car that could, if necessary, pogo-stick about the satellite. When Howard Seifert, the American Rocket Society’s president, advocated abandoning the car and preserving the pogo stick — well, Seifert’s “platform” might not have made it to the top of NASA’s favoured designs for a moon vehicle, but it was taken seriously.

Earl Swift is not above a bit of fun and wonder, but the main job of Across the Airless Wilds (a forbiddingly po-faced title for such an enjoyable book) is to explain how the oddness of the place — barren, airless, and boasting just one-sixth Earth’s gravity — tended to favour some very odd design solutions. True, NASA’s lunar rover, which actually flew on the last three Apollo missions, looks relatively normal, like a car (or at any rate, a go-kart). But this was really to do with weight constraints, budgets and historical accidents; a future in which the moon is explored by pogo-stick is still not quite out of the running.

For all its many rabbit-holes, this is a clear and compelling story about three men: Sam Romano, boss of General Motors’s lunar program, his visionary off-road specialist Mieczyslaw Gregory Bekker (Greg to his American friends) and Greg’s invaluable engineer Ferenc (Frank) Pavlics. These three were toying with the possibility of moon vehicles a full two years before the US boasted any astronauts, and the problems they confronted were not trivial. Until Bekker came along, tyres, wheels and tracks for different surfaces were developed more or less through informed trial and error. It was Bekker who treated off-roading as an intellectual puzzle as rigorous as the effort to establish the relationship between a ship’s hull and water, or a plane’s wing and the air it rides.

Not that rigour could gain much toe-hold in the early days of lunar design, since no-one could be sure what the consistency of the moon’s surface actually was. It was probably no dustier than an Earthbound desert, but there was always the nagging possibility that a spacecraft and its crew, landing on a convenient lunar plain, might vanish into some ghastly talcum quicksand.

On 3 February 1966 the Soviet probe Luna 9 put paid to that idea, settling, firmly and without incident, onto the Ocean of Storms. Though their plans for a manned mission had been abandoned, the Soviets were no bit player. Four years later it was an eight-wheel Soviet robot, Lunokhod 1 (delivered by the Luna 17 spacecraft), that first drove across the moon’s surface. Seven feet long and four feet tall, it upstaged NASA’s rovers nicely, with its months and miles of journey time, 25 soil samples and literally thousands of photographs.

Meanwhile NASA was having to re-imagine its Lunar Roving Vehicle any number of times, as it sought to wring every possible ounce of value from a programme that was being slashed by Congress a good year before Neil Armstrong even set foot on the Moon.

Conceived when it was assumed Apollo would be the first chapter in a long campaign of exploration and settlement, the LRV was being shrunk and squeezed and simplified to fit through an ever-tightening window of opportunity. This is the historical meat of Swift’s book, and he handles the technical, institutional and commercial complexities of the effort with a dramatist’s eye.

Apollo was supposed to pave the way for two-rocket missions. When they vanished from the schedule, the rover’s future hung in doubt. Without a second Saturn to carry cargo, any rover bound for the moon would have to be carried on the same lunar module that carried the crew. No-one knew if this was even possible.

There was, however, one wedge-shaped cavity still free between the descent stage’s legs: an awkward triangle “about the size and shape of a pup tent standing on its end.” So it was that the LRV, which once boasted six wheels and a pressurised cabin, ended up the machine a Brompton folding bike wants to be when it grows up.

Ironically, it was NASA’s dwindling prospects post-Apollo that convinced its managers to origami something into that tiny space, just a shade over seventeen months prior to launch. Why not wring as much value out of Apollo’s last missions as possible?

The result was a triumph, though it maybe didn’t look like one. Its seats were basically deckchairs. It had neither roof, nor body. There was no steering wheel, just a T-bar the astronaut leant on. It weighed no more than one fully kitted-out astronaut, and its electric motors ground out just one horsepower. On the flat, it reached barely ten miles an hour.

But it was superbly designed for the moon, where a turn at 6MPH had it fishtailing like a speedboat, even as it bore more than twice its weight around an area the size of Manhattan.

In a market already oversaturated with books celebrating the 50th anniversary of Apollo in 2019 (many of them very good indeed) Swift finds his niche. He’s not narrow: there’s plenty of familiar context here, including a powerful sketch of the former Nazi rocket scientist Wernher von Braun. He’s not especially folksy, or willfully eccentric: the lunar rover was a key element in the Apollo program, and he wants it taken seriously. Swift finds his place by much more ingenious means — by up-ending the Apollo narrative entirely (he would say he was turning it right-side up) so that every earlier American venture into space was preparation for the last three trips to the moon.

He sets out his stall early, drawing a striking contrast between the travails of Apollo 14 astronauts Alan Shepard Jr and Edgar Mitchell — slogging half a mile up the wall of the wrong crater, dragging a cart — with the vehicular hijinks of Apollo 15’s Dave Scott and Jim Irwin, crossing a mile of hummocky, cratered terrain rimmed on two sides by mountains the size of Everest, to a spectacular gorge, then following its edge to the foot of a huge mountain, then driving up its side.

Detailed, thrilling accounts of the two subsequent Rover-equipped Apollo missions, Apollo 16 in the Descartes highlands and Apollo 17 in the Taurus-Littrow Valley, carry the pointed message that the viewing public began to tune out of Apollo just as the science, the tech, and the adventure had gotten started.

Swift conveys the baffling, unreadable lunar landscape very well, but Across the Airless Wilds is above all a human story, and a triumphant one at that, about NASA’s most-loved machine. “Everybody you meet will tell you he worked on the rover,” remarks Eugene Cowart, Boeing’s chief engineer on the project. “You can’t find anybody who didn’t work on this thing.”

Nothing but the truth

Reading The Believer by Ralph Blumenthal for the Times, 24 July 2021

In September 1965 John Fuller, a columnist for the Saturday Review in New York, was criss-crossing Rockingham County in New Hampshire in pursuit of a rash of UFO sightings, when he stumbled upon a darker story — one so unlikely, he didn’t follow it up straight away.

Not far from the local Pease Air Force base, a New Hampshire couple had been abducted and experimented upon by aliens.

Every few years, ever since the end of the Second World War, others had claimed similar experiences. But they were few and scattered, their accounts were incredible and florid, and there was never any corroborating physical evidence for their allegations. It took decades before anyone in academia took an interest in their plight.

In January 1990 the artist Budd Hopkins, whose Intruders Foundation provided support for “experiencers” — alleged victims of alien abduction — was visited by John Edward Mack, head of psychiatry at Harvard’s medical school. Mack’s interest had been piqued by his friend the psychoanalyst Robert Lifton. An old hand at treating severe trauma, particularly among Hiroshima survivors and Vietnam veterans, Lifton found himself stumped when dealing with experiencers: “It wasn’t clear to me or to anybody else exactly what the trauma was.”

Mack was immediately intrigued. Highly strung, narcissistic, psychologically damaged by his mother’s early death, Mack needed a deep intellectual project to hold himself together. He was interested in how perceptions and beliefs about reality shape society. A Prince of Our Disorder, his Pulitzer Prize-winning psychological biography of T E Lawrence, was his most intimate statement on the subject. Work on the psychology of the Cold War had drawn him into anti-nuclear activism, and close association with the International Physicians for the Prevention of Nuclear War, which won a Nobel peace prize in 1985. The institutions he created to explore the frontiers of human experience survive today in the form of the John E. Mack Institute, dedicated “to further[ing] the evolution of the paradigms by which we understand human identity”.

Just as important, though, Mack enjoyed helping people, and he was good at it. In 1964 he had established mental health services in Cambridge, Mass., where hundreds of thousands were without any mental health provision at all. As a practitioner, he had worked particularly with children and adolescents, had treated suicidal patients, and published research on heroin addiction.

Whitley Strieber (whose book Communion, about his own alien abduction, is one of the most disturbing books ever to reach the bestseller lists) observed how Mack approached experiencers: “He very intentionally did not want to look too deeply into the anomalous aspects of the reports,” Strieber writes. “He felt his approach as a physician should be to not look beyond the narrative but to approach it as a source of information about the individual’s state.”

But what was Mack opening himself up to? What to make of all that abuse, pain, paralysis, loss of volition and forced ejaculation? In 1992, at a forum for work-in-progress, Mack explained, “There’s a great deal of curiosity they [the alien abductors] seem to have in staring at us, particularly in sexual situations. Often there are hybrid infants that seem to be the result of alien-human sexual cohabitation.”

Experiencers were traumatised, but not just traumatised. “When I got home,” said one South African experiencer, “it was like the world, all the trees would just go down, and there’d be no air and people would be dying.”

Experiencers reported a pressing, painful awareness of impending environmental catastrophe; also a tremendous sense of empathy, extending across the whole living world. Some felt optimistic, even euphoric: for they were now recruits in a project to save life on Earth, as part, they explained, of the aliens’ breeding programme.

John Mack championed hypnotic regression as a means of helping his clients recover buried memories. Ralph Blumenthal, a reporter for the New York Times, is careful not to use hindsight to condemn this approach, but as he explains, the satanic abuse scandals that erupted in the 1990s were to reveal just how easily false memories can be implanted, even inadvertently, in people made suggestible by hypnosis.

In May 1994 the Dean of Harvard Medical School appointed a committee of peers to confidentially review Mack’s interactions with experiencers. Mack was exonerated. Still, it was a serious and reputationally damaging shot across the bows, in a field coming to grips with the reality of implanted and false memories.

Passionate, unfaithful, a man for whom life was often “just a series of obligations”, Mack did not so much “go off the deep end” after that as wade, steadily and with determination, into ever deeper water. The saddest passage in Blumenthal’s book describes Mack’s trip in 2004 to Stonehenge in Wiltshire. Surrounded by farm equipment that could easily have been used to create them, Mack absorbs the cosmic energy of crop circles and declares, “There isn’t anybody in the world who’s going to convince me this is manmade.”

Blumenthal steers his narrative deftly between the crashing rocks of breathless credulity on the one hand, and psychoanalytic second-guessing on the other. Drop all mention of the extraterrestrials, and The Believer remains a riveting human document. Mack’s abilities, his brilliance, flaws, hubris, and mania, are anatomised with a painful sensitivity. Readers will close the book wiser than when they opened it, and painfully aware of what they do not and perhaps can never know about Mack, about extraterrestrials, and about the nature of truth.

Mack became a man easy to dismiss. His “experiencers” remain, however, “blurring ontological categories in defiance of all our understandings of how things operate in the world”. Time and again, Blumenthal comes back to this: there’s no pathology to explain them. Not alcoholism. Not mental illness. Not sexual abuse. Not even a desire for attention. Aliens are engaged in a breakneck planet-saving obstetric intervention, involving probes. You may not like it. You may point to the lack of any physical evidence for it. But — and here Blumenthal holds the reader quite firmly and thrillingly to the ontological razor’s edge — you cannot say it’s all in people’s heads. You have no solid reason at all, beyond incredulity, to suppose that abductees are telling you anything other than the truth.