How to prevent the future

Reading Gerd Gigerenzer’s How to Stay Smart in a Smart World for the Times, 26 February 2022

Some writers are like Moses. They see further than everybody else, have a clear sense of direction, and are natural leaders besides. These geniuses write books that show us, clearly and simply, what to do if we want to make a better world.

Then there are books like this one — more likeable, and more honest — in which the author stumbles upon a bottomless hole, sees his society approaching it, and spends 250-odd pages scampering about the edge of the hole yelling at the top of his lungs — though he knows, and we know, that society is a machine without brakes, and all this shouting comes far, far too late.

Gerd Gigerenzer is a German psychologist who has spent his career studying how the human mind comprehends and assesses risk. We wouldn’t have lasted even this long as a species if we didn’t negotiate day-to-day risks with elegance and efficiency. We know, too, that evolution will have forced us to formulate the quickest, cheapest, most economical strategies for solving our problems. We call these strategies “heuristics”.

Heuristics are rules of thumb, developed by extemporising upon past experiences. They rely on our apprehension of, and constant engagement in, the world beyond our heads. We can write down these strategies; share them; even formalise them in a few lines of light-weight computer code.

Here’s an example from Gigerenzer’s own work: Is there more than one person in that speeding vehicle? Is it slowing down as ordered? Is the occupant posing any additional threat?

Abiding by the rules of engagement set by this tiny decision tree reduces civilian casualties at military checkpoints by more than sixty per cent.
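The checkpoint questions above fit in a handful of lines of code, just as Gigerenzer says such heuristics can. Here is a minimal sketch of that fast-and-frugal tree in Python; the function name, arguments, and the exact branch outcomes are my own plausible reading of the three questions, not Gigerenzer's published specification.

```python
def checkpoint_decision(occupants: int, slowing_down: bool,
                        additional_threat: bool) -> str:
    """Fast-and-frugal tree: each question either settles the
    decision or hands off to the next question."""
    if occupants > 1:          # more than one person: most likely civilians
        return "hold fire"
    if slowing_down:           # complying with the order to stop
        return "hold fire"
    if not additional_threat:  # no further hostile sign
        return "hold fire"
    return "engage"
```

The point of such a tree is that it needs no data set at all: three observable cues, checked in order, and only the rare worst case reaches the final branch.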

We can apply heuristics to every circumstance we are likely to encounter, regardless of the amount of data available. The complex algorithms that power machine learning, on the other hand, “work best in well-defined, stable situations where large amounts of data are available”.

What happens if we decide to hurl 200,000 years of heuristics down the toilet, and kneel instead at the altar of occult computation and incomprehensibly big data?

Nothing good, says Gigerenzer.

How to Stay Smart is a number of books in one, none of which, on its own, is entirely satisfactory.

It is a digital detox manual, telling us how our social media are currently weaponised, designed to erode our cognition (but we can fill whole shelves with such books).

It punctures many a rhetorical bubble around much-vaunted “artificial intelligence”, pointing out how easy it is to, say, get a young man of colour held without bail using proprietary risk-assessment software. (In some notorious cases the software had been trained on, and so was liable to perpetuate, historical injustices.) Or would you prefer to force an autonomous car to crash by wearing a certain kind of T-shirt? (Simple, easily generated pixel patterns cause whole classes of networks to make bizarre inferential errors about the movement of surrounding objects.) This is enlightening stuff, or it would be, were the stories not quite so old.

One very valuable section explains why forecasts derived from large data sets become less reliable, the more data they are given. In the real world, problems are unbounded; the amount of data relevant to any problem is infinite. This is why past information is a poor guide to future performance, and why the future always wins. Filling a system with even more data about what used to happen will only bake in the false assumptions that are already in your system. Gigerenzer goes on to show how vested interests hide this awkward fact behind some highly specious definitions of what a forecast is.

But the most impassioned and successful of these books-within-a-book is the one that exposes the hunger for autocratic power, the political naivety, and the commercial chicanery that lie behind the rise of “AI”. (Healthcare AI is a particular bugbear: the story of how the Dutch Cancer Society was suckered into funding big data research, at the expense of cancer prevention campaigns that were shown to work, is especially upsetting).

Threaded through this diverse material is an argument Gigerenzer maybe should have made at the beginning: that we are entering a new patriarchal age, in which we are obliged to defer, neither to spiritual authority, nor to the glitter of wealth, but to unliving, unconscious, unconscionable systems that direct human action by aping human wisdom just well enough to convince us, but not nearly well enough to deliver happiness or social justice.

Gigerenzer does his best to educate and energise us against this future. He explains the historical accidents that led us to muddle cognition with computation in the first place. He tells us what actually goes on, computationally speaking, behind the chromed wall of machine-learning blarney. He explains why, no matter how often we swipe right, we never get a decent date; he explains how to spot fake news; and he suggests how we might claw our minds free of our mobile phones.

But it’s a hopeless effort, and the book’s most powerful passages explain exactly why it is hopeless.

“To improve the performance of AI,” Gigerenzer explains, “one needs to make the physical environment more stable and people’s behaviour more predictable.”

In China, the surveillance this entails comes wrapped in Confucian motley: under its social credit score system, sincerity, harmony and wealth creation trump free speech. In the West the self-same system, stripped of any ethic, is well advanced thanks to the efforts of the credit-scoring industry. One company, Acxiom, claims to have collected data from 700 million people worldwide, and up to 3000 data points for each individual (and quite a few are wrong).

That this bumper data harvest is an encouragement to autocratic governance hardly needs rehearsing, or so you would think.

And yet, in a 2021 study of 3,446 digital natives, 96 per cent “do not know how to check the trustworthiness of sites and posts.” I think Gigerenzer is pulling his punches here. What if, as seems more likely, 96 per cent of digital natives can’t be bothered to check the trustworthiness of sites and posts?

Asked by the author in a 2019 study how much they would be willing to spend each month on ad-free social media — that is, social media not weaponised against the user — 75 per cent of respondents said they would not pay a cent.

Have we become so trivial, selfish, short-sighted and penny-pinching that we deserve our coming subjection? Have we always been servile at heart, for all our talk of rights and freedoms; desperate for some grown-up to come tug at our leash, and bring us to heel?

You may very well think so. Gigerenzer could not possibly comment. He does, though, remark that operant conditioning (the kind of learning explored in the 1940s by behaviourist B F Skinner, that occurs through rewards and punishments) has never enjoyed such political currency, and that “Skinner’s dream of a society where the behaviour of each member is strictly controlled by reward has become reality.”

How to Stay Smart in a Smart World is an optimistic title indeed for a book that maps, with passion and precision, a hole down which we are already plummeting.

How to live an extra life

Reading Sidarta Ribeiro’s The Oracle of Night: The History and Science of Dreams for the Times, 2 January 2022

Early in January 1995 Sidarta Ribeiro, a Brazilian student of neuroscience, arrived in New York City to study for his doctorate at Rockefeller University. He rushed enthusiastically into his first meeting — only to discover he could not understand a word people were saying. He had, in that minute, completely forgotten the English language.

It did not return. He would turn up for work, struggle to make sense of what was going on, and wake up, hours later, on his supervisor’s couch. The colder and snowier the season became, the more impossible life got until, “when February came around, in the deep silence of the snow, I gave in completely and was swallowed up into the world of Morpheus.”

Ribeiro struggled into lectures so he didn’t get kicked out; otherwise he spent the entire winter in bed, sleeping; dozing; above all, dreaming.

April brought a sudden and extraordinary recovery. Ribeiro woke up understanding English again, and found he could speak it more fluently than ever before. He befriended colleagues easily, drove research, and, in time, announced the first molecular evidence of Freud’s “day residue” hypothesis, in which dreams exist to process memories of the previous day.

Ribeiro’s rich dream life that winter convinced him that it was the dreams themselves — and not just the napping — that had wrought a cognitive transformation in him. Yet dreams, it turned out, had fallen almost entirely off the scientific radar.

The last dream researcher to enter public consciousness was probably Sigmund Freud. Freud at least seemed to draw coherent meaning from dreams — dreams that had been focused to a fine point by fin de siecle Vienna’s intense milieu of sexual repression.

But Freud’s “royal road to the unconscious” has been eroded since by a revolution in our style of living. Our great-grandparents could remember a world without artificial light. Now we play on our phones until bedtime, then get up early, already focused on a day that is, when push comes to shove, more or less identical to yesterday. We neither plan our days before we sleep, nor do we interrogate our dreams when we wake. Is it any wonder, then, that our dreams are no longer able to inspire us? When US philosopher Owen Flanagan says that “dreams are the spandrels of sleep”, he speaks for almost all of us.

Ribeiro’s distillation of his life’s work offers a fascinating corrective to this reductionist view. His experiments have made Freudian dream analysis and other elements of psychoanalytic theory definitively testable for the first time — and the results are astonishing. There is material evidence, now, for the connection Freud made between dreaming and desire: both involve the selective release of the brain chemical dopamine.

The middle chapters of The Oracle of Night focus on the neuroscience, capturing, with rare candour, all the frustrations, controversies, alliances, ambiguities and accidents that make up a working scientist’s life.

To study dreams, Ribeiro explains, is to study memories: how they are received in the hippocampus, then migrate out through surrounding cortical tissue, “burrowing further and further in as life goes on, ever more extensive and resistant to disturbances”. This is why some memories can survive, even for more than a hundred years, in a brain radically altered by the years.

Ribeiro is an excellent communicator of detail, and this is important, given the size and significance of his claims. “At their best,” he writes, “dreams are the actual source of our future. The unconscious is the sum of all our memories and of all their possible combinations. It comprises, therefore, much more than what we have been — it comprises all that we can be.”

To make such a large statement stick, Ribeiro is going to need more than laboratory evidence, and so his scientific account is generously bookended with well-evidenced anthropological and archaeological speculation. Dinosaurs enjoyed REM sleep, apparently — a delightfully fiendish piece of deduction. And was the Bronze Age Collapse, around 1200 BC, triggered by a qualitative shift in how we interpreted dreams?

These are sizeable bread slices around an already generous Christmas-lunch sandwich. On page 114, when Ribeiro declares that “determining a point of departure for sleep requires that we go back 4.5 billion years and imagine the conditions in which the first self-replicating molecules appeared,” the poor reader’s heart may quail and their courage falter.

A more serious obstacle — and one quite out of Ribeiro’s control — is that friend (we all have one) who, feet up on the couch and both hands wrapped around the tea, baffs on about what their dreams are telling them. How do you talk about a phenomenon that’s become the preserve of people one would happily emigrate to avoid?

And yet, by taking dreams seriously, Ribeiro must also talk seriously about shamanism, oracles, prediction and mysticism. This is only reasonable, if you think about it: dreams were the source of shamanism (one of humanity’s first social specialisations), and shamanism in its turn gave us medicine, philosophy and religion.

When lives were socially simple and threats immediate, the relevance of dreams was not just apparent; it was impelling. Even a stopped watch is correct twice a day. With a limited palette of dream materials to draw from, was it really so surprising that Rome’s first emperor Augustus found his rise to power predicted by dreams — at least according to his biographer Suetonius? “By simulating objects of desire and aversion,” Ribeiro argues, “the dream occasionally came to represent what would in fact happen”.

Growing social complexity enriches dream life, but it also fragments it (which may explain all those complaints that the gods have fallen silent, which we find in texts dated between 1200 and 800 BC). The dreams typical of our time, says Ribeiro, are “a blend of meanings, a kaleidoscope of wants, fragmented by the multiplicity of desires of our age”.

The trouble with a book of this size and scale is that the reader, feeling somewhat punch-drunk, can’t help but wish that two or three better books had been spun from the same material. Why naps are good for us, why sleep improves our creativity, how we handle grief — these are instrumentalist concerns that might, under separate covers, have greatly entertained us. In the end, though, I reckon Ribeiro made the right choice. Such books give us narrow, discrete glimpses into the power of dreams, but leave us ignorant of their real nature. Ribeiro’s brick of a book shatters our complacency entirely, and for good.

Dreaming is a kind of thinking. Treating dreams as spandrels — as so much psychic “junk code” — is not only culturally illiterate — it runs against everything current science is telling us. You are a dreaming animal, says Ribeiro, for whom “dreams are like stars: they are always there, but we can only see them at night”.

Keep a dream diary, Ribeiro insists. So I did. And as I write this, a fortnight on, I am living an extra life.

“A moist and feminine sucking”

Reading Susan Wedlich’s Slime: A natural history for the Times, 6 November 2021

For over two thousand years, says science writer Susan Wedlich, quoting German historian Richard Hennig, maritime history has been haunted by mention of a “congealed sea”. Ships, it is said, have been caught fast and even foundered in waters turned to slime.

Slime stalks the febrile dreams of landlubbers, too: Jean-Paul Sartre succumbed to its “soft, yielding action, a moist and feminine sucking”, in a passage, lovingly quoted here, that had this reader instinctively scrabbling for the detergent.

We’ve learned to fear slime, in a way that would have seemed quite alien to the farmers of ancient Egypt, who supposed slime and mud were the base materials of life itself. So, funnily enough, did German zoologist Ernst Haeckel, a champion of Charles Darwin, who saw primordial potential in the gelatinous lumps being trawled from the sea floor by various oceanographic expeditions. (This turned out to be calcium sulphate, precipitated by the chemical reaction between deep-sea mud and alcohol used for the preservation of aquatic specimens. Haeckel never quite got over his disappointment.)

For Susan Wedlich, it is not enough that we should learn about slime; nor even that we should be entertained by it (though we jolly well are). Wedlich wants us to care deeply about slime, and musters all the rhetorical tools at her disposal to achieve her goal. “Does even the word ‘slime’ have to elicit gagging histrionics?” she exclaims, berating us for our phobia: “if we neither recognize nor truly know slime, how are we supposed to appreciate it or use it for our own ends?”

This is overdone. Nor do we necessarily know enough about slime to start shouting about it. To take one example, using slime to read our ecological future turns out to be a vexed business. There’s a scum of nutrients held together by slime floating on top of the oceans. A fraction of a millimetre thick, it’s called the “sea-surface micro-layer”. Global warming might be thinning it, or thickening it, and doing either might be increasing the chemical transport taking place between air and ocean — or retarding it — to unknown effect. So there: yet another thing to worry about.

For sure, slime holds the world together. Slimes, rather: there are any number of ways to stiffen water so that it acts as a lubricant, a glue, or a barrier. Whatever its origins, it is most conspicuous when it disappears — as when overtilling of America’s Great Plains caused the Dust Bowl in 1933, or when the gluey glycan coating of one’s blood vessels starts to mysteriously shear away during surgery.

There was a moment, in the 1920s, when slime shed its icky materiality and became almost cool. Artists both borrowed from and inspired Haeckel’s exquisite drawings of delicate maritime invertebrates. And biologists, looking for the mechanisms underpinning memory and heredity, would have liked nothing more than to find that the newly-identified protoplasm within our every cell was recording, like an Edison drum, the tremblings of a ubiquitous, information-rich aether. (Sounds crazy now, but the era was, after all, bathing in X-rays and other newly-discovered radiations.)

But slime’s moment of modishness passed. Now it’s the unlovely poster-child of environmental degradation: the stuff that will fill our soon-to-be-empty oceans, “home only to jellyfish, algae and microbial mats”, if we don’t do something sharpish to change our ecological ways.

Hand in hand with such millennial anxieties, of course, come the usual power fantasies: that we might harness all this unlovely slime — nothing more than water held in a cage of a few long-chain polymers — to transform our world, providing the base for new materials and soft robots, “transparent, stretchable, locomotive, biocompatible, remote-controlled, weavable, wearable, self-healing and shape-morphing, 3D-printed or improved by different ingredients”.

Wedlich’s enthusiasm is by no means misplaced. Slime is not just a largely untapped wonder material. It is also — really, truly — the source of life, and a key enabler of complex forms. We used to think the machinery of the first cells must have risen in clay hydrogels — a rather complicated and unlikely genesis — but it turns out that nucleic acids like DNA and RNA can sometimes form slimes on their own. Life, it turns out, does not need a substrate on which to arise. It is its own sticky home.

Slime’s effective barrier to pathogens may then have enabled complex tissues to differentiate and develop, slickly sequestered from a disease-ridden outside world. Wedlich’s tour of the human gut, and its multiple slime layers, (some lubricant, some gluey, and many armed with extraordinary electrostatic and molecular traps for one pathogen or another) is a tour de force of clear and gripping explanation.

Slime being, in essence, nothing more than stiffened water, there are more ways to make it than the poor reader could ever bear to hear about. So Wedlich very sensibly approaches her subject from the other direction, introducing slimes through their uses. Snails combine gluey and lubricating slimes to travel over dry ground one moment, cling to the underside of a leaf the next. Hagfish deter predators by jellifying the waters around them, shooting polymers from their skin like so many thousands of microscopic harpoons. Some squid, when threatened, add slime to their ink to create pseudomorphs — fake squidoids that hold together just long enough to distract a predator. Some squid pump out whole legions of such doppelgangers.

Wedlich’s own strategy, in writing Slime, is not dissimilar. She’s deliberately elusive. The reader never really feels they’ve got hold of the matter of her book; rather, they’re being provoked into punching through layer after dizzying layer, through masterpieces of fin de siecle glass-blowing into theories about the spontaneous generation of life, through the lifecycles of carnivorous plants into the tactics of Japanese balloon-bomb designers in the second world war, until, dizzy and gasping, they reach the end of Wedlich’s extraordinary mystery tour, not with a handle on slime exactly, but with an elemental and exultant new vision of what life may be: that which arises when the boundaries of earth, air and water are stirred in sunlight’s fire. It’s a vision that, for all its weight of well-marshalled modern detail, is one Aristotle would have recognised.

Life dies at the end

Reading Henry Gee’s A (Very) Short History of Life on Earth for the Times, 23 October 2021

The story of life on Earth is around 4.6 billion years long. We’re here to witness the most interesting bit (of course we are; our presence makes it interesting) and once we’re gone (wiped out in an eyeblink, or maybe, just maybe, speciated out of all recognition) the story will run on, and run down, for about another billion years, before the Sun incinerates the Earth.

It’s an epic story, and like most epic stories, it cries out for a good editor. In Henry Gee, a British palaeontologist and senior editor of the scientific journal Nature, it has found one. But Gee has his work cut out. The story doesn’t really get going until the end. The first two thirds are about slime. And once there are living things worth looking at, they keep keeling over. All the interesting species burn up and vanish like candles lit at both ends. Humans (the only animal we know of that’s even aware that this story exists) will last no time at all. And the five extinction events this planet has so far undergone might make you seriously wonder why life bothered in the first place.

We are told, for example, how two magma plumes in the late Permian killed this story just as it got going, wiping out nineteen out of every twenty species in the sea, and one out of every ten on land. It would take humans another 500 years of doing exactly what they’ve been doing since the Industrial Revolution to cause anything like that kind of damage.

A word about this: we have form in wiping things out and then regretting their loss (mammoths, dodos, passenger pigeons). And we really must stop mucking about with the chemistry of the air. But we’re not planet-killers. “It is not the Sixth Extinction,” Henry Gee reassures us. “At least, not yet.”

It’s perhaps a little bit belittling to cast Gee’s achievement here as mere “editing”. Gee’s a marvellously engaging writer, juggling humour, precision, polemic and poetry to enrich his impossibly telescoped account. His description of the lycopod forests that are the source of nearly all our coal — and whose trees grew only to reproduce, exploding into a crown of spore-bearing branches — brings to mind a battlefield of the First World War, a “craterscape of hollow stumps, filled with a refuse of water and death… rising from a mire of decay.” A little later a Lystrosaurus (a distant ancestor of mammals, and the most successful land animal ever) is sketched as having “the body of a pig, the uncompromising attitude toward food of a golden retriever, and the head of an electric can opener”.

Gee’s book is full of such dazzling walk-on parts, but most impressive are the elegant numbers he traces across evolutionary time. Here’s one: dinosaurs, unlike mammals, evolved a highly efficient one-way system for breathing that involved passing spent air through sacs distributed inside their bodies. They were air-cooled, which meant they could get very big without cooking themselves. They were lighter than they looked, literally full of hot air, and these advantages — lightweight structure, fast-running metabolism, air cooling — made their evolution into birds possible.

Here’s another tale: the make-up of our teeth — enamel over dentine over bone — is the same as you’d find in the armoured skin of the earliest fishes.

To braid such interconnected wonders into a book the size of a modest novel is essentially an exercise in precis, and a bravura demonstration of the editor’s art. Though the book (whose virtue is its brevity) is not illustrated, there are six timelines to guide us through the scalar shifts necessary to comprehend the staggering longueurs involved in bringing a planet to life. Life was entirely stationary and mostly slimy until only about 600 million years ago. Just ten million years ago, grasses evolved, and with them, grazing animals and their predators, some of whom, the primates, were on their way to making us. The earliest Sapiens appeared just over half a million years ago. Only when sea levels fell, around 120,000 years ago, did Sapiens get to migrate around the planet.

As one reads Gee’s “(very) short history”, one feels time slowing down and growing more granular. This deceleration gives Gee the space he needs to depict the burgeoning complexity of life as it spreads and evolves. It’s a scalar game that’s reminiscent of Charles and Ray Eames’s 1977 film *Powers of Ten*, which depicted the relative scale of the Universe by zooming in (through the atom) and out (through the cosmos) at logarithmic speed. It’s a dizzying and exhilarating technique which, for all that, makes clear sense out of very complex narratives.

Eventually — and long after we are gone — life will retreat beneath the earth as the swelling sun makes conditions on the planet’s surface impossible. The distinctions between things will fall away as life, struggling to live, becomes colossal, colonial and homogenous. Imagine vast subterranean figs, populated by evolved, worm-like insects…

Then, your mind reeling, try and work out what on earth people mean when they say that humans have conquered and/or despoiled the planet.

Our planet deserves our care, for sure, because we have to live here. But the planet has yet to register our existence, and probably never will. We are, Gee explains, just two and a half million years into a series of ice ages that will last for tens of millions of years more. Our species’ story extends not much beyond one of these hundreds of cycles. The human-induced injection of carbon dioxide “will set back the date of the next glacial advance” — and that is all. 250 million years hence, any future prospectors (and they won’t be human), armed with equipment “of the most refined sensitivity”, might — just might — be able to detect that, a short way through the Cenozoic Ice Age, *something happened*, “but they might be unable to say precisely what.”

It takes a long time to bring complex life to a planet, and complex life, once it runs out of wriggle room, collapses in an instant. Humans already labour under a considerable “extinction debt” since they have made their habitat (“nothing less than the entire Earth”) progressively less habitable. Most everything that ever went extinct fell into the same trap. What makes our case tragic is that we’re conscious of what we’ve done; we’re trying to do something about it; and we know that, in the long run, it will never be enough.

Gee’s final masterstroke as editor is to make human sense, and real tragedy, from his unwieldy story’s glaring spoiler: that Life dies at the end.

“Grotesque, awkward, and disagreeable”

Reading Stanislaw Lem’s Dialogues for the Times, 5 October 2021

Some writers follow you through life. Some writers follow you beyond the grave. I was seven when Andrei Tarkovsky filmed Lem’s satirical sci-fi novel Solaris, thirty seven when Steven Soderbergh’s very different (and hugely underrated) Solaris came out, forty when Lem died. Since then, a whole other Stanislaw Lem has arisen, reflected in philosophical work that, while widely available elsewhere, had to wait half a century or more for an English translation. In life I have nursed many regrets: that I didn’t learn Polish is not the least of them.

The point about Lem is that he writes about the future, predicting the way humanity’s inveterate tinkering will enable, pervert and frustrate its ordinary wants and desires. This isn’t “the future of technology” or “the future of the western world” or “the future of the environment”. It’s neither “the future as the author would like it to be”, nor “the future if the present moment outstayed its welcome”. Lem knows a frightening amount of science, and even more about technology, but what really matters is what he knows about people. His writing is not just surprisingly prescient; it’s timeless.

Dialogues is about cybernetics, the science of systems. A system is any material arrangement that responds to environmental feedback. A steam engine is a mere mechanism, until you add the governor that controls its internal pressure. Then it becomes a system. When Lem was writing, systems thinking was meant to transform everything, conciliating between the physical sciences and the humanities to usher in a technocratic Utopia.

Enthusiastic as 1957-vintage Lem was, there is something deliciously levelling about how he introduces the cybernetic idea. We can bloviate all we like about using data and algorithms to create a better society; what drives Philonous and Hylas’s interest in these eight dialogues (modelled on Berkeley’s Three Dialogues of 1713) is Hylas’s desperate desire to elude Death. This new-fangled science of systems reimagines the world as information, and the thing about information is that it can be transmitted, stored and (best of all) copied. Why then can’t it transmit, store and copy poor Death-haunted Hylas?

Well, of course, that’s certainly do-able, Philonous agrees — though Hylas might find cybernetic immortality “grotesque, awkward, and disagreeable”. Sure enough, Hylas baulks at Philonous’s culminating vision of humanity immortalised in serried ranks of humming metal cabinets.

This image certainly was prescient: Cybernetics was supposed to be a philosophy, one that would profoundly change our understanding of the animate and inanimate world. The philosophy failed to catch on, but its insights created something utterly unexpected: the computer.

Dialogues is important now because it describes (or described, rather, more than half a century ago — you can almost hear Lem’s slow hand-clapping from the Beyond) all the ways we do not comprehend the world we have made.

Cybernetics teaches us that systems are animate. It doesn’t matter what a system is made from. Workers in an office, ones and zeroes clouding a chip, proteins folding and refolding in a living cell, strings and pulleys in a playground: all are good building materials for systems, and once a system is up and running, it is no longer reducible to its parts. It’s a distinct, unified whole, shaped by its past history, actively coexisting with its environment, and exhibiting behaviour that cannot be precisely predicted from its structure. “If you insist on calling this new system a mechanism,” Lem remarks, drily, “then you must apply that term to living beings as well.”

We’ve yet to grasp this nettle: that between the living and non-living worlds sits a world of systems, unalive yet animate. No wonder, lacking this insight, we spend half our lives sneering at the mechanisms we do understand (“Alexa, stop calling my Mum!”) and the other half on our knees, worshipping the mechanisms we don’t. (“It says here on Facebook…”) The very words we use — “artificial intelligence” indeed! — reveal the paucity of our understanding.

Lem understood, as no-one then or since has understood, how undeserving of worship are the systems (be they military, industrial or social) that are already strong enough to determine our fate. A couple of years ago, around the time Hong Kong protesters were destroying facial recognition towers, a London pedestrian was fined £90 for hiding his face from an experimental Met camera. The consumer credit reporting company Experian uses machine learning to decide the financial trustworthiness of over a billion people. China’s Social Credit System (actually the least digitised of China’s surveillance systems) operates under multiple, often contradictory legal codes.

The point about Lem is not that he was terrifyingly smart (though he was that); it’s that he had skin in the game. He was largely self-taught, because he had to quit university after writing satirical pieces about Soviet poster-boy Trofim Lysenko (who denied the existence of genes). Before that, he was dodging Nazis in Lviv (and mending their staff cars so that they would break down). In his essay “Applied Cybernetics: An Example from Sociology”, Lem uses the new-fangled science of systems to anatomise the Soviet thinking of his day, and from there, to explain how totalitarianism is conceived, spread and performed. Worth the price of the book in itself, this little essay is a tour de force of human sympathy and forensic fury, shorter than Solzhenitsyn, and much, much funnier than Hannah Arendt.

Peter Butko’s translations of the Dialogues, and the revisionist essays Lem added to the 1971 second edition, are as witty and playful as Lem’s allusive Polish prose demands. His endnotes are practically a book in themselves (and an entertaining one, too).

Translated so well, Lem needs no explanation, no contextualisation, no excuse-making. Lem’s expertise lay in technology, but his loyalty lay with people, in all their maddening tolerance for bad systems. “There is nothing easier than to create a state in which everyone claims to be completely satisfied,” he wrote; “being stretched on the bed, people would still insist — with sincerity — that their life is perfectly fine, and if there was any discomfort, the fault lay in their own bodies or in their nearest neighbor.”


“It’s wonderful what a kid can do with an Erector Set”

Reading Across the Airless Wilds by Earl Swift for the Times, 7 August 2021

There’s something about the moon that encourages, not just romance, not just fancy, but also a certain silliness. It was there in spades at the conference organised by the American Rocket Society in Manhattan in 1961. Time Magazine delighted in this “astonishing exhibition of the phony and the competent, the trivial and the magnificent.” (“It’s wonderful what a kid can do with an Erector Set”, one visiting engineer remarked.)

But the designs on show there were hardly any more bizarre than those put forward by the great minds of the era. The German rocket pioneer Hermann Oberth wrote an entire book advocating a moon car that could, if necessary, pogo-stick about the satellite. When Howard Seifert, the American Rocket Society’s president, advocated abandoning the car and preserving the pogo stick — well, Seifert’s “platform” might not have made it to the top of NASA’s favoured designs for a moon vehicle, but it was taken seriously.

Earl Swift is not above a bit of fun and wonder, but the main job of Across the Airless Wilds (a forbiddingly po-faced title for such an enjoyable book) is to explain how the oddness of the place — barren, airless, and boasting just one-sixth Earth’s gravity — tended to favour some very odd design solutions. True, NASA’s lunar rover, which actually flew on the last three Apollo missions, looks relatively normal, like a car (or at any rate, a go-kart). But this was really to do with weight constraints, budgets and historical accidents; a future in which the moon is explored by pogo-stick is still not quite out of the running.

For all its many rabbit-holes, this is a clear and compelling story about three men: Sam Romano, boss of General Motors’s lunar program, his visionary off-road specialist Mieczyslaw Gregory Bekker (Greg to his American friends) and Greg’s invaluable engineer Ferenc (Frank) Pavlics. These three were toying with the possibility of moon vehicles a full two years before the US boasted any astronauts, and the problems they confronted were not trivial. Until Bekker came along, tyres, wheels and tracks for different surfaces were developed more or less through informed trial and error. It was Bekker who treated off-roading as an intellectual puzzle as rigorous as the effort to establish the relationship between a ship’s hull and water, or a plane’s wing and the air it rides.

Not that rigour could gain much toe-hold in the early days of lunar design, since no-one could be sure what the consistency of the moon’s surface actually was. It was probably no dustier than an Earthbound desert, but there was always the nagging possibility that a spacecraft and its crew, landing on a convenient lunar plain, might vanish into some ghastly talcum quicksand.

On 3 February 1966 the Soviet probe Luna 9 put paid to that idea, settling, firmly and without incident, onto the Ocean of Storms. Though their plans for a manned mission had been abandoned, the Soviets were no bit player. Four years later it was an eight-wheel Soviet robot, Lunokhod 1, that first drove across the moon’s surface. Seven feet long and four feet tall, it upstaged NASA’s rovers nicely, with its months and miles of journey time, 25 soil samples and literally thousands of photographs.

Meanwhile NASA was having to re-imagine its Lunar Roving Vehicle any number of times, as it sought to wring every possible ounce of value from a programme that was being slashed by Congress a good year before Neil Armstrong even set foot on the Moon.

Conceived when it was assumed Apollo would be the first chapter in a long campaign of exploration and settlement, the LRV was being shrunk and squeezed and simplified to fit through an ever-tightening window of opportunity. This is the historical meat of Swift’s book, and he handles the technical, institutional and commercial complexities of the effort with a dramatist’s eye.

Apollo was supposed to pave the way for two-rocket missions. When they vanished from the schedule, the rover’s future was thrown into doubt. Without a second Saturn to carry cargo, any rover bound for the moon would have to travel on the same lunar module that carried the crew. No-one knew if this was even possible.

There was, however, one wedge-shaped cavity still free between the descent stage’s legs: an awkward triangle “about the size and shape of a pup tent standing on its end.” So it was that the LRV, which once boasted six wheels and a pressurised cabin, ended up the machine a Brompton folding bike wants to be when it grows up.

Ironically, it was NASA’s dwindling prospects post-Apollo that convinced its managers to origami something into that tiny space, just a shade over seventeen months prior to launch. Why not wring as much value out of Apollo’s last missions as possible?

The result was a triumph, though it maybe didn’t look like one. Its seats were basically deckchairs. It had neither roof, nor body. There was no steering wheel, just a T-bar the astronaut leant on. It weighed no more than one fully kitted-out astronaut, and its electric motors ground out just one horsepower. On the flat, it reached barely ten miles an hour.

But it was superbly designed for the moon, where a turn at 6MPH had it fishtailing like a speedboat, even as it bore more than twice its weight around an area the size of Manhattan.

In a market already oversaturated with books celebrating the 50th anniversary of Apollo in 2019 (many of them very good indeed) Swift finds his niche. He’s not narrow: there’s plenty of familiar context here, including a powerful sketch of the former Nazi rocket scientist Wernher von Braun. He’s not especially folksy, or willfully eccentric: the lunar rover was a key element in the Apollo program, and he wants it taken seriously. Swift finds his place by much more ingenious means — by up-ending the Apollo narrative entirely (he would say he was turning it right-side up) so that every earlier American venture into space was preparation for the last three trips to the moon.

He sets out his stall early, drawing a striking contrast between the travails of Apollo 14 astronauts Alan Shepard Jr and Edgar Mitchell — slugging half a mile up the wall of the wrong crater, dragging a cart — and the vehicular hijinks of Apollo 15’s Dave Scott and Jim Irwin, crossing a mile of hummocky, cratered terrain rimmed on two sides by mountains the size of Everest, to a spectacular gorge, then following its edge to the foot of a huge mountain, then driving up its side.

Detailed, thrilling accounts of the two subsequent Rover-equipped Apollo missions, Apollo 16 in the Descartes highlands and Apollo 17 in the Taurus-Littrow Valley, carry the pointed message that the viewing public began to tune out of Apollo just as the science, the tech, and the adventure had gotten started.

Swift conveys the baffling, unreadable lunar landscape very well, but Across the Airless Wilds is above all a human story, and a triumphant one at that, about NASA’s most-loved machine. “Everybody you meet will tell you he worked on the rover,” remarks Eugene Cowart, Boeing’s chief engineer on the project. “You can’t find anybody who didn’t work on this thing.”

Nothing but the truth

Reading The Believer by Ralph Blumenthal for the Times, 24 July 2021

In September 1965 John Fuller, a columnist for the Saturday Review in New York, was criss-crossing Rockingham County in New Hampshire in pursuit of a rash of UFO sightings, when he stumbled upon a darker story — one so unlikely, he didn’t follow it up straight away.

Not far from the local Pease Air Force base, a New Hampshire couple had been abducted and experimented upon by aliens.

Every few years, ever since the end of the Second World War, others had claimed similar experiences. But they were few and scattered, their accounts were incredible and florid, and there was never any corroborating physical evidence for their allegations. It took decades before anyone in academia took an interest in their plight.

In January 1990 the artist Budd Hopkins, whose Intruders Foundation provided support for “experiencers” — alleged victims of alien abduction — was visited by John Edward Mack, head of psychiatry at Harvard’s medical school. Mack’s interest had been piqued by his friend the psychoanalyst Robert Lifton. An old hand at treating severe trauma, particularly among Hiroshima survivors and Vietnam veterans, Lifton found himself stumped when dealing with experiencers: “It wasn’t clear to me or to anybody else exactly what the trauma was.”

Mack was immediately intrigued. Highly strung, narcissistic, psychologically damaged by his mother’s early death, Mack needed a deep intellectual project to hold himself together. He was interested in how perceptions and beliefs about reality shape society. A Prince of Our Disorder, his Pulitzer Prize-winning psychological biography of T E Lawrence, was his most intimate statement on the subject. Work on the psychology of the Cold War had drawn him into anti-nuclear activism, and close association with the International Physicians for the Prevention of Nuclear War, which won a Nobel peace prize in 1985. The institutions he created to explore the frontiers of human experience survive today in the form of the John E. Mack Institute, dedicated “to further[ing] the evolution of the paradigms by which we understand human identity”.

Just as important, though, Mack enjoyed helping people, and he was good at it. In 1964 he had established mental health services in Cambridge, Mass., where hundreds of thousands were without any mental health provision at all. As a practitioner, he had worked particularly with children and adolescents, had treated suicidal patients, and published research on heroin addiction.

Whitley Strieber (whose book Communion, about his own alien abduction, is one of the single most disturbing books ever to reach the bestseller lists) observed how Mack approached experiencers: “He very intentionally did not want to look too deeply into the anomalous aspects of the reports,” Strieber writes. “He felt his approach as a physician should be to not look beyond the narrative but to approach it as a source of information about the individual’s state.”

But what was Mack opening himself up to? What to make of all that abuse, pain, paralysis, loss of volition and forced ejaculation? In 1992, at a forum for work-in-progress, Mack explained, “There’s a great deal of curiosity they [the alien abductors] seem to have in staring at us, particularly in sexual situations. Often there are hybrid infants that seem to be the result of alien-human sexual cohabitation.”

Experiencers were traumatised, but not just traumatised. “When I got home,” said one South African experiencer, “it was like the world, all the trees would just go down, and there’d be no air and people would be dying.”

Experiencers reported a pressing, painful awareness of impending environmental catastrophe; also a tremendous sense of empathy, extending across the whole living world. Some felt optimistic, even euphoric: for these were now recruited into a project to save life on Earth, as part, they explained, of the aliens’ breeding programme.

John Mack championed hypnotic regression, as a means of helping his clients discover buried memories. Ralph Blumenthal, a reporter for the New York Times, is careful not to use hindsight to condemn this approach, but as he explains, the satanic abuse scandals that erupted in the 1990s were to reveal just how easily false memories can be implanted, even inadvertently, in people made suggestible by hypnosis.

In May 1994 the Dean of Harvard Medical School appointed a committee of peers to confidentially review Mack’s interactions with experiencers. Mack was exonerated. Still, it was a serious and reputationally damaging shot across the bows, in a field coming to grips with the reality of implanted and false memories.

Passionate, unfaithful, a man for whom life was often “just a series of obligations”, Mack did not so much “go off the deep end” after that as wade, steadily and with determination, into ever deeper water. The saddest passage in Blumenthal’s book describes Mack’s trip in 2004 to Stonehenge in Wiltshire. Surrounded by farm equipment that could easily have been used to create them, Mack absorbs the cosmic energy of crop circles and declares, “There isn’t anybody in the world who’s going to convince me this is manmade.”

Blumenthal steers his narrative deftly between the crashing rocks of breathless credulity on the one hand, and psychoanalytic second-guessing on the other. Drop all mention of the extraterrestrials, and The Believer remains a riveting human document. Mack’s abilities, his brilliance, flaws, hubris, and mania, are anatomised with a painful sensitivity. Readers will close the book wiser than when they opened it, and painfully aware of what they do not and perhaps can never know about Mack, about extraterrestrials, and about the nature of truth.

Mack became a man easy to dismiss. His “experiencers” remain, however, “blurring ontological categories in defiance of all our understandings of how things operate in the world”. Time and again, Blumenthal comes back to this: there’s no pathology to explain them. Not alcoholism. Not mental illness. Not sexual abuse. Not even a desire for attention. Aliens are engaged in a breakneck planet-saving obstetric intervention, involving probes. You may not like it. You may point to the lack of any physical evidence for it. But — and here Blumenthal holds the reader quite firmly and thrillingly to the ontological razor’s edge — you cannot say it’s all in people’s heads. You have no solid reason at all, beyond incredulity, to suppose that abductees are telling you anything other than the truth.

An engine for understanding

Reading Fundamentals by Frank Wilczek for the Times, 2 January 2021

It’s not given to many of us to work at the bleeding edge of theoretical physics, discovering for ourselves the way the world really works.

The nearest most of us will ever get is the pop-science shelf, and this has been dominated for quite a while now by the lyrical outpourings of the Italian theoretical physicist Carlo Rovelli. Rovelli’s forthcoming book, Helgoland, promises to have his reader tearing across a universe made, not of particles, but of the relations between them.

It’s all too late, however: Frank Wilczek’s Fundamentals has gazumped Rovelli handsomely, with a vision that replaces our classical idea of physical creation — “atoms and the void” — with one consisting entirely of spacetime, self-propagating fields and properties.

Born in 1951 and awarded the Nobel Prize in Physics in 2004 for figuring out why atoms don’t just fly apart, Wilczek is out to explain why “the history of Sweden is more complicated than the history of the universe”. The ingredients of the universe are surprisingly simple, but their fates, playing out through time in accordance with just a handful of rules, generate a world of unimaginable complexity, contingency and abundance. Measures of spin, charge and mass allow us to describe the whole of physical reality, but they won’t help us at all in depicting, say, the history of the royal house of Bernadotte.

Wilczek’s “ten keys to reality”, mentioned in his subtitle, aren’t to do with the 19 or so physical constants that exercised Martin Rees, the UK’s Astronomer Royal, in his 1990s pop-science heyday. The focus these days has shifted more to the spirit of things. When Wilczek describes the behaviour of electrons around an atom, for example, gone are the usual Bohr-ish mechanics, in which electrons leap from one nuclear orbit to another. Instead we get a vibrating cymbal, the music of the spheres, a poetic understanding of fields, and not a fragment of matter in sight.

So will you plump for the Wilczek, or will you wait for the Rovelli? A false choice, of course; this is not a race. Popular cosmology is more like the jazz scene: the facts (figures, constants, models) are the standards everyone riffs off. After one or two exposures you find yourself returning for the individual performances: their poetry, their unique expression.

Wilczek’s ten keys are more like ten book ideas, exploring the spatial and temporal abundance of the universe; how it all began; the stubborn linearity of time; how it all will end. What should we make of his decision to have us swallow the whole of creation in one go?

In one respect this book was inevitable. It’s what people of Wilczek’s peculiar genius and standing do. There’s even a sly name for the effort: the philosopause. The implication here being that Wilczek has outlived his most productive years and is now pursuing philosophical speculations.

Wilczek is not short of insights. His idea of what the scientific method consists of is refreshingly robust: a style of thinking that “combines the humble discipline of respecting the facts and learning from Nature with the systematic chutzpah of using what you think you’ve learned aggressively”. If you apply what you think you’ve discovered everywhere you can, even in situations that have nothing to do with your starting point, then, if it works, “you’ve discovered something useful; if it doesn’t, then you’ve learned something important.”

However, works of the philosopause are best judged on character. Richard Dawkins seems to have discovered, along with Johnny Rotten, that anger is an energy. Martin Rees has been possessed by the shade of that dutiful bureaucrat C P Snow. And in this case? Wilczek, so modest, so straight-dealing, so earnest in his desire to conciliate between science and the rest of culture, turns out to be a true visionary, writing — as his book gathers pace — a human testament to the moment when the discipline of physics, as we used to understand it, came to a stop.

Wilczek’s is the first generation whose intelligence — even at the far end of the bell-curve inhabited by genius — is insufficient to conceptualise its own scientific findings. Machines are even now taking over the work of hypothesis-making and interpretation. “The abilities of our machines to carry lengthy yet accurate calculations, to store massive amounts of information, and to learn by doing at an extremely fast pace,” Wilczek explains, “are already opening up qualitatively new paths toward understanding. They will move the frontier of knowledge in directions, and arrive at places, that unaided human brains can’t go.”

Or put it this way: physicists can pursue a Theory of Everything all they like. They’ll never find it, because if they did find it, they wouldn’t understand it.

Where does that leave physics? Where does that leave Wilczek? His response is gloriously matter-of-fact:

“… really, this should not come as fresh news. Humans themselves know many things that are not available to human consciousness, such as how to process visual information at incredible speeds, or how to make their bodies stay upright, walk and run.”

Right now physicists have come to the conclusion that the vast majority of mass in the universe reacts so weakly to the bits of creation we can see, we may never know its nature. Though Wilczek makes a brave stab at the problem of so-called “dark matter”, he is equally prepared to accept that a true explanation may prove incomprehensible.

Human intelligence turns out to be just one kind of engine for understanding. Wilczek would have us nurture it and savour it, and not just for what it can do, but because it is uniquely ours.

“I heard the rustling of the dress for two whole hours”

By the end of the book I had come to understand why kindness and cruelty cannot vanquish each other, and why, irrespective of our various ideas about social progress, our sexual and gender politics will always teeter, endlessly and without remedy, between “Orwellian oppression and the Hobbesian jungle”…

Reading Strange Antics: A history of seduction by Clement Knox, 1 February 2020

“If we’re going to die, at least give us some tits”

The Swedes are besieging the city of Brno. A bit of Googling reveals the year to be 1645. Armed with pick and shovel, the travelling entertainer Tyll Ulenspiegel is trying to undermine the Swedish redoubts when the shaft collapses, plunging him and his fellow miners into utter darkness. It’s difficult to establish even who is still alive and who is dead. “Say something about arses,” someone begs the darkness. “Say something about tits. If we’re going to die, at least give us some tits…”

Reading Daniel Kehlmann’s Tyll for the Times, 25 January 2020