Heading north

Reading Forecast by Joe Shute for the Telegraph, 28 June 2021

As a child, journalist Joe Shute came upon four Ladybird nature books from the early 1960s called What to Look For. They described “a world in perfect balance: weather, wildlife and people all living harmoniously as the seasons progress.”

Today, he writes, “the crisply defined seasons of my Ladybird series, neatly quartered like an apple, are these days a mush.”

Forecast is a book about phenology: the study of lifecycles, and how they are affected by season, location and other factors. Unlike behemothic “climate science”, phenology doesn’t issue big data sets or barnstorming visualisations. Its subject cannot be so easily metricised. How life responds to changes in the seasons, and changes in those changes, and changes in the rates of those changes, is a multidimensional study whose richness would be entirely lost if abstracted. Instead, phenology depends on countless parochial diaries describing changes on small patches of land.

Shute, who for more than a decade has used his own diary to fuel the “Weather Watch” column in the Daily Telegraph, can look back and see “where the weather is doing strange things and nature veering spectacularly off course.” Watching his garden coming prematurely to life in late winter, Shute is left “with a slightly sickly sensation… I started to sense not a seasonal cycle, but a spiral.” (130)

Take Shute’s diary together with countless others and tabulate the findings, and you will find that all life has started shifting northwards — insects at a rate of five metres a day, some dragonflies at between 17 and 28 metres a day.

How to write about this great migration? Immediately following several affecting and quite horrifying eye-witness scenes from the global refugee crisis, Shute writes: “The same climate crisis that is rendering swathes of the earth increasingly inhospitable and driving so many young people to their deaths, is causing a similar decline in migratory bird populations.”

I’m being unkind to make a point (in context the passage isn’t nearly so wince-making), but Shute’s not the first to discover it’s impossible to speak across all scales of the climate crisis at once.

Amitav Ghosh’s 2016 The Great Derangement is canonical here. Ghosh explained in painful detail why the traditional novel can’t handle global warming. Here, Shute seems to be proving the same point for non-fiction — or at least, for non-fiction of the meditative sort.

Why doesn’t Shute reach for abstractions? Why doesn’t he reach for climate science, and for the latest IPCC report? Why doesn’t he bloviate?

No, Shute’s made of sterner stuff: he would rather go down with his coracle, stitching together a planet on fire (11 wildfires raging in the Arctic Circle in July 2018), human catastrophe, bird armageddon, his and his partner’s fertility problems, and the snore of a sleeping dormouse, across just 250 pages.

And the result? Forecast is a triumph of the most unnerving sort. By the end it’s clearly not Shute’s book that’s coming unstuck: it’s us. Shute begins his book asking “what happens to centuries of folklore, identity and memory when the very thing they subsist on is changing, perhaps for good”, and the answer he arrives at is horrific: folklore, identity and memory just vanish. There is no reverse gear to this thing.

I was delighted (if that is quite the word) to see Shute nailing the creeping unease I’ve felt every morning since 2014. That was the year the Met Office decided to give storms code-names. The reduction of our once rich, allusive weather vocabulary to “weather bombs” and “thunder snow”, as though weather events were best captured in “the sort of martial language usually preserved for the defence of the realm” is Shute’s most telling measure of how much, in this emergency, we have lost of ourselves.


Reading Sentient by Jackie Higgins for the Times, 19 June 2021

In May 1971 a young man from Portsmouth, Ian Waterman, lost all sense of his body. He wasn’t just numb. A person has a sense of the position of their body in space. In Waterman, that sense fell away, mysteriously and permanently.

Waterman, now in his seventies, has learned to operate his body rather as the rest of us operate a car. He has executive control over his movements, but no very intimate sense of what his flesh is up to.

What must this be like?

In a late chapter of her epic account of how the senses make sense, and exhibiting the kind of left-field thinking that makes for great TV documentaries, writer-director-producer Jackie Higgins goes looking for answers among the octopuses.

The octopus’s brain, you see, has no fine control over its arms. They pretty much do their own thing. They do, though, respond to the occasional high-level executive order. “Tally-ho!” cries the brain, and the arms gallop off, the brain in no more (or less) control of its transport system than a girl on a pony at a gymkhana.

Is being Ian Waterman anything like being an octopus? Attempts to imagine our way into other animals’ experiences — or other people’s experience, for that matter — have for a long time fallen under the shadow of an essay written in 1974 by American philosopher Thomas Nagel.

“What Is It Like to Be a Bat?” wasn’t about bats so much as about consciousness (the continuity of it). I can, with enough tequila inside me, imagine what it would be like for me to be a bat. But that’s not the same as knowing what it’s like for a bat to be a bat.

Nagel’s lesson in gloomy solipsism is all very well in philosophy. Applied to natural history, though — where even a vague notion of what a bat feels like might help a naturalist towards a moment of insight — it merely sticks the perfect in the way of the good. Every sparky natural history writer cocks a snook at poor Nagel whenever the opportunity arises.

Advances in media technology over the last twenty years (including, for birds, tiny monitor-stuffed backpacks) have deluged us in fine-grained information about how animals behave. We now have a much better idea of what (and how) they feel.

Now, you can take this sort of thing only so far. The mantis shrimp (not a shrimp; a scampi) has up to sixteen kinds of narrow-band photoreceptor, each tuned to a different wavelength of light! Humans only have three. Does this mean that the mantis shrimp enjoys better colour vision than we do?

Nope. The mantis shrimp is blind to colour, in the human sense of the word, perceiving only wavelengths. The human brain, meanwhile, by processing the relative intensities registered by its three kinds of receptor, distinguishes between millions of colours. (Some women have four colour receptors, which is why you should never argue with a woman about which curtains match the sofa.)

What about the star-nosed mole, whose octopus-like head is a mass of feelers? (Relax: it’s otherwise quite cute, and only about 2cm long.) Its weird nose is sensitive: it gathers the same amount of information about what it touches, as a regular rodent’s eye gathers about what it sees. This makes the star-nosed mole the fastest hunter we know of, identifying and capturing prey (worms) in literally less than an eyeblink.

What can such a creature tell us about our own senses? A fair bit, actually. That nose is so sensitive, the mole’s visual cortex is used to process the information. It literally sees through its nose.

But that turns out not to be so very strange: Braille readers, for example, really do read through their fingertips, harnessing their visual cortex to the task. One veteran researcher, Paul Bach-y-Rita, has been building prosthetic eyes since the 1970s, using glorified pin-art machines to (literally) impress the visual world upon his volunteers’ backs, chests, even their tongues.

From touch to sound: in the course of learning about bats, I learned here that blind people have been using echolocation for years, especially when it rains (more auditory information, you see); researchers are only now getting a measure of their abilities.

How many senses are there that we might not have noticed? Over thirty, it seems, all served by dedicated receptors, and many of them elude our consciousness entirely. (We may even share the magnetic sense enjoyed by migrating birds! But don’t get too excited. Most mammals seem to have this sense. Your pet dog almost always pees with its head facing magnetic north.)

This embarrassment of riches leaves Higgins having to decide what to include and what to leave out. There’s a cracking chapter here on how animals sense time, and some exciting details about a sense of touch common to social mammals: one that responds specifically to cuddling.

On the other hand there’s very little about our extremely rare ability to smell what we eat while we eat it. This retronasal olfaction gives us a palate unrivalled in the animal kingdom, capable of discriminating between nearly two trillion savours: an ability which has all kinds of implications for memory and behaviour.

Is this a problem? Not at all. For all that it’s stuffed with entertaining oddities, Sentient is not a book about oddities, and Higgins’s argument, though colourful, is rigorous and focused. Over 400 exhilarating pages, she leads us to adopt an entirely unfamiliar way of thinking about the senses.

Because their mechanics are fascinating and to some degree reproducible (the human eye is, mechanically speaking, very much like a camera) we grow up thinking of the senses as mechanical outputs.

Looking at our senses this way, however, is rather like studying fungi but only looking at the pretty fruiting bodies. The real magic of fungi is their networks. And the real magic of our senses is the more than 100 billion nerve cells in each human nervous system — greater, Higgins says, than the number of stars in the Milky Way.

And that vast complexity — adapting to reflect and organise the world, not just over evolutionary time but also over the course of an individual life — gives rise to all kinds of surprises. In some humans, the ability to see with sound. In vampire bats (who can sense the location of individual veins to sink their little fangs into), the ability to detect heat using receptors that in most other mammals are used to detect acute pain.

In De Anima, the ancient philosopher Aristotle really let the side down in listing just five senses. No one expects him to have spotted exotica like cuddlesomeness and where-to-face-when-you-pee. But what about pain? What about balance? What about proprioception?

Aristotle’s restrictive and mechanistic list left him, and generations after him, with little purchase on the subject. Insights have been hard to come by.

Aristotle himself took one look at the octopus and declared it stupid.

Let’s see him driving a car with eight legs.

Variation and brilliance

Reading Barnabas Calder’s Architecture: from prehistory to climate emergency for New Scientist, 9 June 2021

For most of us, buildings are functional. We live, work, and store things in them. They are as much a part of us as the nest is a part of a community of termites.

And were this all there was to say about buildings, architectural historian Barnabas Calder might have found his book easier to write. Calder wants to ask “how humanity’s access to energy has shaped the world’s buildings through history.” And had his account remained so straightforward, we might have ended up with an eye-opening mathematical description of the increase in the energy available for work — derived first from wood, charcoal and straw, then from coal, then from oil — and how it first transformed, and (because of global warming) now threatens, our civilisation.

And sure enough the book is full of startling statistics. (Fun fact: the charcoal equivalent of today’s cement industry would have to cover an area larger than Australia in coppiced timber.)

But of course, buildings aren’t simply functional. They’re aspirational acts of creative expression. However debased it might seem, the most ordinary structure is a work of a species of artist, and to get built at all it must be bankrolled by people who are (at least relatively) wealthy and powerful. This was as true of the buildings of Uruk (our first known city, founded in what is now Iraq around 3200 BCE) as it is of the buildings of Shenzhen (in 1980 a Chinese fishing hamlet, today a city of nearly 13 million people).

While the economics of the built environment are crucially important, then, they don’t really make sense without the sociology, and even the psychology, especially when it comes to “the mutual stirring of hysteria between architect and client” that gave us St Peter’s Basilica in the 16th century and Chengdu’s New Century Global Center (currently the world’s biggest building) in the 21st.

Calder knows this: “What different societies chose to do with [their] energy surplus has produced endless variation and brilliance,” he says. So if sometimes his account seems to wander, this is why: architecture itself is not a wholly economic activity, and certainly not a narrowly rational one.

At the end of an insightful and often impassioned journey through the history of buildings, Calder does his level best to explain how architecture can address the climate emergency. But his advice and encouragement vanish under the enormity of the crisis. The construction and running of buildings account for 39 per cent of all human greenhouse gas emissions. Concrete is the most used material on Earth after water. And while there is plenty of “sustainability” talk in the construction sector, Calder finds precious little sign of real change. We still demolish too often, and build too often, using unsustainable cement, glass and steel.

It may be that solutions are out there, but are simply invisible. The history of architecture is curiously incomplete, as Calder himself acknowledges, pointing out that “entire traditions of impressive tent-like architecture are known mainly from pictures rather than physical remnants.”

Learning to tread more lightly on the earth means exactly that: a wholly sustainable architecture wouldn’t necessarily show up in the archaeological record. The remains of pre-fossil fuel civilisations can, then, only offer us a partial guide to what our future architecture should look like.

Perhaps we should look to existing temporary structures — to refugee camps, perhaps. The idea may be distressing, but fashions change.

Calder’s long love-poem to buildings left me, rather paradoxically, thinking about the Mongols of the 13th century, for whom a walled city was a symbol of bondage and barbarism.

They would have no more settled in a fixed house than they would have submitted to slavery. And their empire, which covered 23 million square kilometres, demolished more architecture than it raised.

Nothing happens without a reason

Reading Journey to the Edge of Reason: The Life of Kurt Gödel by Stephen Budiansky for the Spectator, 29 May 2021

The 20th-century Austrian mathematician Kurt Gödel did his level best to live in the world as his philosophical hero Gottfried Wilhelm Leibniz imagined it: a place of pre-established harmony, whose patterns are accessible to reason.

It’s an optimistic world, and a theological one: a universe presided over by a God who does not play dice. It’s most decidedly not a 20th-century world, but “in any case”, as Gödel himself once commented, “there is no reason to trust blindly in the spirit of the time.”

His fellow mathematician Paul Erdős was appalled: “You became a mathematician so that people should study you,” he complained, “not that you should study Leibniz.” But Gödel always did prefer study to self-expression, and this is chiefly why we know so little about him, and why the spectacular deterioration of his final years — a phantasmagoric tale of imagined conspiracies, strange vapours and shadowy intruders, ending in his self-starvation in 1978 — has come to stand for the whole of his life.

“Nothing, Gödel believed, happened without a reason,” says Stephen Budiansky. “It was at once an affirmation of ultrarationalism, and a recipe for utter paranoia.”

You need hindsight to see the paranoia waiting to pounce. But the ultrarationalism — that was always tripping him up. There was something worryingly non-stick about him. He didn’t so much resist the spirit of the time as blunder about totally oblivious of it. He barely noticed the Anschluss, barely escaped Vienna as the Nazis assumed control, and, once ensconced at the Institute for Advanced Study at Princeton, barely credited that tragedy was even possible, or that, say, a friend might die in a concentration camp (it took three letters for his mother to convince him).

Many believed that he’d blundered, in a way typical to him, into marriage with his life-long partner, a foot-care specialist and divorcée called Adele Nimbursky. Perhaps he did. But Budiansky does a spirited job of defending this “uneducated but determined” woman against the sneers of snobs. If anyone kept Gödel rooted to the facts of living, it was Adele. She once stuck a concrete flamingo, painted pink and black, in a flower bed right outside his study window. All evidence suggests he adored it.

Idealistic and dysfunctional, Gödel became, in mathematician Jordan Ellenberg’s phrase, “the romantic’s favourite mathematician”, a reputation cemented by the fact that we knew hardly anything about him. Key personal correspondence was destroyed at his death, while his journals and notebooks — written in Gabelsberger script, a German shorthand that had fallen into disuse by the mid-1920s — resisted all-comers until Cheryl Dawson, wife of the man tasked with sorting through Gödel’s mountain of posthumous papers, learned how to transcribe it all.

Biographer Stephen Budiansky is the first to try to give this pile of new information a human shape, and my guess is it hasn’t been easy.

Budiansky handles the mathematics very well, capturing the air of scientific optimism that held sway over intellectual Vienna and induced Germany’s leading mathematician David Hilbert to declare that “in mathematics there is nothing unknowable!”

Solving Hilbert’s four “Problems of Laying Foundations for Mathematics” of 1928 was supposed to secure the foundations of mathematics for good, and Gödel, a 22-year-old former physics student, solved one of them. Unfortunately for Hilbert and his disciples, however, Gödel also proved the insolubility of the other three. So much for the idea that all mathematics could be derived from the propositions of logic: Gödel demonstrated that logic itself was flawed.

This discovery didn’t worry Gödel nearly so much as it did his contemporaries. For Gödel, as Budiansky explains, “Mathematical objects and a priori truth was as real to him as anything the senses could directly perceive.” If our reason failed, well, that was no reason to throw away the world: we would always be able to recognise some truths through intuition that could never be established through computation. That, for Gödel, was the whole point of being human.

It’s one thing to be a Platonist in a world dead set against Platonism, or an idealist in a world that’s gone all-in with materialism. It’s quite another to see acts of sabotage in the errors of TV listings magazines, or political conspiracy in the suicide of King Ludwig II of Bavaria. The Elysian calm and concentration afforded Gödel after the second world war at the Institute for Advanced Study probably did him more harm than good. “Gödel is too alone,” his friend Oskar Morgenstern fretted: “he should be given teaching duties; at least an hour a week.”

In the end, though, neither his friendships nor his marriage nor that ridiculous flamingo could tether to the Earth a man who had always preferred to write for his desk drawer, and Budiansky, for all his tremendous efforts and exhaustive interrogations of Gödel’s times and places, acquaintances and offices, can only leave us, at the end, with an immeasurably enriched version of Gödel the wise child. It’s an undeniably distracting and reductive picture. But — and this is the trouble — it’s not wrong.

Snowflake science

Watching Noah Hutton’s documentary In Silico for New Scientist, 19 May 2021

Shortly after he earned a neuroscience degree, young filmmaker Noah Hutton fell into the orbit of Henry Markram, an Israeli neuroscientist based at the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland.

Markram models brains, axon by axon, dendrite by dendrite, in all their biological and chemical complexity. His working assumption is that the brain is an organ, and so a good enough computer model of the brain ought to reveal its workings and pathologies, just as “in silico” models of the kidneys, spleen, liver and heart have enriched our understanding of those organs.

Markram’s son Kai has autism, so Markram has skin in this game. Much as we might want to improve the condition of people like Kai, no one is going to dig about in a living human brain to see if there are handy switches we can throw. Markram hopes a computer model will offer an ethically acceptable route to understanding how brains go wrong.

So far, so reasonable. Only in 2005, Henry Markram said he would build a working computer model of the human brain in 10 years.

Hutton has interviewed Markram, his colleagues and his critics, every year for well over a decade, as the project expanded and the deadline shifted. Markram’s vision transfixed purseholders across the European Union: in 2013 his Blue Brain Project won a billion Euros of public funding to create the Human Brain Project in Geneva.

And though his tenure did not last long, Markram is hardly the first founder to be wrested from the controls of his own institute, and he won’t be the last. There have been notable departures, but his Blue Brain Project endures, still working, still modelling: its in silico model of the mouse neocortex is astounding to look at.

Perhaps that is the problem. The Human Brain Project has become, says Hutton, a special-effects house, a shrine to touch-screens, curve-screens, headsets, but lacking any meaning to anything and anyone “outside this glass and steel building in Geneva”.

We’ve heard criticisms like this before. What about the way the Large Hadron Collider at CERN sucks funding from the rest of physics? You don’t have to scratch too deeply in academia to find a disgruntled junior researcher who’ll blame CERN for their failed grant application.

CERN, however, gets results. The Human Brain Project? Not so much.

The problem is philosophical. It is certainly within our power to model some organs. The brain, however, is not an organ in the usual sense. It is, by any engineering measure, furiously inefficient. Take a look: a spike in the dendrites releases this neurotransmitter, except when it releases that neurotransmitter, except when it does nothing at all. Signals follow this route, except when they follow that route, except when they vanish. Brains may look alike, and there’s surely some commonality in their working. At the level of the axon, however, every brain behaves like a beautiful and unique snowflake.

The Blue Brain Project’s models generate noise, just like regular brains. Someone talks vaguely about “emergent properties” — an intellectual Get Out of Jail Free card if ever there was one. But since no-one knows what this noise means in a real brain, there’s no earthly way to tell if the Project’s model is making the right kind of noise.

The Salk Institute’s Terrence Sejnowski reckons the whole caper’s a bad joke; if successful, Markram will only generate a simulation “every bit as mysterious as the brain itself”.

Hutton accompanies us down the yawning gulf between what Markram may reasonably achieve, and the fantasies he seems quite happy to stoke in order to maintain his funding. It’s a film made on a budget of nothing, over years, and it’s not pretty. But Hutton (whose very smart sf satire Lapsis came out in the US last month) makes up for all that with the sharpest of scripts. In Silico is a labour of love, rather more productive, I fear, than Markram’s own.

Waiting for the End of the End of the World

Watching the 2021 European Media Arts Festival on-line for New Scientist, 19 May 2021

For over forty years, the European Media Art Festival in Osnabrueck has offered attendees a glimpse of the best short films coming on-line and to festivals over the coming year. It’s been a reliable cultural barometer, too, revealing, through film, some of our deepest social anxieties and preoccupations. This year saw science fiction swallowing the festival whole.

It’s as though the genre were becoming, not just a valid way to talk about the present, but the only way.

This was the quite explicit message of the audiovisual presentation Planet City and the Return of Global Wilderness by London-trained, LA-based architect Liam Young, much of whose work is speculative — not to say downright science-fictional. Part of Young’s presentation was a retrospective of a career spent exploring global infrastructures, “an unevenly-distributed megastructure that hides in plain sight… slowly stitched together from stolen lands by planetary logistics.”

Forming a powerful contrast with his past travels — through container shipping, the garment supply chain, lithium mining and other real-world adventures — Planet City also featured a utopian future in which humanity sagely withdraws “into one hyper-dense metropolis housing the entire population of the Earth”.

It’s the impossibility of this utopia that’s Young’s point. Science fiction used to be full of such utopian possibilities. These days, however, it has become, Young says, just our favourite way of explaining to ourselves, over and over, the disasters engulfing us and our planet. The once hopeful genre of science fiction cedes ground to dystopia, leaving us “stranded in the long now… waiting for the end of the End of the World”.

We’ve confronted the End of the World before, of course. Marian Mayland’s film essay Michael Ironside and I weaves between three imaginary rooms, assembled from stills and short clips from three iconic science fiction films. The rooms are uninhabited, cluttered, uncanny, and cut together to create an imaginary habitation connected to the outside world via shafts and closet doors. War Games’s bedroom in a suburban family house (1983), Real Genius’s California campus dorm room (1985) and the bowels of Sea Quest DSV’s futuristic nuclear submarine (1993) fold into each other to create a poignant fictional 1990s childhood, capturing the effects of Cold War thinking on a generation of geeky male adolescents.

Mayland’s film, which won a German film critics’ award at the festival, is exactly the sort of work — moving between film and performance, document and experiment — that the festival has been championing for over forty years.

Other science-fictional experiments included Josh Weissbach’s A Landscape to be Invented, a collage of wobbly 16mm and Super 8 footage set to excerpts of audiobook sci-fi from the likes of Kim Stanley Robinson and Cixin Liu. It’s a kind of “how to” manual for terraforming a distant world, only this world is not verdant, but violet, not green but purple, as Weissbach passes his footage through a digital, faux-ultraviolet filter.

Zachary Epcar’s more obviously satirical The Canyon sees the calm pace of life in a sunny waterside housing estate turn increasingly strange, as the blissed-out, eavesdropped lines of the inhabitants (“Sometimes I come to in the glassware aisle, and I don’t know how I got there”) give way to the meaningless electronic gabble and vibration of phones, toothbrushes and keyfobs.

If this all sounds rather grim, rather unsmiling, even rather hopeless — well, I don’t think the selection, or even the works themselves, were to blame. I think Young is right and the problem lies in science fiction itself: that it’s ceased to be a playground, and has become instead a deadly serious way of explaining an increasingly interconnected and technological world. And that’s fine. That’s science fiction growing up.

But what the artist-filmmakers of EMAF have yet to find is some other way — less technocratic, perhaps, and more political, more spiritual — of imagining a better future.

Life at all costs

Reading The Next 500 Years by Chris Mason for New Scientist, 12 May 2021

Humanity’s long-term prospects don’t look good. If we don’t all kill each other with nuclear weapons, that overdue planet-killing asteroid can’t be too far off; anyway, the Sun itself will (eventually) explode, obliterating all trace of life in our planetary system.

As if awareness of our own mortality hasn’t given us enough to fret about, we are also capable of imagining our own species’ extinction. Once we do that, though, are we not ethically bound to do something about it?

Cornell geneticist Chris Mason thinks so. “Engineering,” he writes, “is humanity’s innate duty, needed to ensure the survival of life.” And not just human life; Mason is out to ensure the cosmic future of all life, including species that are currently extinct.

Mason is not the first to think this way, but he arrives at a fascinating moment in the history of technology, when we may, after all, be able to avoid some previously unavoidable catastrophes.

Mason’s 500-year plan for our future involves reengineering human and other genomes so that we can tolerate the (to us) extreme environments of other worlds. Our ultimate goal, Mason says, should be to settle new solar systems.

Spreading humanity to the stars would hedge our bets nicely, only we currently lack the tools to survive the trip, never mind the stay. That’s where Mason comes in. He was principal investigator on NASA’s Twins Study, begun in 2015: a foundational investigation into the health of identical twins Scott Kelly and Mark Kelly during the 340 days Scott was in space and Mark was on Earth.

Mason explains how the Twins Study informed NASA’s burgeoning understanding of the human biome, how a programme once narrowly focused on human genetics now extends to embrace bacteria and viruses, and how new genetic engineering tools like CRISPR and its hopeful successors may enable us to address the risks of spaceflight (exposure to cosmic radiation is considered the most serious) and protect the health of settlers on the Moon, on Mars, and even, one day, on Saturn’s moon Titan.

Outside his specialism, Mason has some fun (a photosynthesizing human would need skin flaps the size of two tennis courts — so now you know) then flounders slightly, reaching for familiar narratives to hold his sprawling vision together. More informed readers may start to lose interest in the later chapters. The role of spectroscopy in the detection of exoplanets is certainly relevant, but in a work of this gargantuan scope, I wonder if it needed rehearsing. And will readers of a book like this really need reminding of Frank Drake’s equation (regarding the likelihood of extra-terrestrial civilisations)?

Uneven as it is, Mason’s book is a genuine, timely, and very personable addition to a 1,000-year-old Western tradition, grounded in religious expectations and a quest for transcendence and salvation. Visionaries from Isaac Newton to Joseph Priestley to Russian space pioneer Konstantin Tsiolkowsky have spouted the very tenets that underpin Mason’s account: that the apocalypse is imminent; and that, by increasing human knowledge, we may recover the Paradise we enjoyed before the Flood.

Masonic beliefs follow the same pattern; significantly, many famous NASA astronauts, including John Glenn, Buzz Aldrin and Gordo Cooper, were Freemasons.

Mason puts a new layer of flesh on what have, so far, been some ardent but very sketchy dreams. And, though a proud child of his engineering culture, he is no dupe. He understands and explores all the major risks associated with genetic tinkering, and entertains all the most pertinent counter-arguments. He knows where 19th-century eugenics led. He knows the value of biological and neurological diversity. He’s not Frankenstein. His deepest hope is not that his plans are realised in any recognisable form; but that we continue to make plans, test them and remake them, for the sake of all life.

Return From The Stars

Here’s my foreword to the new MIT edition of Stanislaw Lem’s Return from the Stars.

For a while I lived on the penthouse level of a gherkin-shaped glass tower, five minutes from DIFC, Dubai’s international financial centre. I’d spend the day perched at my partner’s kitchen counter, or sprawled in her hammock, writing long-hand into a green notebook. In the evening I would stand at the glass wall, stir-crazy by then, fighting an ice-cream headache brought on by the air conditioning, counting lime-green Lamborghinis crawling nose-to-tail along the six lanes of Al Saada Street, far below.

Once the sun bled out into the mist, I would leave the tower and pick my way over broken and unfinished pavements to the DIFC. There were several ways into the complex; none of them resembled civic space. A narrow, wood-panelled atrium led to the foot of a single, slow escalator. It deposited me on a paved mezzanine from which several huge towers grew. There weren’t any pavements as such, just stretches of empty space between chairs and tables. There were art galleries and offices and a destination restaurant run by a car company, each unit folding into the next, giving me at one moment a feeling of exhilaration, as though this entire place was mine for the taking, the next a sense of tremendous awkwardness, as though I’d stumbled uninvited into an important person’s private party.

I couldn’t tell from the outside which mirrored door led to a mall, which to a reception desk, which to a lobby. I would burst into every building with an aggressive confidence, penetrating as far as I could into these indeterminate spaces until I found an exit — though an exit into what, exactly? Another atrium. A bank. A water feature. A beauty salon. Eventually I would be stopped and politely redirected.


Of middling years, the space explorer Hal Bregg is having to contend with two kinds of ageing. The first is historical. The other, biological.

He has just returned to Earth after a gap of 127 years, and the place he once called home has turned incomprehensibly strange in the intervening time. Thanks to the dilation effect brought on by interstellar travel, only ten ship-board years have passed. But this interval has been hard on Bregg. The trauma and beauty of his harrowing missions aboard the Prometheus still haunt him, and power some of the Return’s most affecting passages.

Bregg’s arrival on Earth, fresh from acclimatisation sessions on the Moon, is about as exciting as a bus ride. At the terminal there are no parades, no gawkers, no journalists. He’s a curiosity at best, and not even a rare one (there have been other ships, other crews). Arriving at the terminus, he’s just another passenger. He misses the official sent to meet him, decides he’ll find his own way about, sets off for the city, and becomes quickly lost. He can’t find the edge of the terminus. He can’t find the floor he needs. Everything seems to be rising. Is this a giant elevator? “It was hard to rest the eye on anything that was not in motion, because the architecture on all sides appeared to consist in motion alone, in change…” He decides to follow the crowd. Is this a hotel? Are these rooms? People are fetching objects from the walls. He does the same. In his hands is a coloured translucent tube, slightly warm. Is this a room key? A map? Food?

“And suddenly I felt like a monkey that has been given a fountain pen or a lighter; for an instant I was seized by a blind rage.”

Among Stanislaw Lem’s many gifts is his understanding of how the future works. The future is not a series of lucky predictions (though Lem was luckier than most at that particular craps table). The future is not wholly exotic; it is, by necessity, still chock full of the past. The future is not one place, governed by one idea. Suppose you get a handle on some major change (Return from the Stars boasts one of those — but we’ll get to “betrization” in a minute). This won’t grant you some mysterious, magical insight into everything else. You’ll be armed, but you’ll still be lost. The future is bigger than you think.

The point about the future is that it’s unreadable. You recognise the language, but you can no longer speak it. The words fell out of order years ago. The vowels shifted.

“I was numb from the strain of trying not to do anything wrong. This, for four days now. From the very first moment I was invariably behind in everything that went on, and the constant effort to understand the simplest conversation or situation turned that tension into a feeling horribly like despair.”

Bregg contemplates the horrid beauty of the terminal (yes, he’s still in the terminal. Settle in: he’s going to be wandering that terminal for a very long time) and admires (if that is quite the word) its “coloured galaxies of squares, clusters of spiral lights, glows shimmering above skyscrapers, the streets: a creeping peristalsis with necklaces of light, and over this, in the perpendicular, cauldrons of neon, feather crests and lightning bolts, circles, airplanes, and bottles of flame, red dandelions made of needle-signal lights, momentary suns and haemorrhages of advertising, mechanical and violent.”

But if you think Lem will stop there, in contemplation of an overdriven New York skyline, then you need to read more Lem. The window gutters out. “You have been watching clips from newsreels of the seventies,” the air announces, “in the series Views of the Ancient Capitals.”


Science fiction delights in expensive real estate. Sky-high homes give its protagonists the geostationary perspective they require to scry large amounts of weird information very quickly. In cinema, Rick Deckard and his replicant protégé K are both penthouse dwellers. And in J G Ballard’s High Rise (1975) Robert Laing eats roast dog on the balcony of, yes, a 25th-floor apartment.

The world of Return from the Stars is altogether more socially progressive. Its buildings are only partly real; their continuation is an image, “so that the people living on each level do not feel deprived. Not in any way.” The politics of this future, which teaches its infant children “the principles of tolerance, coexistence, respect for other beliefs and attitudes,” are disconcertingly familiar, and mostly admirable.

But let’s stay with the media for a moment. This is a future adept at immersing its denizens in a thoroughly mediated environment — thus an enhanced movie becomes a “real” in Michael Kandel’s savvy, punning translation. For those who know the canon, there’s a delicious hint here of what’s to come: in Lem’s The Futurological Congress (1971) 29 billion people live out lives of unremitting squalor, yet each thinks they’re a tycoon, wrapt as they are in drug-induced hallucination.

The Return’s future, by contrast, handles the physical world perfectly well, thank you. Its population are not narcotised. Not at all: they’re full of good ideas, hold down rewarding jobs, maintain close friendships, enjoy working marriages, and nurture happy children. They have all manner of things to live for. This future — “a world of tranquility, of gentle manners and customs, easy transitions, undramatic situations” — is anything but a dystopia. Why, it’s not even dull!


An operation, “betrization”, conducted in early childhood, renders people incapable of serious violence. Murder becomes literally inconceivable, and as a side-effect, risk loses its allure. Betrization has been universally adopted across the Earth, and it’s mandatory.

Bregg and his ancient fellows cannot help but view betrization with horror. Consequently, no-one has the stomach to force it upon them. This leaves the returning astronauts as predators in fields full of friendly, intelligent, accommodating sheep.

But why would Hal Bregg want to predate? Why would any of them? What would it gain them, beyond a spoiled conscience? Everyone on this contented Earth assumes the returnees are savages, though most are far too polite (or risk-averse) to say so. For Bregg and his fellows, it’s enraging. It’s alienating. It’s a prompt to the sort of misbehaviour that they would otherwise never dream of indulging.

Lem himself considered Return from the Stars a failure, and blamed betrization for it. As an idea, it was too on-the-nose: a melodramatic conceit that he had to keep underplaying so the story — a quiet affair about friendship and love and misunderstanding — would stay on track. But times and customs change, and history has been kind to the Return. It has become, in 2020, a better book than it was in the 1960s. This is because we have grown into the very future it predicted. Indeed, we are embroiled in precisely the kind of cultural conflict Lem said would ensue, once betrization was invented.

“Young people, betrizated, became strangers to their own parents, whose interests they did not share. They abhorred their parents’ bloody tastes. For a quarter of a century it was necessary to have two types of periodicals, books, plays: one for the old generation, one for the new.”

Western readers with experience of university life and politics over the last thirty years will surely recognise themselves in these pages, where timid passions dabble with free love, where thin skins heal in safe spaces, and intellectual gadflies navigate a landscape of extreme emotional delicacy, under constant threat of cancellation.

Will they blush, to see themselves thus reflected? I doubt it. Emulsify them as you like, kindness and a sense of humour do not mix. Anyway, the Return is not a satire, any more than it is a dystopia. It is not, when push comes to shove, a book about a world at all (for which some may read: not science fiction).

It is a book about Hal Bregg. About his impulse towards solitude and his need for company. About his deep respect for old friends, and his earnest desire for new ones. It’s about a kind and thoughtful bull in an emotional china shop, trying desperately not to rape things. It’s about men.


We would meet at last and order a cab and within the hour we would be sitting overlooking the Gulf in a bar fashioned to resemble the hollowed interior of a golden nugget. I remember one evening my partner chose what to order and I was handed a glass of a colourless liquid topped with a film of crude oil. I thought: Do I drink this or do I light it? And suddenly I felt like a monkey that has been given a fountain pen or a lighter; for an instant I was seized by a blind rage.

That was the evening she told me why she didn’t want to see me any more. The next day I left for London. 19 March 2017: World Happiness Day. As we turned north for the airport I noticed that the sign at the junction had changed. In place of Al Saada: “Happiness Street.”

And a ten-storey-high yellow smiley icon had been draped across the face of a government building.

To hell with the philosopause!

Reading Hawking Hawking: The Selling of a Scientific Celebrity by Charles Seife for the Spectator, 1 May 2021

I could never muster much enthusiasm for the theoretical physicist Stephen Hawking. His work, on the early universe and the nature of spacetime, was Nobel-worthy, but those of us outside his narrow community were horribly short-changed. His 1988 global best-seller A Brief History of Time was incomprehensible, not because it was difficult, but because it was bad.

Nobody, naturally, wanted to ascribe Hawking’s popular success to his rare form of Motor Neurone Disease, Hawking least of all. He afforded us no room for horror or, God forbid, pity. In 1990, asked a dumb question about how his condition might have shaped his work (because people who suffer ruinous, debilitating illnesses acquire compensating superpowers, right?) Hawking played along: “I haven’t had to lecture or teach undergraduates, and I haven’t had to sit on tedious and time-consuming committees. So I have been able to devote myself completely to research.”

The truth — that Hawking was one of the worst popular communicators of his day — is as evident as it is unsayable. A Brief History of Time was incomprehensible because after nearly five years’ superhuman effort, the author proved incapable of composing a whole book unaided. He couldn’t even do mathematics the way most people do it, by doodling, since he’d already lost the use of his hands. He could not jot notes. He could not manipulate equations. He had to turn every problem he encountered into a species of geometry, just to be able to think about it. He held his own in an impossibly rarefied profession for years, but the business of popular communication was beyond him. As was communication, in the end, according to Hawking’s late collaborator Andy Strominger: “You would talk about words per minute, and then it went to minutes per word, and then, you know, it just got slower and slower until it just sort of stopped.”

Hawking became, in the end, a computerised patchwork of hackneyed, pre-stored utterances and responses. Pull the string at his back and marvel. Charles Seife, a biographer braver than most, begins by staring down the puppet. His conceit is to tell Stephen Hawking’s story backwards, peeling back the layers of celebrity and incapacity to reveal the wounded human within.

It’s a tricksy idea that works so well, you wonder why no-one thought of it before (though ordering his material and his arguments in this way must have nearly killed the poor author).

Hawking’s greatest claim to fame is that he discovered things about black holes — still unobserved at that time — that set the two great schools of theoretical physics, quantum mechanics and relativity, at fresh and astonishingly creative loggerheads.

But a new golden era of astronomical observation dawned almost immediately after, and A Brief History was badly outdated before it even hit the shelves. It couldn’t even get the age of the universe right.

It used to be that genius that outlived its moment could reinvent itself. When new-fangled endocrine science threw Ivan Pavlov’s Nobel-winning physiology into doubt, he reinvented himself as a psychologist (and not a bad one at that).

Today’s era of narrow specialism makes such a move almost impossible but, by way of intellectual compensation, there is always philosophy — a perennially popular field more or less wholly abandoned by professional philosophers. Images of the middle-aged scientific genius indulging its philosopause in book after book about science and art, science and God, science and society and so on and so forth, may raise a wry smile, but work of real worth has come out of it.

Alas, even if Hawking had shown the slightest aptitude for philosophy (and he didn’t), he couldn’t possibly have composed it.

In our imaginations, Hawking is the cartoon embodiment of the scientific sage, effectively disembodied and above ordinary mortal concerns. In truth, life denied him a path to sagacity even as it steeped him in the spit and stew of physical being. Hawking’s libido never waned. So to hell with the philosopause! Bring on the dancing girls! Bring on the cheques, from Specsavers, BT, Jaguar, Paddy Power. (Hawking never had enough money: the care he needed was so intensive and difficult that a transatlantic flight could set him back around a quarter of a million pounds.) Bring on the billionaires with their fat chequebooks (naifs, the lot of them, but decent enough, and generous to a fault). Bring on the countless opportunities to bloviate about subjects he didn’t understand, a sort of Prince Charles only without Charles’s efforts at warmth.

I find it impossible, having read Seife, not to see Hawking through the lens of Jacobean tragedy, warped and raging, unable even to stick a finger up at a world that could not — or, much worse, *chose* not to — understand him. Of course he was a monster, and years too late, and through a book that will anger many, I have come to love him for it.

Bacon is grey

Reading Who Poisoned Your Bacon Sandwich? by Guillaume Coudray and Hooked: How processed food became addictive by Michael Moss for the Financial Times, 21 April 2021

The story of how food arrives on our plate is a living, breathing sci-fi epic. Fertiliser produced by sucking nitrogen out of the air now sustains about half the global population. Farmers worldwide depend on the data spewing from 160 or so environmental satellite missions in low-earth orbit, not to mention literally thousands of weather satellites of various kinds.

That such a complex system is precarious hardly needs saying. It only takes one innovative product, or cheeky short-cut, to transform the health, appearance and behaviour of nations. What gastronomic historian, 50 years ago, would have predicted that China would grow fat, or that four out of five French cafés would shut up shop in a single generation?

To write about the food supply is to wrestle with problems of scale, as two new books on the subject demonstrate. To explain how we turned the green revolution of the 1960s into a global obesity pandemic in less than half a century, Michael Moss must reach beyond history entirely, and into the contested territories of evolutionary biology. Guillaume Coudray, a Paris-based investigative journalist, prefers a narrower argument, focusing on the historical accidents, and subsequent cover-ups, that even now add cancer-causing compounds to our processed meat. The industry attitudes and tactics he reveals strongly resemble those of the tobacco industry in the 1970s and 1980s.

Ably translated as Who Poisoned Your Bacon Sandwich?, Coudray’s 2017 exposé tells the story of the common additives used to cure — and, crucially, colour — processed meats. Until 1820, saltpetre (potassium nitrate; a constituent of gunpowder) was our curing agent of choice — most likely because hunters in the 16th century discovered that game birds shot with their newfangled muskets kept for longer. Then sodium nitrate appeared, and — in the mid 1920s — sodium nitrite. All three give meats a convincing colour in a fraction of the time traditional salting requires. Also, their disinfectant properties allow unscrupulous producers to operate in unsanitary conditions.

Follow basic rules of hygiene, and you can easily cure meat using ordinary table salt. But traditional meats often take upwards of a year to mature; no wonder that the 90-day hams pouring out of Chicago’s meatpacking district at the turn of the 20th century conquered the world market. Parma ham producers still use salt; most everyone else has resorted to nitrates and nitrites just to survive.

It wasn’t until the 1970s that researchers found a link between these staple curing agents and cancer. This was, significantly, also the moment industry lobbyists began to rewrite food history. The claim that we’ve been preserving meat with saltpetre for over 5,000 years is particularly inventive: back then it was used to preserve Egyptian mummies, not cure hams. Along with the massaged history came obfuscation – for instance arguments that nitrates and nitrites are not carcinogenic in themselves, even if they give rise to carcinogenic agents during processing, cooking or, um, digestion.

And when, in 2015, experts at the International Agency for Research on Cancer classified all processed meats in “group 1: carcinogenic to humans” (they can cause colorectal cancer, the second most deadly cancer we face) the doubt-mongers redoubled their efforts — in particular the baseless claim that nitrates and nitrites are our only defence against certain kinds of food poisoning.

There are alternatives. If it’s a disinfectant-cum-curing agent you’re after, organic water-soluble salts called sorbates work just fine.

Crucially, though, sorbates have no colourant effect, while nitrates and nitrites give cured meat that rosy glow. Their use is so widespread, we have clean forgotten that the natural colour of ham, pâté, wiener sausages and bacon is (deal with it) grey.

That the food industry wants to make food as attractive as possible, so that it can sell as much as possible is, of itself, hardly news.

And in Hooked (a rather different beast to his 2013 exposé Salt Sugar Fat), American journalist Michael Moss finds that — beyond the accusations and litigations around different foodstuffs — there’s something systemically wrong with our relationship to food. US consumers now fill three-quarters of their shopping carts with processed food. Snacking now accounts for around a quarter of our daily calorie intake. Pointing the finger at Coca-Cola or McDonalds is not going to solve the bigger problem, which Moss takes to be changes in the biology of our ancestors that have made it extremely difficult to recoup healthy eating habits once they’ve run out of control.

Moss’s argument is cogent, but not simple. We have to get to grips, first, with the latest thinking on addiction, which has more or less dispensed with the idea that substances are mind-altering. Rather, they are mind-engaging, and the speed of their effect has quite as much to do with their addictive strength as their pharmacology does, if not more.

By this measure, food is an incredibly powerful drug (a taste of sugar hits the brain 20 times faster than a lungful of tobacco smoke). But does it make any sense to say we’re all addicted to food?

Moss says it does — only we need to dip our toes in evolutionary biology to understand why. As primates, we have lost a long bone, called the transverse lamina, that used to separate the mouth from the nose. Consequently, we can smell food as we taste it.

No one can really explain why an enhanced appreciation of flavour gave us such a huge evolutionary advantage, but the biology is ungainsayable: we are an animal obsessed with gustatory variety. In medieval France, this inspired hundreds of different sauces. Today, in my local supermarket, it manifests as 50-odd different varieties of potato chip.

The problem, Moss says, is not that food manufacturers are trying to addict us. It is that they have learned how to exploit an addiction baked into our biology.

So what’s the solution? Stop drinking anything with calories? Avoid distractions when we eat? Favour foods we have to chew? All of the above, of course — though it’s hard to see how good advice on its own could ever persuade us all to act against our own appetites.

Hooked works, in a rambunctious, shorthand sort of way. Ultimately, though, it may prove to be a transitional book for an author who is edging towards a much deeper reappraisal of the relationship between food, convenience (time, in other words), and money.

Neither Moss nor Coudray demands we take to the barricades just yet. But the pale, unappetisingly grey storm clouds of a food revolution are gathering.