One of those noodly problems

Reading The Afterlife of Data by Carl Öhman for the Spectator

They didn’t call Diogenes “the Cynic” for nothing. He lived to shock the (ancient Greek) world. When I’m dead, he said, just toss my body over the city walls to feed the dogs. The bit of me that I call “I” won’t be around to care.

The revulsion we feel at this idea tells us something important: that the dead can be wronged. Diogenes may not care what happens to his corpse, but we do. And doing right by the dead is a job of work. Some corpses are reduced to ash, some are buried, and some are fed to vultures. In each case the survivors all feel, rightly, that they have treated their loved ones’ remains with respect.

What should we do with our digital remains?

This sounds like one of those noodly problems that keep digital ethicists like Öhman in grant money — but some of the stories in The Afterlife of Data are sure to make the most sceptical reader stop and think. There’s something compelling, and undeniably moving, in one teenager’s account of how, ten years after losing his father, he found they could still play together; at least, he could compete against his dad’s last outing on an old Xbox racing game.

Öhman is not spinning ghost stories here. He’s not interested in digital afterlives. He’s interested in remains, and in emerging technologies that, from the digital data we inadvertently leave behind, fashion our artificially intelligent simulacra. (You may think this is science fiction, but Microsoft doesn’t, and has already taken out several patents.)

This rapidly approaching future, Öhman argues, seems uncanny only because death itself is uncanny. Why should a chatty AI simulacrum prove any more transgressive than, say, a photograph of your lost love, given pride of place on the mantelpiece? We got used to the one; in time we may well get used to the other.

What should exercise us is who owns the data. As Öhman argues, ‘if we leave the management of our collective digital past solely in the hands of industry, the question “What should we do with the data of the dead?” becomes solely a matter of “What parts of the past can we make money on?”’

The trouble with a career in digital ethics is that however imaginative and insightful you get, you inevitably end up playing second fiddle to some early episode of Charlie Brooker’s TV series Black Mirror. The one entitled “Be Right Back”, in which a dead lover returns in robot form to market upgrades of itself to the grieving widow, stands waiting at the end of almost every road Öhman travels here.

Öhman reminds us that the digital is a human realm, and one over which we can and must exert our values. Unless we actively delete them (in a sort of digital cremation, I suppose) our digital dead are not going away, and we are going to have to accommodate them somehow.

A more modish, less humane writer would make the most of the fact that recording has become the norm, so that, as Öhman puts it, “society now takes place in a domain previously reserved for the dead, namely the archive.” (And, to be fair, Öhman does have a lot of fun with the idea that by 2070, Facebook’s dead will outnumber its living.)

Ultimately, though, Öhman draws readers through the digital uncanny to a place of responsibility. Digital remains are not just a representation of the dead, he says, “they are the dead, an informational corpse constitutive of a personal identity.”

Öhman’s lucid, closely argued foray into the world of posthumous data is underpinned by this sensible definition of what constitutes a person: “A person,” he says, “is the narrative object that we refer to when speaking of someone (including ourselves) in the third person. Persons extend beyond the selves that generate them.” If I disparage you behind your back, I’m doing you a wrong, even though you don’t know about it. If I disparage you after you’re dead, I’m still doing you wrong, though you’re no longer around to be hurt.

Our job is to take ownership of each other’s digital remains and treat them with human dignity. The model Öhman holds up for us to emulate is the Bohemian author and composer Max Brod, who had the unenviable job of deciding what to do with manuscripts left behind by his friend Franz Kafka, who wanted him to burn them. In the end Brod decided that the interests of “Kafka”, the informational body constitutive of a person, overrode (barely) the interests of Franz, his no-longer-living friend.

What to do with our digital remains? Öhman’s excellent reply treats this challenge with urgency, sanity and, best of all, compassion. Max Brod’s decision wasn’t and isn’t obvious, and really, the best you can do in these situations is to make the error you and others can best live with.

Geometry’s sweet spot

Reading Love Triangle by Matt Parker for the Telegraph

“These are small,” says Father Ted in the eponymous sitcom, and he holds up a pair of toy cows. “But the ones out there,” he explains to Father Dougal, pointing out the window, “are far away.”

It may not sound like much of a compliment to say that Matt Parker’s new popular mathematics book made me feel like Dougal, but fans of Graham Linehan’s masterpiece will understand. I mean that I felt very well looked after, and, in all my ignorance, handled with a saint-like patience.

Calculating the size of an object from its spatial position has tried finer minds than Dougal’s. A long virtuoso passage early on in Love Triangle enumerates the half-dozen stages of inductive reasoning required to establish the distance of the largest object in the universe — a feature within the cosmic web of galaxies called The Giant Ring. Over nine billion light years away, the Giant Ring still occupies 34.5 degrees of the sky: now that’s what I call big and far away.

Measuring it has been no easy task, and yet the first, foundational step in the calculation turns out to be something as simple as triangulating the length of a piece of road.

“Love Triangle”, as no one will be surprised to learn, is about triangles. Triangles were invented (just go along with me here) in ancient Egypt, where the regularly flooding river Nile obliterated boundary markers for miles around and made rural land disputes a tiresome inevitability. Geometry, says the historian Herodotus around 430 BC, was invented to calculate the exact size of a plot of land. We’ve no reason to disbelieve him.

Parker spends a good amount of time demonstrating the practical usefulness of basic geometry, which allows us to extract the shape and volume of triangular space from a single angle and the length of a single side. At one point, on a visit to Tokyo, he uses a transparent ruler and a tourist map to calculate the height of the city’s tallest tower, the SkyTree.
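For the curious, here is a minimal sketch, in Python, of the tangent calculation that a ruler-and-map trick like this relies on. The distance, angle and eye height below are invented for illustration; they are not Parker’s measurements.

```python
# Estimating a tower's height from one known side (the horizontal distance,
# read off a map) and one measured angle (the elevation of the top).
# All numbers here are made up for illustration.
import math

distance_m = 600.0     # assumed horizontal distance to the tower's base
elevation_deg = 45.0   # assumed angle of elevation to the tower's top
eye_height_m = 1.6     # assumed height of the observer's eyes above ground

height_m = distance_m * math.tan(math.radians(elevation_deg)) + eye_height_m
print(f"Estimated height: {height_m:.0f} m")  # ~602 m with these invented figures
```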

Having shown triangles performing everyday miracles, he then tucks into their secret: “Triangles,” he explains, “are in the sweet spot of having enough sides to be a physical shape, while still having enough limitations that we can say generalised and meaningful things about them.” Shapes with more sides get boring really quickly, not least because they become so unwieldy in higher dimensions, which is where so many of the joys of real mathematics reside.

Adding dimensions to triangles adds just one corner per dimension. A square, on the other hand, explodes, doubling its number of corners with each dimension. (A cube has eight.) This makes triangles the go-to shape for anyone who wants to assemble meshes in higher dimensions. All sorts of complicated paths are brought within computational reach, making possible all manner of civilisational triumphs, including (but not limited to) photorealistic animations.
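If you want to check that corner-counting for yourself, a few lines of Python will do it: an n-dimensional “triangle” (a simplex) has n + 1 corners, while an n-dimensional “square” (a hypercube) has 2 to the power n.

```python
# Corner counts per dimension: a simplex gains one corner per added dimension,
# while a hypercube doubles its corners each time.
for n in range(1, 6):
    print(f"{n}D: simplex has {n + 1} corners, hypercube has {2 ** n}")
# 2D: triangle 3 vs square 4; 3D: tetrahedron 4 vs cube 8 (the "eight" above).
```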

So many problems can be cracked by reducing them to triangles that there is an entire mathematical discipline, trigonometry, concerned with the relationships between their angles and side lengths. Parker’s adventures on the applied side of trigonometry become, of necessity, something of a blooming, buzzing confusion, but his anecdotes are well judged and lead the reader seamlessly into quite complex territory. Ever wanted to know how Kathleen Lonsdale applied Fourier transforms to X-ray waves, making possible Rosalind Franklin’s work on DNA structure? Parker starts us off on that journey by wrapping a bit of paper around a cucumber and cutting it at a slant. Half a dozen pages later, we may not have the firmest grasp of what Parker calls the most incredible bit of maths most people have never heard of, but we do have a clear map of what we do not know.

Whether Parker’s garrulousness charms you or grates on you will be a matter of taste. I have a pious aversion to writers who feel the need to cheer their readers through complex material every five minutes. But it’s hard not to tap your foot to cheap music, and what could be cheaper than Parker’s assertion that introducing coordinates early on in a maths lesson “could be considered ‘putting Descartes before the course’”?

Parker has a fine old time with his material, and only a curmudgeon can fail to be charmed by his willingness to call Heron’s two-thousand-year-old formula for finding the area of a triangle “stupid” (he’s not wrong, neither) and the elongated pentagonal gyrocupolarotunda a “dumb shape”.

“For survival reasons, I must spread globally”

Reading Trippy by Ernesto Londoño for the Telegraph

Ernesto Londoño’s enviable reputation as a journalist was forged in the conflict zones of Iraq and Afghanistan. In 2017 he landed his dream job as the New York Times Brazil bureau chief, with a roving brief, talented and supportive colleagues, and a high-rise apartment in Rio de Janeiro.

When, not long after, he nearly-accidentally-on-purpose threw himself off his balcony, he knew he was in serious emotional trouble.

It was more than whimsy that led him to look for help at a psychedelic retreat in the Amazon hamlet of Mushu Inu, a place with no running water, where the shower facility consisted of a large tub guarded by a couple of tarantulas. He had seen what taking antidepressant medications had done for acquaintances in the US military (nothing good), and thought to write at first hand about what, in the US, has become an increasingly popular alternative therapy: drinking ayahuasca tea.

Ayahuasca is prepared by boiling chunks of an Amazonian vine called Banisteriopsis caapi with the leaves of a shrubby plant called Psychotria viridis. The leaves contain a psychoactive compound, and the vines stop the drinker from metabolising it too quickly. The experience that follows is, well, trippy.

By disrupting routine patterns of thought and memory processing, psychedelic trips offer depressed and traumatised people a reprieve from their obsessive thought patterns. They offer them a chance to recalibrate and reinterpret past experiences. How they do this is up to them, however, and this is why psychedelics are anything but a harmless recreational drug. It’s as possible to step out of a bad trip screaming psychotically at the trees as it is to emerge, Buddha-like, from a carefully guided psychedelic experience. The Yawanawá people of the Amazon, who have effectively become global ambassadors for the brew (which, incidentally, they’ve only been making for a few hundred years) make no bones about its harmful potential. The predominantly western organisers of ayahuasca-fuelled tourist retreats are rather less forthcoming.

Psychedelics promise revolutionary treatments for PTSD. In the US, pharmaceutical researchers funded by government are attempting to subtract all the whacky, enjoyable and humane elements of the ayahuasca experience, and thereby distil a kind of aspirin for war trauma. It’s a singularly dystopian project, out to erase the affect of atrocities in the minds of those who might, thanks to that very treatment, be increasingly inclined to perpetrate them.

On one ayahuasca webforum, meanwhile, the brew speaks to her counter-cultural acolytes. “If I don’t spread globally I will face extinction, similar to Humans,” a feminised ayahuasca cuppa proclaims. “For survival reasons, I must spread globally, while Humans must accept my sacred medicine to heal their afflicted soul.”

Londoño has drunk the brew, if not the Kool-Aid, and says his ayahuasca experiences saved, if not his life, then at the very least his capacity for happiness. He maintains a great affection for the romantics and idealists whom he depicts in pursuit, according to their different lights, of the good and the healthful in psychedelic experience.

His own survey leads him from psychedelic “bootcamps” in the rainforest to upscale clinics in Costa Rica tending to the global one per cent, to US “churches”, who couch therapy as religious experience so that they can import ayahuasca and get around the strictures of the DEA. The most startling sections, for me, dealt with Santo Daime, a syncretic Brazilian faith that contrives to combine ayahuasca with a proximal Catholic liturgy.

Trippy is told, as much as possible, in the first person, through anecdote and memoir. Seeing the perils and the promise of psychedelic experience play out in Londoño’s own mind, as he comes to terms over years with his own quite considerable personal traumas, is a privilege, though it brings with it moments of tedium, as though we were being expected to sit through someone’s gushing account of their cheese dreams. This — let’s call it the stupidity of seriousness — is a besetting tonal problem with the introspective method. William James fell foul of it in The Principles of Psychology of 1890, so it would be a bit rich of me to twit Londoño about it in 2024.

Still, it’s fair to point out, I think, that Londoño, an accomplished print journalist, is writing, day on day, for a readership of predominantly US liberals — surely the most purse-lipped and conservative readership on Earth. So maybe, with Trippy as our foundation, we should now seek out a looser, more gonzo treatment: one wild enough to handle the wholesale spiritual regearing promised by the psychedelics coming to a clinic, church, and holiday brochure near you.

 

“The most efficient conformity engines ever invented”

Reading The Anxious Generation by Jonathan Haidt for The Spectator, 30 March 2024

What’s not to like about a world in which youths are involved in fewer car accidents, drink less, and wrestle with fewer unplanned pregnancies?

Well, think about it: those kids might not be wiser; they might simply be afraid of everything. And what has got them so afraid? A little glass rectangle, “a portal in their pockets” that entices them into a world that’s “exciting, addictive, unstable and… unsuitable for children”.

So far, so paranoid — and there’s a delicious tang of the documentary-maker Adam Curtis about social psychologist Jonathan Haidt’s extraordinarily outspoken, extraordinarily well-evidenced diatribe against the creators of smartphone culture, men once hailed “as heroes, geniuses, and global benefactors who,” Haidt says, “like Prometheus, brought gifts from the gods to humanity.”

The technological geegaw Haidt holds responsible for the “great rewiring” of the brains of people born after 1995 is not, interestingly enough, the iPhone itself (first released in 2007) but its front-facing camera, released with the iPhone 4 in June 2010. Samsung added one to its Galaxy the same month. Instagram launched in the same year. Now users could curate on-line versions of themselves on the fly — and they do, incessantly. Maintaining an on-line self is a 24/7 job. The other day on Crystal Palace Parade I had to stop a pram from rolling into the street while the young mother vogued and pouted into her smartphone.

Anecdotes are one thing; evidence is another. The point of The Anxious Generation is not to present phone-related pathology as though it were a new idea, but rather to provide robust scientific evidence for what we’ve all come to assume is true: that there is a causal link (not just some modish dinner-party correlation) between phone culture and the ever more fragile mental state of our youth. “These companies,” Haidt says, “have rewired childhood and changed human development on an almost unimaginable scale.”

Haidt’s data are startling. Between 2010 and 2015, depression in teenage girls and boys became two and a half times more prevalent. From 2010 to 2020, the rate of self-harm among young adolescent girls nearly tripled. The book contains a great many bowel-loosening graphs, with titles like “High Psychological Distress, Nordic Nations” and “Alienation in School, Worldwide”. There’s one in particular I can’t get out of my head, showing the percentage of US students in 8th, 10th and 12th grade who said they were happy in themselves. Between 2010 and 2015 this “self-satisfaction score” falls off a cliff.

The Anxious Generation revises conclusions Haidt drew in 2018, while collaborating with the lawyer Greg Lukianoff on The Coddling of the American Mind. Subtitled “How good intentions and bad ideas are setting up a generation for failure”, that book argued that universities and other institutes of higher education (particularly in the US) were teaching habits of thinking so distorted, they were triggering depression and anxiety among their students. Why else would students themselves be demanding that colleges protect them from books and speakers that made them feel “unsafe”? Ideas that had caused little or no controversy in 2010 “were, by 2015, said to be harmful, dangerous, or traumatising,” Haidt remembers.

Coddling’s anti-safe-space, “spare the rod and spoil the child” argument had merit, but Haidt soon came to realise it didn’t begin to address the scale of the problem: “by 2017 it had become clear that the rise of depression and anxiety was happening in many countries, to adolescents of all educational levels, social classes and races.”

Why are people born after 1996 so — well — different? So much more anxious, so much more judgemental, so much more miserable? Phone culture is half of Haidt’s answer; the other is a broader argument about “safetyism”, which Haidt defines as “the well-intentioned and disastrous shift toward overprotecting children and restricting their autonomy in the ‘real world’.”

Boys suffer more from being shut in and overprotected. Girls suffer more from the way digital technologies monetise and weaponise peer hierarchies. Although the gender differences are interesting, it’s the sheer scale of harms depicted here that should galvanise us. Haidt’s suggested solutions are common sense and commonplace: stop punishing parents for letting their children have some autonomy. Allow children plenty of unstructured free play. Ban phones in school.

For Gen-Z, this all comes too late. Over-protection in the real world, coupled with an almost complete lack of protections in the virtual world, has consigned a generation of young minds to what is in essence a play-free environment. In the distributed, unspontaneous non-space of the digital device, every action is performed in order to achieve a prescribed goal. Every move is strategic. “Likes” and “comments”, “thumbs-up” and “thumbs-down” provide immediate real time metrics on the efficacy or otherwise of thousands of micro-decisions an hour, and even trivial mistakes bring heavy costs.

In a book of devastating observations, this one hit home very hard: that these black mirrors of ours are “the most efficient conformity engines ever invented”.

The Penguins of Venus

Reading Our Accidental Universe by Chris Lintott for the Telegraph, 8 March 2024

Phosphine — a molecule formed by one phosphorus atom and three atoms of hydrogen — is produced in bulk only (and for reasons that are obscure) in the stomachs of penguins. And yet something is producing phosphine high in the clouds of Venus — and at just the height where conditions are most like those on the surface of the Earth. Unable to land (unless they wanted to be squished and fried under Venus’s considerable atmosphere), and armoured against a ferociously acidic atmosphere, the penguins of Venus haunt the dreams of every stargazer with an ounce of poetry in their soul.

Chris Lintott is definitely one of these. An astrophysicist at Oxford University and presenter of BBC’s The Sky at Night, Lintott also co-founded Galaxy Zoo, an online crowdsourcing project where we can volunteer our time, classifying previously unseen galaxies. The world might be bigger than we can comprehend and wilder than we can understand, but Lintott reckons our species’ efforts at understanding are not so shoddy, and can and should be wildly shared.

Our Accidental Universe is his bid to seize the baton carried by great popularisers like Carl Sagan and Patrick Moore: it’s an anecdotal tour of the universe, glimpsed through eccentric observations, tantalising mysteries, and discoveries stumbled upon by happenstance.

Lintott considers the possibilities for life outside the Earth, contemplates rocks visiting from outside the solar system, peers at the night sky with eyes tuned to radio and microwaves, and shakes a fist at the primordial particle fog that will forever obscure his view of the universe’s first 380,000 years.

Imagine if we lived in some globular star cluster: that spectacular night sky of ours would offer no visible hint of the universe beyond. We might very well imagine our neighbouring stars, so near and so bright, were the sum total of creation — and would get one hell of a shock once we got around to radio astronomy.

Even easier to imagine — given the sheer amount of liquid water that’s been detected already just within our own solar system, brought above freezing by tidal effects on moons orbiting gas giant planets — we might have evolved in some lightless ocean, protected from space by a kilometres-thick icecap. What would we know of the universe then? Whatever goes on in the waters of moons like icy Enceladus, it’s unlikely to involve much astronomy.

As luck would have it, though, growing up on land, on Earth, has given us a relatively unobscured view of the entire universe. Once in 1995, so as to demonstrate a fix to its wonky optics, the operators of the Hubble Space Telescope pointed their pride and joy at (apparently) nothing, and got back a picture chock-full of infant galaxies.

Science is a push-me pull-you affair in which observation inspires theory, and theory directs further observation. Right now, the night sky is turning out to be much more various than we expected. The generalised “laws” we evolved in the last century to explain planet formation and the evolution of galaxies aren’t majorly wrong; but they are being superseded by the carnival of weird, wonderful, exceptional, and even, yes, accidental discoveries we’re making, using equipment unimaginable to an earlier generation. Several techniques are discussed here, but the upcoming Square Kilometre Array (SKA) takes some beating. Sprouting across southern Africa and western Australia, this distributed radio telescope, its components strung together by supercomputers, will, says Lintott, “be sensitive to airport radar working on any planet within a few hundred light-years”.

Observing the night sky with such tools, Lintott says, will be “less like an exercise in cerebral theoretical physics and more like reading history.”

Charming fantasies of space penguins aside (and “never say never” is my motto), there’s terror and awe to be had in Lintott’s little book. We scan the night sky and can’t help but wonder if there is more life out there — and yet we have barely begun to understand what life actually is. Lintott’s descriptions of conditions on the Saturnian moon Titan — where tennis ball-sized drops of methane fall from orange clouds — suggest a chemistry so complex that reactions may be able to reproduce and evolve. “Is this chemical complexity ‘life’?” he asks. “I don’t know.”

Neither do I. And if they ever send me on some First Contact mission amid the stars, I’m taking a bucket of fish.

“Spectacular, ridiculous, experimental things”

Reading The Tomb of the Mili Mongga by Samuel Turvey for New Scientist, 6 March 2024

Pity the plight of evolutionary biologist Samuel Turvey, whose anecdotal accounts of fossil hunting in a cave near the village of Mahaniwa, on the Indonesian island of Sumba, include the close attentions of “huge tail-less whip scorpions with sickening flattened bodies, large spiny grabbing mouthparts, and grotesquely thin and elongated legs”.

Why was a conservation biologist hunting for fossils? Turvey’s answer has to do with the dual evolutionary nature of islands.

On the one hand, says Turvey, “life does spectacular, ridiculous, experimental things on islands, making them endlessly fascinating to students of evolution.”

New Caledonia, a fragment of ancient Gondwana, boasts bizarre aquatic conifers and even shrubby parasitic conifers without any roots. Madagascar hosts a lemur called the aye-aye, the nearest thing primates have to a woodpecker. But my personal favourite, in a book full of wonders, pithily described, is the now extinct cave goat Myotragus from predator-free Majorca and Menorca. Relieved of the need to watch its back, it evolved front-facing eyes, giving it the disconcerting appearance of a person wearing a goat mask.

But there is a darker side to island life: it’s incredibly vulnerable. The biggest killers by far are visitations of fast-evolving diseases. European exploration and colonisation between the 16th and 19th centuries decimated the human populations of Pacific archipelagos, as a first wave of dysentery was followed by smallpox, measles and influenza. Animals brought on the trip proved almost as catastrophic to the environment. Contrary to cliche, westerners on the island of Mauritius did not hunt the dodo to extinction; rats did. And let’s not forget Tibbles, the cat that’s said to have single-handedly (pawedly?) wiped out the Stephens Island wren, a tiny flightless songbird, in 1894.

There are lessons to be learned here, of course, but Turvey’s at pains to point out that islands are accidents waiting to happen. Islands are by their very nature sites of extinction. They may be treasure-troves of evolutionary innovation, but most of their treasures are already extinct. As for conserving their wildlife, Turvey wonders how, without a good understanding of the local fossil record, “we even define what constitutes a ‘natural’ ecosystem, or an objective restoration target to aim for”.

A tale of islands and their ephemeral wonders would alone have made for an arresting book, but Turvey, a more-than-able raconteur, can’t resist spicing up his account with tales of Sumba’s resident mythical wild-men, the “Mili Mongga”, who, it is said, used to build walls and help out with the ploughing — until their habit of stealing food got them all killed by the infuriated human population.

Why should we pay attention to such tales? Well, Sumba is only about 50 kilometres south of Flores, where a previously unknown (and, at just over a metre tall, ridiculously small) hominin was unearthed by an Australian team in 2003.

If there were hobbits on Flores, might there have been giants on Sumba? And might surviving mili mongga still be lurking in the forests?

Turvey uses the local island legends to launch fascinating forays into the island’s history and anthropology, to explain why large animals, fetched up on islands, grow smaller, while small animals grow larger, and also to have an inordinate amount of fun, largely at his own expense.

When one villager describes a mili mongga skull as being two feet long, and its teeth “as long as a finger”, “I got the feeling,” says Turvey, “that there might now be some exaggeration going on.” Never say never, though: soon Turvey and his long-suffering team are following gamely along on missions up crags and past crocodile-infested swamps and into holes in the ground — sometimes where other visitors, from other villages, habitually go to relieve themselves.

“There was the cave that some village kids told us contained a human skull, which turned out to be a rotten coconut under some bat dung,” Turvey recalls. “There was the cave that was sacred, which seemed to mean that no one could remember exactly where it was.”

Turvey’s more serious explorations unearthed two new mammal genera (both ancestral forms of rat). It goes without saying, I should think, that they did not bring back evidence of a new hominin. But what’s not to enjoy about a tall tale, especially when it’s used to paint such a vivid and insightful portrait of a land and its people?

“We cannot save ourselves”

Interviewing Cixin Liu for The Telegraph, 29 February 2024

Chinese writer Cixin Liu steeps his science fiction in disaster and misfortune, even as he insists he’s just playing around with ideas. His seven novels and a clutch of short stories and articles (soon to be collected in a new English translation, A View from the Stars) have made him world-famous. His best-known novel, The Three-Body Problem, won the Hugo, the nearest thing science fiction has to a heavy-hitting prize, in 2015. Closer to home, he’s won the Galaxy Award, China’s most prestigious literary science-fiction award, nine times. A 2019 film adaptation of his novella “The Wandering Earth” (in which we have to propel the planet clear of a swelling sun) earned nearly half a billion dollars in the first 10 days of its release. Meanwhile The Three-Body Problem and its two sequels have sold more than eight million copies worldwide.

Now they’re being adapted for the screen, and not for the first time: the first two adaptations were domestic Chinese efforts. A 2015 film was suspended during production (“No-one here had experience of productions of this scale,” says Liu, speaking over a video link from a room piled with books.) The more recent TV effort is, from what I’ve seen of it, jolly good, though it only scratches the surface of the first book.

Now streaming service Netflix is bringing Liu’s whole trilogy to a global audience. Clean behind your sofa, because you’re going to need somewhere to hide from an alien visitation quite unlike any other.

For some of us, that invasion will come almost as a relief. So many English-speaking sf writers these days spend their time bending over backwards, offering “design solutions” to real-life planetary crises, and especially to climate change. They would have you believe that science fiction is good for you.

Liu, a bona fide computer engineer in his mid-fifties, is immune to such virtue signalling. “From a technical perspective, sf cannot really help the world,” he says. “Science fiction is ephemeral, because we build it on ideas in science and technology that are always changing and improving. I suppose we might inspire people a little.”

Western media outlets tend to cast Liu — a domestic celebrity with a global reputation and a fantastic US sales record — as a put-upon and presumably reluctant spokesperson for the Chinese Communist Party. The Liu I’m speaking to is garrulous, well-read, iconoclastic, and eager. (It’s his idea that we end up speaking for nearly an hour more than scheduled.) He’s hard-headed about human frailty and global Realpolitik, and he likes shocking his audience. He believes in progress, in technology, and, yes — get ready to clutch your pearls — he believes in his country. But we’ll get to that.

We promised you disaster and misfortune. In The Three-Body Problem, the great Trisolaran Fleet has already set sail from its impossibly inhospitable homeworld orbiting three suns. (What does not kill you makes you stronger, and their madly unpredictable environment has made the Trisolarans very strong indeed.) They’ll arrive in 450 years or so — more than enough time, you would think, for us to develop technology advanced enough to repel them. That is why the Trisolarans have sent two super-intelligent proton-sized super-computers at near-light speed to Earth, to mess with our minds, muddle our reality, and drive us into self-hatred and despair. Only science can save us. Maybe.

The forthcoming Netflix adaptation is produced by Game of Thrones’s David Benioff and D.B. Weiss and True Blood’s Alexander Woo. In covering all three books, it will need to wrap itself around a conflict that lasts millennia, and realistically its characters won’t be able to live long enough to witness more than fragments of the action. The parallel with the downright deathy Game of Thrones is clear: “I watched Game of Thrones before agreeing to the adaptation,” says Liu. “I found it overwhelming — quite shocking, but in a positive way.”

By the end of its run, Game of Thrones had become as solemn as an owl, and that approach won’t work for The Three-Body Problem, which leavens its cosmic pessimism (a universe full of silent, hostile aliens, stalking their prey among the stars) with long, delightful episodes of sheer goofiness — including one about a miles-wide Trisolaran computer chip made up entirely of people in uniform, marching about, galloping up and down, frantically waving flags…

A computer chip the size of a town! A nine-dimensional supercomputer the size of a proton! How on Earth does Liu build engaging stories from such baubles? Well, says Liu, you need a particular kind of audience — one for whom anything seems possible.

“China’s developing really fast, and people are confronting opportunities and challenges that make them think about the future in a wildly imaginative and speculative way,” he explains. “When China’s pace of development slows, its science fiction will change. It’ll become more about people and their everyday experiences. It’ll become more about economics and politics, less about physics and astronomy. The same has already happened to western sf.”

Of course, it’s a moot point whether anything at all will be written by then. Liu reckons that within a generation or two, artificial intelligence will take care of all our entertainment needs. “The writers in Hollywood didn’t strike over nothing,” he observes. “All machine-made entertainment requires, alongside a few likely breakthroughs, is ever more data about what people write and consume and enjoy.” Liu, who claims to have retired and to have no skin in this game any more, points to a recent Chinese effort, the AI-authored novel Land of Memories, which won second prize in a regional sf competition. “I think I’m the final generation of writers who will create novels based purely on their own thinking, without the aid of artificial intelligence,” he says. “The next generation will use AI as an always-on assistant. The generation after that won’t write.”

Perhaps he’s being mischievous (a strong and ever-present possibility). He may just be spinning some grand-sounding principle out of his own charmingly modest self-estimate. “I’m glad people like my work,” he says, “but I doubt I’ll be remembered even ten years from now. I’ve not written very much. And the imagination I’ve been able to bring to bear on my work is not exceptional.” His list of influences is long. His father bought him Wells and Verne in translation. Much else, including Kurt Vonnegut and Ray Bradbury, required translating word for word with a dictionary. “As an sf writer, I’m optimistic about our future,” Liu says. “The resources in our solar system alone can feed about 100,000 planet Earths. Our future is potentially limitless — even within our current neighbourhood.”

Wrapping our heads around the scales involved is tricky, though. “The efforts countries are taking now to get off-world are definitely meaningful,” he says, “but they’re not very realistic. We have big ideas, and Elon Musk has some exciting propulsion technology, but the economic base for space exploration just isn’t there. And this matters, because visiting neighbouring planets is a huge endeavour, one that makes the Apollo missions of the Sixties and Seventies look like a fast train ride.”

Underneath such measured optimism lurks a pessimistic view of our future on Earth. “More and more people are getting to the point where they’re happy with what they’ve got,” he complains. “They’re comfortable. They don’t want to make any more progress. They don’t want to push any harder. And yet the Earth is pretty messed up. If we don’t get into space, soon we’re not going to have anywhere to live at all.”

The trouble with writing science fiction is that everyone expects you to have an instant answer to everything. Back in June 2019, a New Yorker interviewer asked him what he thought of the Uighurs (he replied: a bunch of terrorists) and their treatment at the hands of the Chinese government (he replied: firm but fair). The following year some Republican senators in the US tried to shame Netflix into cancelling The Three-Body Problem. Netflix pointed out (with some force) that the show was Benioff and Weiss and Woo’s baby, not Liu’s. A more precious writer might have taken offence, but Liu thinks Netflix’s response was spot-on. “Neither Netflix nor I wanted to think about these issues together,” he says.

And it doesn’t do much good to spin his expression of mainstream public opinion in China (however much we deplore it) into some specious “parroting [of] dangerous CCP propaganda”. The Chinese state is monolithic, but it’s not that monolithic — witness the popular success of Liu’s own The Three-Body Problem, in which a girl sees her father beaten to death by a fourteen-year-old Red Guard during the Cultural Revolution, grows embittered during what she expects will be a lifetime’s state imprisonment, and goes on to betray the entire human race, telling the alien invaders, “We cannot save ourselves.”

Meanwhile, Liu has learned to be conciliatory. In a nod to Steven Pinker’s 2011 book The Better Angels of Our Nature, he points out that while wars continue around the globe, the bloodshed generated by warfare has been declining for decades. He imagines a world of ever-growing moderation — even the eventual melting away of the nation state.

When needled, he goes so far as to be realistic: “No system suits all. Governments are shaped by history, culture, the economy — it’s pointless to argue that one system is better than another. The best you can hope for is that they each moderate whatever excesses they throw up. People are not and never have been free to do anything they want, and people’s idea of what constitutes freedom changes, depending on what emergency they’re having to handle.”

And our biggest emergency right now? Liu picks the rise of artificial intelligence, not because our prospects are so obviously dismal (though killer robots are a worry), but because mismanaging AI would be humanity’s biggest own goal ever: destroyed by the very technology that could have taken us to the stars!

Ungoverned AI could quite easily drive a generation to rebel against technology itself. “AI has been taking over lots of people’s jobs, and these aren’t simple jobs, these are what highly educated people expected to spend lifetimes getting good at. The employment rate in China isn’t so good right now. Couple that with badly managed roll-outs of AI, and you’ve got frustration and chaos and people wanting to destroy the machines, just as they did at the beginning of the industrial revolution.”

Once again we find ourselves in a dark place. But then, what did you expect from a science fiction writer? They sparkle best in the dark. And for those who don’t yet know his work, Liu is pleased, so far, with Netflix’s version of his signature tale of interstellar terror, even if its westernisation does baffle him at times.

“All these characters of mine that were scientists and engineers,” he sighs. “They’re all politicians now. What’s that about?”

Making time for mistakes

Reading In the Long Run: The future as a political idea by Jonathan White for the Financial Times, 2 February 2024

If you believe there really is no time for political mistakes on some crucial issue — climate change, say, or the threat of nuclear annihilation — then why should you accept a leader you did not vote for, or endorse an election result you disagree with? Jonathan White, a political sociologist at the London School of Economics, has written a short book about a coming crisis that democratic politics, he argues, cannot possibly accommodate: the world’s most technologically advanced democracies are losing their faith in the future.

This is not a new thought. In her 2007 book The Shock Doctrine, Naomi Klein predicted how governments geared to crisis management would turn ever more dictatorial as their citizens grew ever more distracted and malleable. In the Long Run White is less alarmist but more pessimistic, showing how liberal democracy blossoms, matures, and ultimately shrivels through the way it imagines its own future. Can it survive in a world where high-school students are saying things like ‘I don’t understand why I should be in school if the world is burning’?

A broken constitution, an electorate that’s ignorant or misguided, institutions that are moribund and full of the same old faces, year after year — these are not nearly the serious problems for democracy they appear to be, says White: none of them undermines the ideal, so long as we believe that there’s a process of self-correction going on.

Democracy is predicated on an idea of improvability. It is, says White, “a future-oriented form, always necessarily unfinished”. The health of a democracy lies not in what it thinks of itself now, but in what hopes it has for its future. A few pages on France’s Third Republic — a democratic experiment that, from the latter part of the 19th century to the first decades of the 20th, lurched through countless crises and 103 separate cabinets to become the parliamentary triumph of its age — would have made a wonderful digression here, but this is not White’s method. In the Long Run relies more on pithy argument than on historical colour, offering us an exhilarating if sometimes dizzyingly abstract historical fly-through of the democratic experiment.

Democracy arose as an idea in the Enlightenment, via the evolution of literary Utopias. White pays special attention to Louis-Sébastien Mercier’s 1771 novel The Year 2440: A Dream if Ever There Was One, for dreaming up institutions that are not just someone’s good idea, but actual extensions of the people’s will.

Operating increasingly industrialised democracies over the course of the 19th century created levels of technocratic management that inevitably got in the way of the popular will. When that process came to a crisis in the early years of the 20th century, much of Europe faced a choice between command-and-control totalitarianism and berserk fascist populism.

And then fascism, in its determination to remain responsive and intuitive to the people’s will, evolved into Nazism, “an ideology that was always seeking to shrug itself off,” White remarks; “an -ism that could affirm nothing stable, even about itself”. Its disastrous legacy spurred post-war efforts to constrain the future once more, “subordinating politics to economics in the name of stability.” With this insightful flourish, the reader is sent reeling into the maw of the Cold War decades, which turned politics into a science and turned our tomorrows into classifiable resources and tools of competitive advantage.

White writes well about 20th-century ideologies and their endlessly postponed utopias. The blandishments of Stalin and Mao and other socialist dictators hardly need glossing. Mind you, capitalism itself is just as anchored in the notion of jam tomorrow: what else but a faith in the infinitely improvable future could have us replacing our perfectly serviceable smartphones, year after year after year?

And so to the present: has runaway consumerism now brought us to the brink of annihilation, as the Greta Thunbergs of this world claim? For White’s purposes here, the truth of this claim matters less than its effect. Given climate change, spiralling inequality, and the spectres of AI-driven obsolescence, worsening pandemics and even nuclear annihilation, who really believes tomorrow will look anything like today?

How might democracy survive its own obsession with catastrophe? It is essential, White says, “not to lose sight of the more distant horizons on which progressive interventions depend.” But this is less a serious solution, more an act of denial. White may not want to grasp the nettle, but his readers surely will: by his logic (and it seems ungainsayable), the longer the present moment lasts, the worse it’ll be for democracy. He may not have meant this, but White has written a very frightening book.

This is not how science is done!

Reading J. Craig Venter & David Ewing Duncan’s Microlands for the Telegraph

Scientists! Are you having fun? Then stop it. Be as solemn as an owl, or else. Your career depends on it. Discoveries are all very well for the young, but dogma is what gets you tenure. Any truths you uncover must be allowed to ossify through constant poker-faced repetition. And Heaven forbid that before your death, a new idea comes along, forcing you to recalculate and re-envision your life’s work!

Above all, do not read Microlands. Do not be captivated by its adventures, foreign places and radical ideas. This is not how science is done!

Though his book edges a little too close to corporate history to be particularly memorable, it is clear that science journalist David Duncan has had an inordinate amount of fun co-writing this account of ocean-going explorations, led by biotechnologist Craig Venter between 2003 and 2018, into the microbiome of the Earth’s oceans.

While it explains with admirable clarity the science and technology involved in this global ocean sampling expedition, Microlands also serves as Duncan’s paean to Venter himself, who in 2000 disrupted the gene sequencing industry before it was even a thing by quickly and cheaply sequencing the human genome. Eight years later he was sailing around the world on a mission to sequence the genome of the entire planet — a classic bit of Venter hyperbole, this, “almost embarrassingly grandiose” according to Duncan — but as Duncan says, “Did he really mean it literally? Does it matter?”

It ought to matter. Duncan is too experienced a journalist to buy into the cliche of Venter the maverick scientist. According to Duncan, his subject is less a gifted visionary than a supreme and belligerent tactician, who advances his science and his career by knowing whom to offend. He’s an entrepreneur, not an academic, and if his science was off by even a little, his ideas about the microbial underpinnings of life on Earth wouldn’t have lasted (and wouldn’t have deserved to last) five minutes.

But here’s the thing: Venter’s ideas have been proved right, again and again. In the late 1990s he conceived a technology to read a long DNA sequence: first it breaks the string into readable pieces, then, by spotting overlaps, it strings the pieces back into the right order. A decade later he realised the same machinery could handle multiple DNA strands — it would simply deliver several results instead of just one. And if it could produce two or three readings, why not hundreds? Why not thousands? Why not put buckets of seawater through a sieve and sequence the microbiome of entire oceans?
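To make the idea concrete, here is a toy sketch of overlap assembly in Python. It is emphatically not Venter’s pipeline (real assemblers cope with sequencing errors, repeats and billions of fragments); it simply illustrates “break it up, spot the overlaps, stitch it back together”, using an invented fragment set.

```python
# Toy greedy overlap assembler: repeatedly merge the two fragments that
# share the longest suffix/prefix overlap. Purely illustrative.

def overlap(a, b, min_len=3):
    """Length of the longest suffix of `a` matching a prefix of `b`."""
    start = 0
    while True:
        start = a.find(b[:min_len], start)
        if start == -1:
            return 0
        if b.startswith(a[start:]):
            return len(a) - start
        start += 1

def greedy_assemble(reads):
    """Merge the best-overlapping pair until nothing overlaps any more."""
    reads = list(reads)
    while len(reads) > 1:
        best_len, best_i, best_j = 0, None, None
        for i, a in enumerate(reads):
            for j, b in enumerate(reads):
                if i != j:
                    olen = overlap(a, b)
                    if olen > best_len:
                        best_len, best_i, best_j = olen, i, j
        if best_len == 0:
            break  # no overlaps left
        merged = reads[best_i] + reads[best_j][best_len:]
        reads = [r for k, r in enumerate(reads) if k not in (best_i, best_j)]
        reads.append(merged)
    return "".join(reads)

# An invented "genome" chopped into overlapping fragments, then reassembled.
fragments = ["GATTACAGG", "ACAGGTTCA", "GTTCAGCGA"]
print(greedy_assemble(fragments))  # GATTACAGGTTCAGCGA
```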

And — this is what really annoys Venter’s critics — why not have some fun in the process? Why not gather water samples while sailing around the world on a cutting-edge sailboat, “a hundred-foot-long sliver of fiberglass and Kevlar”, and visiting some of the most beautiful and out-of-the-way places on Earth?

It is amusing and inspiring to learn how business acumen has helped Venter to a career more glamorous than those enjoyed by his peers. More important is the way in which his ocean sampling project has changed our ideas of how biology is done.

For over a century, biology has been evolving from a descriptive science into an experimental one. Steadily, the study of living things has given ground to efforts to unpick the laws of life.

But Venter’s project has uncovered so much diversity in aquatic microbial worlds that the standard taxonomy of kingdom, phylum, and species breaks down in the effort to capture its richness. At the microbial scale, every tiny thing reveals itself to be a special and unique snowflake. Genes pass promiscuously from bacterium to bacterium, ferried there very often by viruses, which survive longer the more energy-producing powers they can “download” into their host cell. We already know microbial evolution takes place on a scale of hours. Now it turns out the mechanisms of that evolution are so various and plastic, we can barely formalise them. “Laws of biology” may go some way to explain creatures as big as ourselves, but at the scale of bacteria and viruses, archaea and protozoa, wild innovation holds sway.

The field is simply overwhelmed by the quantity of data Venter’s project has generated. Discovering whether microbes follow fundamental ecological ‘laws’ at a planetary scale will likely require massive, monolithic cross-environment surveys — and many further adventure-travel vacations posing as expeditions by provoking tycoons who love to sail.

Here’s the capping irony, and Duncan does it proud — that Venter, the arch-entrepreneur of cutting-edge genetic science, is turning biology back into a descriptive science. We are just going to have to go out and observe what is there — and, says Venter, “that’s probably where biology will be for the next century at least.”

Which way’s up?

Reading Our Moon: A human history by Rebecca Boyle for the Telegraph, 4 January 2024

If people on the Moon weigh only one-sixth as much as they do on Earth, why did so many Apollo astronauts fall flat on their faces the moment they got there? They all managed to get up again, so their spacesuits couldn’t have been that cumbersome. The trouble, science writer Rebecca Boyle explains in Our Moon, was that there wasn’t enough gravity to keep the astronauts orientated. Even with the horizon as a visual cue, it’s easy to lose track of which way’s up.

Boyle lays out – in a manner that reminded me of Oliver Morton and his daunting 2019 book, The Moon: A History for the Future – all the ways in which our natural satellite, once you reach it, is not a “place” at all — at least, not in the earthly sense. Its horizon is not where you think it is. Its hills could be mere hummocks or as tall as Mount Fuji: you can’t tell from looking. Strangest of all, says Boyle, “time seems to stop up there. It proceeds according to the rhythm of your heart, and maybe the beeping of your spacesuit’s life-support system, but if you could just stand there for an hour or two in silence, you would notice nothing about the passage of time.”

Some 15 to 20 per cent of us today doubt NASA astronauts ever landed there. This tiresome contrarian affectation has this, at least, to be said for it: that it lets us elude that sense of creeping post-Apollo anticlimax, so well articulated by Michael Collins – who orbited the Moon but didn’t walk on it – when he compared it to a “withered, sun-seared peach pit”. “Its invitation is monotonous,” he wrote in his 1974 memoir, “and meant for geologists only.” Boyle puts a positive spin on the geology, calling the Moon “Earth’s biographer, its first chronicler, and its most thorough accountant.” Our Moon is a pacey, anecdotal account of how the Moon has shaped our planet, our history and our understanding of both.

Necessarily, this means that Boyle spends much of her book side-eyeing her ostensible subject. Never mind the belligerent rock itself – “like Dresden in May or Hiroshima in August”, according to the columnist Milton Meyer – the Moon’s mass, its angular momentum and its path through space dominate most chapters here. Without a massive moon churning it up over 4.5 billion years, the Earth would by now be geologically senescent, and whatever nutrients its internal mechanics generated would be lying undisturbed on the seafloor.

Not that there would be much, in that case, that needed nutrition. Without the Moon to carry so much of the Earth-Moon system’s angular momentum, Boyle explains, gravitational interference from Jupiter “would push Earth around like a playground bully”, making life here, even if it arose, a temporary phenomenon. As it is, the Moon stirs the Earth’s core and mantle, and keeps its interior sizzling. It whips the oceans into a nutritious broth. It dishes up fish into little tidal pools, where they evolve (or evolved, rather: this only happened once) into lobe-finned fish, then lungfish, then amphibians, then – by and by – us.

The more self-evidently human part of Boyle’s “human history” begins in Aberdeenshire, where Warren Field’s 10,000-year-old pits – a sort of proto-Stonehenge in reverse – are a timepiece, enabling the earliest farmers to adjust and reset their lunar calendars. These pits are the earliest astronomical calendar we know of, but not the most spectacular. Boyle propels us enthusiastically from the Berlin Gold Hat – an astronomical calculator-cum-priestly headpiece from the Bronze Age – to the tale of Enheduanna, the high priestess who used hymns to Moon gods to bind the city-states of 3rd-millennium BC Sumeria into the world’s first empire. And we go from there, via many a fascinating byway, to the Greek philosopher Anaxagoras, whose explanation of moonlight as mere reflected sunlight ought, you would think, to have punctured the Moon’s ritual importance.

But the Moon is a trickster, and its emotional influence is not so easily expunged. A century or so later, Aristotle conjectured that the brain’s high water content made it susceptible to the phases of the Moon. This, for the longest while, was (and for some modern fans of astrology, still is) as good an explanation as any for the waxing and waning of our manias and melancholies.

Thrown back at last upon the Moon itself, the brute and awkward fact of it, Boyle asks: “Why did we end up with a huge moon, one-fourth of Earth’s own heft? What happened in that cataclysm that ended up in a paired system of worlds, one dry and completely dead, and one drenched in water and life?” Answering this lot practically demands a book of its own. Obviously Boyle can’t be expected to do everything, but I would have liked her to pay more attention to lunar craters, whose perfect circularity confused generations of astronomers. (For this reason alone, James L Powell’s recent book Unlocking the Moon’s Secrets makes an excellent companion to Boyle’s more generalist account.)

Boyle brings her account to a climax with the appearance of Theia, a conjectural, but increasingly well-evidenced, protoplanet, about the size of Mars, whose collision with the early Earth almost vaporised both planets and threw off the material that accreted into the Moon. Our Moon is superb: as much a feat of imagination as it is a work of globe-trotting scholarship. Given the sheer strangeness of the Moon’s creation story, it will surely inspire its readers to dig deeper.