More believable than the triumph

Visiting In Event of Moon Disaster at the Sainsbury Centre, University of East Anglia, for the Telegraph, 16 February 2024

20:05 GMT on 20 July 1969: astronauts Neil Armstrong and Buzz Aldrin are aboard Apollo 11’s Lunar Module, dropping steadily towards the lunar surface in humankind’s first attempt to visit another world.

“Drifting to the right a little,” Buzz remarks — and then an alarm goes off, and then another, and another, until at last the transmission breaks down.

The next thing we see is a desk set in front of a blue curtain, and flanked by flags: the Stars and Stripes, and the Presidential seal. Richard Nixon, the US President, takes his seat and catches the eye of figures hovering off-screen: is everything ready?

And so he begins; it’s a speech no one can or will forget. It was written by his speechwriter, William Safire, as a contingency, in case Buzz and Neil landed on the Moon in a way that left them alive but doomed, stranded without hope of rescue in the Sea of Tranquility.

“These brave men… know that there is no hope for their recovery.” Nixon swallows hard. “But they also know that there is hope for Mankind in their sacrifice.”

From 17 February, Richard Nixon’s speech will play to visitors to the Sainsbury Centre in Norwich. They will watch it from the comfort of a 1960s-era sofa, in a living room decked out in such a way as to transport them back to that day, in July 1969, when two heroes found themselves doomed and alone and sure to die on the Moon.

Confronted with Nixon struggling to control his emotions on a period TV, they may well ask themselves if what they are seeing is real. The props are real, and so is the speech, marking and mourning the death of two American heroes. Richard Nixon is real, or as real as anyone can be on TV. His voice and gestures are his own (albeit — and we’ll come to this in a moment — strung together by generative computer algorithms).

Will anyone be fooled?

Not me. I can remember Apollo 11’s successful landing, and the crew’s triumphant return to Earth less than a week later, on 24 July. But, hang on — what, exactly, do I remember? I was two. If my parents had told me, over and over, that they had sat me down in front of TV coverage of the Kennedy assassination, I would probably have come to believe that, too. Memory is unreliable, and people are suggestible.

Jago Cooper includes the installation In Event of Moon Disaster in the Sainsbury Centre’s exhibition “What Is Truth?”. Cooper, who directs the centre, wasn’t even born when Apollo 11 rose from the launchpad. Neither were the two filmmakers, Halsey Burgund and Francesca Panetta, who won a 2021 Emmy for In Event of Moon Disaster in the category of Interactive Media Documentary. The bottom line here seems to be: the past exists only because we trust what others say about it.

Other exhibits in the “What Is Truth?” season will come at the same territory from different angles. There are artworks about time and artworks about identity. In May, an exhibition entitled The Camera Never Lies will use war photography from a private collection, The Incite Project, to reveal how a few handfuls of images have shaped our narratives of conflict. This is the other thing to remember, as we contemplate a world awash with deepfakes and avatars: the truth has always been up for grabs.

Sound artist Halsey Burgund and artist-technologist Francesca Panetta recruited experts in Israel and Ukraine to help realise In Event of Moon Disaster. Actor Lewis Wheeler spent days in a studio, performing Nixon’s speech; the President’s face, posture and mannerisms were assembled from archive footage of a speech about Vietnam.

President Nixon’s counterfactual TV eulogy was produced by the MIT Center for Advanced Virtuality to highlight the malleability of digital images. It’s been doing the rounds of art galleries and tech websites since 2019, and times have moved on to some degree. Utter the word “deepfake” today and you’re less likely to conjure up images of a devastated Richard Nixon than gossip about those pornographic deepfake images of Taylor Swift, viewed 27 million times in 19 hours when they were circulated this January on Twitter.

No-one imagines for a second that Swift had anything to do with them, of course, so let’s be positive here: MIT’s message about not believing everything you see is getting through.

As a film about deepfakes, In Event of Moon Disaster is strangely reassuring. It’s a work of genuine creative brilliance. It’s playful: we feel warmer towards Richard Nixon in this difficult fictional moment than we probably ever felt about him in life. It’s educational: the speech, though it never had to be delivered (thank God), is real enough, an historical document that reveals how much was at stake on that day. And in a twisted way, the film is immensely respectful, singing the praises of extraordinary men in terms only tragedy can adequately articulate.

As a film about the Moon, though, In Event of Moon Disaster is a very different kettle of fish and frankly disturbing. You can’t help but feel, having watched it, that Burgund and Panetta’s synthetic moon disaster is more believable than Apollo’s actual, historical triumph.

The novelist Norman Mailer observed early on that “in another couple of years there will be people arguing in bars about whether anyone even went to the Moon.” And so it came to pass: claims that the moon landings were fake began the moment the Apollo missions ended in 1972.

The show’s curator Jago Cooper has a theory about this: “The Moon is such a weird bloody thing,” he says. “The idea that we merely pretended to walk about there is more believable than what actually happened. That’s the thing about our relationship with what we’re told: it has to be believable within our lived experience, or we start driving wedges into it that undermine its credibility.”

This raises a nasty possibility: that the more enormous our adventures, the less likely we are to believe them; and the crazier our world, the less attention we’ll pay to it. “Humankind cannot bear very much reality,” said TS Eliot, and maybe we’re beginning to understand why.

For a start, we cannot bear too much information. The more we’re told about the world, the more we search for things that are familiar. In an essay accompanying the exhibition, curator Paul Luckraft finds us in thrall to confirmation bias “because we can’t see what’s new in the dizzying amount of text, image, video and audio fragments available to us.”

The deluge of information brought about by digital culture is already being weaponised — witness Trump’s former chief strategist Steve Bannon, who observed in 2018: “The real opposition is the media. And the way to deal with them is to flood the zone with shit.”

Even more disturbing: the world of shifting appearances ushered in by Bannon, Trump, Putin et al. might be the saving of us. In a recent book about the future of nuclear warfare, Deterrence under Uncertainty, RAND policy researcher Edward Geist conjures up a likely media-saturated future in which we all know full well that appearances are deceptive, but no-one has the faintest idea what is actually going on. Belligerents in such a world would never have to fire a shot in anger, says Geist; they would merely have to persuade their enemies that the values ranged against them were better than their own.

“Tricky Dick” Nixon would flourish in such a hyper-paranoid world, but then, so might we all. Imagine that perpetual peace is ours for the taking — so long as we abandon the faith in facts that put men on the Moon!

Fifty years ago you’d have struggled to find anyone casting doubt on NASA’s achievement that day in July 1969. Fifty years later, a YouGov poll found sixteen per cent of the British public believed the Moon landing most likely never happened.

Deepfakes themselves aren’t the cause of such incredulity, but they have the potential to exacerbate it immeasurably — and this, says Halsey Burgund, is why he and Francesca Panetta were inspired to make In Event of Moon Disaster. “The hope of the project is to provide some simple awareness of this kind of technology, its ubiquity and out-there-ness,” he explains. “If we’ve made an aesthetically satisfying and emotional piece, so much the better — it’ll help people internalise the challenges facing us right now.” Though bullish in defence of the technology’s artistic possibilities, Burgund concedes that the harms it can wreak are real, and can be distributed at scale. (Ask Taylor Swift.) “It’s not as though intelligent people aren’t addressing these problems,” Burgund says. “But it takes a lot of time — and society can’t change that quickly.”

Infectious architecture

Visiting Small Spaces in the City at ROCA London Gallery for New Scientist, 12 February 2024

“Cook’s at it again,” reads one Antarctic station log entry from the 1970s. “Threw a lemon pie and cookies all over the galley… then went to his room for a couple of days and wouldn’t come out… no clear reason… probably antarcticitis catching up…”

And now it’s not just the behavioural challenges of small spaces that give designers pause, as they contemplate our ever-more constrained future. There’s our health to consider. Damp, mould and other problems endemic to small spaces are not so easily addressed, especially in cities where throwing open the windows and letting in air filled with particulates, spores, moulds and pollen can make matters measurably worse. Nine-year-old Londoner Ella Kissi-Debrah died in February 2013; in 2020, she became the first person in the UK to have air pollution listed as a cause of death. (Meanwhile a report by the Royal Institution of Chartered Surveyors, published in 2017, reckoned the average new home in London had shrunk by 20% since 2000.)

How are we to live and thrive in tiny spaces? Curator Clare Farrow’s new exhibition at ROCA London Gallery brings together ideas and designs from around the world. She’s arranged an interview with Hong Kong-based Gary Chang, whose 32-square-metre apartment currently boasts 24 different “rooms”, assembled by manoeuvring a system of sliding walls, and commissioned a film in which William Bracewell, a principal with London’s Royal Ballet, performs (somehow) in the tiny dressing room-cum-costume store he shares with two other dancers.

She’s also, for at least a couple of days (dates to be announced), got Richard Beckett, an architect based at the Bartlett School of Architecture, into a booth at the centre of the exhibition, to bring attention to the health challenges of “studio living”.

Beckett reckons we should be using microbes to make our buildings healthier. As he explains in a forthcoming paper, “As the built environment is now the predominant habitat of the human, the microbes that are present in buildings are of fundamental importance.” Alas, contemporary buildings are microbial wastelands: dry, nutrient poor and sterile.

In 2020 Beckett won an award from the Royal Institute of British Architects for embedding “beneficial bacteria” into ceramic and concrete surfaces. At ROCA he’ll be sitting in a booth dosed with this material, while Matthew Reeves, an immunologist at University College London, uses regular blood samples to measure whether tile-borne probiotic species can survive long enough, and spread easily enough, to become part of Beckett’s personal microbiome.

“The official study will have to take place in a more controlled way after the exhibition’s finished,” Beckett admits, “but at least my spell in the booth is a bit of theatre to demonstrate what we’re up to.”

Explaining the work is vital, since it runs so counter to prevailing nostrums concerning hygiene and cleanliness. “One immediate application of our work is in hospitals and care homes,” Beckett says, “where super-sterile environments have ended up providing ideal breeding conditions for antibiotic-resistant bacteria. Of course the first question we’ll be asked is, ‘How do you clean them?’”

Beckett’s booth is tiled with what look like worm casts: these are 3D-printed ceramic tiles, lightly baked and designed to shed bacteria into the air with every passing motion. Their peculiar surface texture is tantalising on purpose: touching them helps spread the healthy biota, filling sterile interiors (this is the plan) with sustainable microbial ecosystems.

“There’s still much that we don’t know about how microbes interact with each other and with our environment,” says Beckett, who is realistic about the time it will take for us to abandon the twentieth century’s wipe-clean aesthetic, and embrace the stain. “This work will prove its worth in small interiors first.”

Making time for mistakes

Reading In the Long Run: The future as a political idea by Jonathan White for the Financial Times, 2 February 2024

If you believe there really is no time for political mistakes on some crucial issue — climate change, say, or the threat of nuclear annihilation — then why should you accept a leader you did not vote for, or endorse an election result you disagree with? Jonathan White, a political sociologist at the London School of Economics, has written a short book about a coming crisis that democratic politics, he argues, cannot possibly accommodate: the world’s most technologically advanced democracies are losing their faith in the future.

This is not a new thought. In her 2007 book The Shock Doctrine, Naomi Klein predicted how governments geared to crisis management would turn ever more dictatorial as their citizens grew ever more distracted and malleable. In the Long Run, White is less alarmist but more pessimistic, showing how liberal democracy blossoms, matures, and ultimately shrivels through the way it imagines its own future. Can it survive in a world where high-school students are saying things like “I don’t understand why I should be in school if the world is burning”?

A broken constitution, an electorate that’s ignorant or misguided, institutions that are moribund and full of the same old faces, year after year — these are not nearly the serious problems for democracy they appear to be, says White: none of them undermines the ideal, so long as we believe that there’s a process of self-correction going on.

Democracy is predicated on an idea of improvability. It is, says White, “a future-oriented form, always necessarily unfinished”. The health of a democracy lies not in what it thinks of itself now, but in what hopes it has for its future. A few pages on France’s Third Republic — a democratic experiment that, from the latter part of the 19th century to the first decades of the 20th, lurched through countless crises and 103 separate cabinets to become the parliamentary triumph of its age — would have made a wonderful digression here, but this is not White’s method. In the Long Run relies more on pithy argument than on historical colour, offering us an exhilarating if sometimes dizzyingly abstract historical fly-through of the democratic experiment.

Democracy arose as an idea in the Enlightenment, via the evolution of literary Utopias. White pays special attention to Louis-Sébastien Mercier’s 1771 novel The Year 2440: A Dream If Ever There Was One, for dreaming up institutions that are not just someone’s good idea, but actual extensions of the people’s will.

Operating increasingly industrialised democracies over the course of the 19th century created levels of technocratic management that inevitably got in the way of the popular will. When that process came to a crisis in the early years of the 20th century, much of Europe faced a choice between command-and-control totalitarianism and berserk fascist populism.

And then fascism, in its determination to remain responsive and intuitive to the people’s will, evolved into Nazism, “an ideology that was always seeking to shrug itself off,” White remarks; “an -ism that could affirm nothing stable, even about itself”. Its disastrous legacy spurred post-war efforts to constrain the future once more, “subordinating politics to economics in the name of stability.” With this insightful flourish, the reader is sent reeling into the maw of the Cold War decades, which turned politics into a science and turned our tomorrows into classifiable resources and tools of competitive advantage.

White writes well about 20th-century ideologies and their endlessly postponed utopias. The blandishments of Stalin and Mao and other socialist dictators hardly need glossing. Mind you, capitalism itself is just as anchored in the notion of jam tomorrow: what else but a faith in the infinitely improvable future could have us replacing our perfectly serviceable smartphones, year after year after year?

And so to the present: has runaway consumerism now brought us to the brink of annihilation, as the Greta Thunbergs of this world claim? For White’s purposes here, the truth of this claim matters less than its effect. Given climate change, spiralling inequality, and the spectres of AI-driven obsolescence, worsening pandemics and even nuclear annihilation, who really believes tomorrow will look anything like today?

How might democracy survive its own obsession with catastrophe? It is essential, White says, “not to lose sight of the more distant horizons on which progressive interventions depend.” But this is less a serious solution, more an act of denial. White may not want to grasp the nettle, but his readers surely will: by his logic (and it seems ungainsayable), the longer the present moment lasts, the worse it’ll be for democracy. He may not have meant this, but White has written a very frightening book.

A snapshot of how a city survives

Watching Occupied City by Steve McQueen for New Scientist, 31 January 2024

Artist and director Steve McQueen’s new documentary unfolds at a leisurely pace. Viewers will be glad of the 15-minute intermission baked into the footage, some two hours into the film’s over-four-hour runtime. If you need to make a fast getaway, now’s your chance — but I’ll bet the farm that you’ll return to your seat.

McQueen, a Londoner, now lives in Amsterdam with his wife Bianca Stigter, and Occupied City is based on Atlas of an Occupied City, Amsterdam 1940-1945, Stigter’s monumental account of the city’s wartime Nazi occupation.

Narrator Melanie Hyams recites the book’s gazetteer of the occupation, address by address, while McQueen films each place as it appears today. Here is the street market where they used to hand out Star of David patches to the city’s Jews. (60,000 of the city’s 80,000 Jews were deported during the second world war, and almost all of those taken were subsequently murdered.) Outside this now busy cafe, someone once found a potato in the gutter, and burned a book to cook it. At this site, in the “Hunger Winter” of 1944-1945, the diving boards at a since demolished swimming pool were chopped up for firewood. Here, a family was saved. There, a resistance worker was betrayed.

Though many of the buildings still stand, the word “demolished” recurs again and again, and it’s rare that McQueen’s street photography does not capture some new bit of demolition or construction. Amsterdam does not stay still. So how does a living, changing city remember itself?

There are acts of commemoration of course — among them a royal visit to a Jewish holocaust memorial, and a municipal apology for the city’s participation in the slave trade. But a city’s identity runs deeper than memorials, surely? Do drinkers at this bar remember the Jews who were beaten outside their windows? Do the occupants of that flat know about the previous owners, a Jewish couple who committed suicide sooner than live under Nazi occupation?

Stigter’s Atlas is an act of remembrance. Her husband’s film is different: a snapshot of how a city survives being managed and choreographed, corralled and contained. Some of Occupied City was shot during a five-week Covid lockdown. We see the modern city beset by plague, even as we hear of how, in the past, it was brought near to destruction by foreign occupation. McQueen draws no facile parallels here. Rather, we’re encouraged to see that restrictions are restrictions and curfews are curfews, whoever imposes them, and whatever their motives. What’s interesting is to see how people react to civil control, as it becomes (whether through necessity or not) increasingly heavy-handed.

At a big anti-fascist rally, conducted outside the city’s Concertgebouw concert hall, a speaker announces that “Democracy is more fragile than ever.”

Is it, though? Occupied City would suggest otherwise. It’s a film full of ordinary people, eating, playing guitar (badly), playing videogames, smoking, sheltering from the rain, and walking dogs in the mist. It’s a film about a citizenry that survived one lethal onslaught now handling another — not so obviously violent, perhaps, but pervasive and undoubtedly lethal.

Occupied City is not about what people believe. It’s about how they behave. And, lo and behold, people are mostly decent. Leave us alone, and we’ll go tobogganing, or skating, or cycling, or dancing. We’re civically minded by nature. The nightmares, the riots, the beatings and betrayals — these only surface when you start putting us in boxes.

A spirit of anarchism pervades this monumental movie. It’s not anti-authoritarian, exactly; it’s just not that interested in what authority thinks. Reeling as we are from the dislocations of Covid, it’s a comfort, and a challenge, to be reminded that cities are, when you come down to it, nothing more than their people.

This is not how science is done!

Reading J. Craig Venter & David Ewing Duncan’s Microlands for the Telegraph

Scientists! Are you having fun? Then stop it. Be as solemn as an owl, or else. Your career depends on it. Discoveries are all very well for the young, but dogma is what gets you tenure. Any truths you uncover must be allowed to ossify through constant poker-faced repetition. And Heaven forbid that before your death, a new idea comes along, forcing you to recalculate and re-envision your life’s work!

Above all, do not read Microlands. Do not be captivated by its adventures, foreign places and radical ideas. This is not how science is done!

Though his book edges a little too close to corporate history to be particularly memorable, it is clear that science journalist David Duncan has had an inordinate amount of fun co-writing this account of ocean-going explorations, led by biotechnologist Craig Venter between 2003 and 2018, into the microbiome of the Earth’s oceans.

While it explains with admirable clarity the science and technology involved in this global ocean sampling expedition, Microlands also serves as Duncan’s paean to Venter himself, who in 2000 disrupted the gene sequencing industry before it was even a thing by quickly and cheaply sequencing the human genome. Three years later he was sailing around the world on a mission to sequence the genome of the entire planet — a classic bit of Venter hyperbole, this, “almost embarrassingly grandiose” according to Duncan — but as Duncan says, “did he really mean it literally? Does it matter?”

It ought to matter. Duncan is too experienced a journalist to buy into the cliche of Venter the maverick scientist. According to Duncan, his subject is less a gifted visionary than a supreme and belligerent tactician, who advances his science and his career by knowing whom to offend. He’s an entrepreneur, not an academic, and if his science was off by even a little, his ideas about the microbial underpinnings of life on Earth wouldn’t have lasted (and wouldn’t have deserved to last) five minutes.

But here’s the thing: Venter’s ideas have been proved right, again and again. In the late 1990s he conceived a technology to read a long DNA sequence: first it breaks the string into readable pieces, then, by spotting overlaps, it strings the pieces back into the right order. A decade later he realised the same machinery could handle multiple DNA strands — it would simply deliver several results instead of just one. And if it could produce two or three readings, why not hundreds? Why not thousands? Why not put buckets of seawater through a sieve and sequence the microbiome of entire oceans?
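For the curious, the core of that overlap trick can be sketched in a few lines of Python. This is a toy illustration of mine, not Venter’s actual pipeline — real assemblers cope with millions of error-prone reads using far cleverer data structures, and the function names and greedy strategy here are my own — but it shows the principle: keep merging the two fragments that overlap the most, until one sequence remains.

```python
# Toy greedy overlap assembly: merge the pair of reads with the
# longest suffix/prefix overlap until a single sequence is left.

def overlap(a: str, b: str) -> int:
    """Length of the longest suffix of a that is also a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def assemble(reads: list[str]) -> str:
    reads = reads[:]  # work on a copy
    while len(reads) > 1:
        # Find the ordered pair of reads with the longest overlap...
        i, j, k = max(
            ((i, j, overlap(reads[i], reads[j]))
             for i in range(len(reads))
             for j in range(len(reads)) if i != j),
            key=lambda t: t[2],
        )
        merged = reads[i] + reads[j][k:]  # ...and splice them together.
        reads = [r for n, r in enumerate(reads) if n not in (i, j)]
        reads.append(merged)
    return reads[0]

# Shredded fragments of a known string reassemble in the right order:
print(assemble(["TTGGCGT", "GCGTGC", "ATTGGC"]))  # -> ATTGGCGTGC
```

The greedy strategy works on this clean example; the reason genome assembly occupied so many careers is that real reads contain errors and repeats, which is exactly where the simple version breaks down.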

And — this is what really annoys Venter’s critics — why not have some fun in the process? Why not gather water samples while sailing around the world on a cutting-edge sailboat, “a hundred-foot-long sliver of fiberglass and Kevlar”, and visiting some of the most beautiful and out-of-the-way places on Earth?

It is amusing and inspiring to learn how business acumen has helped Venter to a career more glamorous than those enjoyed by his peers. More important is the way in which his ocean sampling project has changed our ideas of how biology is done.

For over a century, biology has been evolving from a descriptive science into an experimental one. Steadily, the study of living things has given ground to efforts to unpick the laws of life.

But Venter’s project has uncovered so much diversity in aquatic microbial worlds that the standard taxonomy of kingdom, phylum and species breaks down in the effort to capture its richness. At the microbial scale, every tiny thing reveals itself to be a special and unique snowflake. Genes pass promiscuously from bacterium to bacterium, ferried very often by viruses, which survive longer the more energy-producing powers they can “download” into their host cells. We already know microbial evolution takes place on a scale of hours. Now it turns out the mechanisms of that evolution are so various and plastic, we can barely formalise them. “Laws of biology” may go some way to explain creatures as big as ourselves, but at the scale of bacteria and viruses, archaea and protozoa, wild innovation holds sway.

The field is simply overwhelmed by the quantity of data Venter’s project has generated. Discovering whether microbes follow fundamental ecological “laws” at a planetary scale will likely require massive, monolithic cross-environment surveys — and, no doubt, many more adventure-travel vacations posing as expeditions, bankrolled by tycoons who love to sail.

Here’s the capping irony, and Duncan does it proud: Venter, the arch-entrepreneur of cutting-edge genetic science, is turning biology back into a descriptive science. We are just going to have to go out and observe what is there — and, says Venter, “that’s probably where biology will be for the next century at least.”

Which way’s up?

Reading Our Moon: A human history by Rebecca Boyle for the Telegraph, 4 January 2024

If people on the Moon weigh only one-sixth as much as they do on Earth, why did so many Apollo astronauts fall flat on their faces the moment they got there? They all managed to get up again, so their spacesuits couldn’t have been that cumbersome. The trouble, science writer Rebecca Boyle explains in Our Moon, was that there wasn’t enough gravity to keep the astronauts orientated. Even with the horizon as a visual cue, it’s easy to lose track of which way’s up.

Boyle lays out – in a manner that reminded me of Oliver Morton and his daunting 2019 book, The Moon: A History for the Future – all the ways in which our natural satellite, once you reach it, is not a “place” at all — at least, not in the earthly sense. Its horizon is not where you think it is. Its hills could be mere hummocks or as tall as Mount Fuji: you can’t tell from looking. Strangest of all, says Boyle, “time seems to stop up there. It proceeds according to the rhythm of your heart, and maybe the beeping of your spacesuit’s life-support system, but if you could just stand there for an hour or two in silence, you would notice nothing about the passage of time.”

Between 15 and 20 per cent of us today doubt NASA astronauts ever landed there. This tiresome contrarian affectation has this, at least, to be said for it: that it lets us elude that sense of creeping post-Apollo anticlimax, so well articulated by Michael Collins – who orbited the Moon but didn’t walk on it – when he compared it to a “withered, sun-seared peach pit”. “Its invitation is monotonous,” he wrote in his 1974 memoir, “and meant for geologists only.” Boyle puts a positive spin on the geology, calling the Moon “Earth’s biographer, its first chronicler, and its most thorough accountant.” Our Moon is a pacey, anecdotal account of how the Moon has shaped our planet, our history and our understanding of both.

Necessarily, this means that Boyle spends much of her book side-eyeing her ostensible subject. Never mind the belligerent rock itself – “like Dresden in May or Hiroshima in August”, according to the columnist Milton Mayer – the Moon’s mass, its angular momentum and its path through space dominate most chapters here. Without a massive moon churning it up over 4.5 billion years, the Earth would by now be geologically senescent, and whatever nutrients its internal mechanics generated would be lying undisturbed on the seafloor.

Not that there would be much, in that case, that needed nutrition. Without the Moon to carry so much of the Earth-Moon system’s angular momentum, Boyle explains, gravitational interference from Jupiter “would push Earth around like a playground bully”, making life here, even if it arose, a temporary phenomenon. As it is, the Moon stirs the Earth’s core and mantle, and keeps its interior sizzling. It whips the oceans into a nutritious broth. It dishes up fish into little tidal pools, where they evolve (or evolved, rather: this only happened once) into lobe-finned fish, then lungfish, then amphibians, then – by and by – us.

The more self-evidently human part of Boyle’s “human history” begins in Aberdeenshire, where Warren Field’s 10,000-year-old pits – a sort of proto-Stonehenge in reverse – are a timepiece, enabling Mesolithic hunter-gatherers to adjust and reset their lunar calendars. These pits are the earliest astronomical calendar we know of, but not the most spectacular. Boyle propels us enthusiastically from the Berlin Gold Hat – an astronomical calculator-cum-priestly headpiece from the Bronze Age – to the tale of Enheduanna, the high priestess who used hymns to Moon gods to bind the city-states of 3rd-millennium BC Sumer into the world’s first empire. And we go from there, via many a fascinating byway, to the Greek philosopher Anaxagoras, whose explanation of moonlight as mere reflected sunlight ought, you would think, to have punctured the Moon’s ritual importance.

But the Moon is a trickster, and its emotional influence is not so easily expunged. A century later, Aristotle conjectured that the brain’s high water content made it susceptible to the phases of the Moon. This, for the longest while, was (and for some modern fans of astrology, still is) as good an explanation as any for the waxing and waning of our manias and melancholies.

Thrown back at last upon the Moon itself, the brute and awkward fact of it, Boyle asks: “Why did we end up with a huge moon, one-fourth of Earth’s own heft? What happened in that cataclysm that ended up in a paired system of worlds, one dry and completely dead, and one drenched in water and life?” Answering this lot practically demands a book of its own. Obviously Boyle can’t be expected to do everything, but I would have liked her to pay more attention to lunar craters, whose perfect circularity confused generations of astronomers. (For this reason alone, James L Powell’s recent book Unlocking the Moon’s Secrets makes an excellent companion to Boyle’s more generalist account.)

Boyle brings her account to a climax with the appearance of Theia, a conjectural, but increasingly well-evidenced, protoplanet, about the size of Mars, whose collision with the early Earth almost vaporised both planets and threw off the material that accreted into the Moon. Our Moon is superb: as much a feat of imagination as it is a work of globe-trotting scholarship. Given the sheer strangeness of the Moon’s creation story, it will surely inspire its readers to dig deeper.

The world’s biggest money machine

Reading Who Owns This Sentence? by David Bellos and Alexandre Montagu for the Telegraph, 3 January 2024

Is there such a thing as intellectual property? Once you’ve had an idea, and disseminated it through manuscript or sculpture, performance or song, is it still yours?

The ancients thought so. Long before copyright was ever dreamed of, honour codes policed the use and reuse of the work of poets and playwrights, and throughout the history of the arts, proven acts of plagiarism have brought down reputational damage sufficient to put careless and malign scribblers and daubers out of business.

At the same time, it has generally been acceptable to repurpose a work, for satire or even for further development. Pamela had many more adventures outside of Samuel Richardson’s novel than within it, though (significantly) it is Richardson’s original novel that people still buy.

No one in the history of the world has ever argued that artists should not be remunerated. Nor has the difference between an ingenious repurposing of material and its fraudulent copy ever been particularly hard to spot. And though there will always be edge cases, that, surely, is where the law steps in, codifying natural justice in a way useful to sincere litigants. So you would think.

Alexandre Montagu, an intellectual property lawyer, and David Bellos, a literary academic, think otherwise. Their forensic, fascinating history of copyright reveals a highly contingent history — full of ambiguity and verbal sophistry, as meanings shift and interests evolve.

The idea of copyright arose from state control of the media. This arose in response to the advent of cheap unregulated printing, which had fostered the creation and circulation of “scandalous, false and politically dangerous trash”. (That social media have dragged us back to the 17th century is a point that hardly needs rehearsing.)

In England, the Licensing of the Press Act of 1662 gave the Stationers’ Company an exclusive right to publish books. Wisely, such a draconian measure expired after a set term, and in 1710 the Statute of Anne established a rather more author-friendly arrangement. Authors would “own” their own work for 28 years — they would possess it, and they would have to answer for it. They could also assign their rights to others to see that this work was disseminated. Publishers, being publishers, assumed such rights then belonged to them in perpetuity, making what Daniel Defoe called a “miserable Havock” of authors’ rights — a state of affairs that pertains to this day.

True copyright was introduced in 1774, and the term over which an author has rights over their own work has been extended again and again; in most territories, it now covers the author’s lifetime plus seventy years. The definition of an “author” has been widened, too, to include sculptors, song-writers, furniture makers, software engineers, calico printers — and corporations.

Copyright is like the cute baby chimp you bought at the fair that grows into a fully grown chimpanzee that rips your kid’s arms off. Recent decades, the authors claim, “have turned copyright into a legal machine that restores to modern owners of content the rights and powers that eighteenth-century publishers lost, and grants them wider rights than their predecessors ever thought of asking for.”

And don’t imagine for a second that these owners are artists. Bellos and Montagu trace all the many ways contemporary creatives and their families are forced into surrendering their rights to an industry that now controls between 8 and 12 per cent of the US economy and is, the authors say, “a major engine of inequality in the twenty-first century”.

Few predicted that 18th-century copyright, there to protect the interests of widows and orphans, would evolve into an industry that in 1996 seriously tried to charge girl-scout camp organisers for singing “God Bless America” around the campfire; and that has actually managed to assert in court that acts of singular human genius are responsible for everyday items ranging from sporks to inflatable banana costumes.

Modern copyright’s ability to sequester and exploit creations of every kind for three or four generations is, the authors say, the engine driving “the biggest money machine the world has seen”, and one of the more disturbing aspects of this development is the lack of accompanying public interest and engagement.

Bellos and Montagu have extracted an enormous amount of fun out of their subject, and have sauced their sardonic and playful prose with buckets full of meticulously argued bile. What’s not to love about a work of legal scholarship that dreams up “a song-and-dance number based on a film scene in Gone with the Wind performed in the Palace of Culture in Petropavlovsk” and how it “might well infringe The Rights Of The American Trust Bank Company”?

This is not a book about “information wanting to be free” or any such claptrap. It is about a whole legal field failing in its mandate, and about how easily the current dispensation around intellectual property could come crumbling down. It is also about how commonly held ideas of propriety and justice might build something better in place of our current ideas of “I.P.”. Bellos and Montagu’s challenge to intellectual property law is by turns sobering and cheering: doing better than this will hardly be rocket science.

The man who drew on the future

Reading The Culture: The Drawings by Iain M Banks for the Times, 9 December 2023

“If I can get it to 155mph, I’ll be happy,” said Banksie (“Banksie” to all-comers; never “Iain”), and he handed me his phone. On the screen, a frictionless black lozenge hung at an odd angle against mist-shrouded hills. It was, he said, his way of burning up some of the carbon he had been conscientiously saving.

The BMW came as a surprise, given Banks’s long-standing devotion to environmental causes. But then, this was a while ago, 2013, and we were not yet convinced that clutching our pearls and screaming at each other was the best way to deal with a hotter planet. It was still possible, in those days, to agree that Banksie was our friend and deserved whatever treat he wanted to get himself. He was, after all, dying.

When Iain Banks succumbed to gallbladder cancer he was 59 years old and thirty years into a successful career in the literary mainstream. He’d also written nine science fiction novels and a book of short stories. Recently reissued in a handsome uniform edition, these are set in a technically advanced utopian society called the Culture.

The Culture is a place where the perfect is never allowed to stand in the way of the good. The Culture means well, and knows full well that this will never be enough. The Culture strives to be better, and sometimes despairs of itself. The Culture makes mistakes, and does its level best to put them right.

Yes, the Culture is a Utopia, but only “on balance”, only “when everything is taken into account”. It’s utopian enough.

Banks filled the corners of this galaxy-spanning civilisation with real (mostly humanoid) people, and he let them be giddy, inconsistent, self-absorbed, and sometimes malign. He believed that with consciousness comes at least the potential for virtue. The very best of his characters can afford to fail sometimes, because here, forgiveness is possible and wisdom is worth pursuing.

His effort went largely unrecognised by the critics. It fed neither our solemnity nor our sense of our own importance. The Culture was a mirror in which we were encouraged to point and laugh at ourselves. The Culture was comic. (The sf writer Adam Roberts calls it sane; I’m pretty certain we’re talking about the same thing.) As a consequence, the Culture is loved more than it is admired.

The first glimmerings of the Culture appeared in the 1970s in North Queensferry, among a teenager’s doodlings: maps of alien archipelagos, sketches of spaceships and guns and castles and tanks. Lovingly reproduced in The Culture: The Drawings, out this month, Banks’s exquisitely drawn juvenilia chart the course of the Culture’s birth. Bit by bit, pencilled calculations start to crowd out the drawings. The alphabets of the Culture’s synthetic language “Marain” grow more and more stylised, before being pushed to the margins by strange doughnut figures describing the cosmology of a speculative universe. Components emerge that we recognise from the books themselves. Spaceships — a mile, ten miles, a hundred miles long — predominate.

The book is a bit of a revelation; while he was alive Banks kept this material to himself. He was far too good a writer ever to imagine that readers needed any of it. Thumping literalism was never his style. These were the visual props from which he constructed his literary tricks.

The Culture is a loose civilisation formed from half-a-dozen humanoid species and whatever machine intelligences they bring along — or by whom they are brought. Artificial “Minds” are very often seen to outperform and outclass their creators. Spaceships and space habitats here tend to nurture their living freight rather as I look after my cats — very well indeed, albeit with a certain condescension.

Spacetime is no barrier to the Culture’s gadding about, so its material resources are functionally infinite. Nostalgic value is therefore the only material value anyone bothers about. No-one and nothing lasts forever. Everyone in this world is mortal. The Culture is canny enough to realise that in this world of hard knocks, opportunities for curiosity and play are so rare as to be worth defending at all costs, while beliefs (and religious beliefs in particular) are mere defences against terror. With terror comes exploitation. In Surface Detail (2010) the Culture must somehow take to task a society that’s using a personality-backup technology to consign its ne’er-do-wells to virtual hells.

The great thing about the Culture — the brainchild of a lifelong and cheerful atheist — is that nothing and nobody is exploited.

Banks very roughly mapped the Culture’s story over 9,000 years — more than enough time for humans on their unremarkable blue marble to merit at least a footnote. (The Culture’s first visit to Earth in the 1970s causes mayhem in the 1989 short story “The State of the Art”.) Groups join the Culture and secede from it, argue with it, influence and cajole it, and (rarely but terribly) go to war with it. Countless species have left the Culture over the years, retreating to contemplate who-knows-what, or chiselling their way out of the normal universe altogether. Now and again a passing reference is made to some vast, never-before-suspected epoch of benign indifference or malign neglect.

Consider Phlebas (1987) set the series’ tone from the first, with a story of how a devout religious society comes up against the Culture, goes to war with it, and promptly implodes. The Culture is well-intentioned enough towards its Idiran foes, as it is towards everyone else — but who said good intentions were enough to avert tragedy?

The last Culture book, The Hydrogen Sonata (2012), asks big questions about belief and meaning, many of them channelled through a subplot in which one person’s efforts to play a virtually impossible piece of music on a virtually impossible musical instrument play out against the ground of a society for whom her task is trivial and the music frankly bad.

My personal favourite is Excession. By 1996, you see, a significant number of us were begging Banks to kill the Culture. Its decency and its sanity were beginning to stick in our craw. We knew, in our heart of hearts, that the Culture was setting us a moral challenge of sorts, and this put us out of temper. Why don’t you break it? we said. Why don’t you humiliate it? Why don’t you reveal its rotten heart? Banks indulged us this far: he confronted the Culture with a void in space older than the universe itself. It was a phenomenon even the Culture couldn’t handle.

Such sideways approaches to depicting the perfect society are, of course, only sensible. In fiction, utopian happiness and personal fulfilment make fine goals, but rotten subject matter.

But Banks’s decision to stick to edge cases and intractable problems wasn’t just pragmatic. He knew the Culture was smug and safe, and he spent entire novels working out what might be done about this. He was committed to dreaming up a polis that could avoid the catastrophe of its own success, and what he came up with was a spacefaring society, free of resource constraints, devoted to hedonistic play at the centre, and fringed with all manner of well-meaning busy-work directed at cadet civilisations (like our own on Earth) deemed not yet mature enough to join the party.

“I think of the Culture as some incredibly rich lady of leisure who does good, charitable works,” Banks wrote in 1993; “she spends a lot of time shopping and getting her hair done, but she goes out and visits the poor people and takes them baskets of vegetables.”

It’s an odd-sounding Utopia, perhaps — but, when all’s said and done, not such a bad life.

An imaginary connection to the post office

Reading Troubled by Faith by Owen Davies, 22 November 2023

Readers of this magazine may recall how, in early 2020, 5G mobile technology got caught up in a conspiracy theory that saw cellphone towers being set on fire across Europe. But how many of us knew just how old this delusional belief really is?

A medical note from 1889 reports on the plight of one Henry Staples, 59, who “fancies telegraph wires are over his head” and “that messages are being sent to people as to his character”. A year later, 56-year-old Janet Sneddon from Glasgow presented with an imaginary wire “connecting her to the post office”.

Owen Davies is an historian of magic. Troubled by Faith is his account of how early clinicians met with, understood, and dealt with irrational belief.

In a book that never lets the big ideas get in the way of the always entertaining fine detail, he builds a cast-iron defence of the movement that saw asylums springing up across western Europe in the 1830s. There were certainly abuses — there still are — but asylums were also places of compassion and sensitivity. Nor, by the way, were you ever just dumped there. Half of all “incarcerated” patients left after less than a year, and most within two years.

Nineteenth-century asylum records have serious limitations — the patients’ own words are rarely recorded directly — but Davies’s examination reveals what he calls “an extraordinary cultural space where under one roof prophets, messiahs, the bewitched, and the haunted, wrestled with angels, devils, imps, and witches.”

Troubled by Faith is a complex, sometimes tragic and ultimately uplifting story of how intelligence, sympathy and good-will triumphed over clinical ignorance. Growing up imbued with the values of the Enlightenment, doctors like the pioneering French neurologist Jean-Martin Charcot and his pupil Sigmund Freud believed that magic was “a diseased survival of a benighted mediaeval past”, and that “madness” was therefore largely determined by history. Ignorance had spread superstition; superstition had fertilised irrational beliefs; and irrational beliefs were driving people into manias and insanias of one sort or another. If you could educate people into thinking rationally, mental wellbeing was bound to follow.

But irrational beliefs were not irreconcilable with modernity after all, Davies explains. Magical thinking, which ruins lives, is just a species of rule-of-thumb thinking, without which day-to-day life is impossible.

These have been humbling lessons for a discipline that, when it started, imagined the problems and mysteries it confronted could be cleared up in a generation.

Troubled by Faith is hardly the first book to be written about the very early years of psychology. But while most of these tend to plash about in the shallows of developing theory, Davies does everyone a tremendous favour by rolling up his sleeves and diving into the lived experiences of distressed and delusional people.

Davies explains how insanity diagnoses spread from behaviour to belief — in witches or fairies or ghosts, in divine or infernal visitations, or in new technologies operating in supernatural ways — and he explains how, by listening to and caring for patients, this diagnostic overreach was eventually corrected.

Many a contemporary observer found such a muddle infuriating, of course. In a long section about the legal culture around insanity, we meet the judge Baron George Bramwell who, we are told, “became something of a bogey figure in the psychiatric community as he routinely critiqued and dismissed medical expert witnesses and scoffed at the notion of moral insanity.”

With hindsight, and thanks to works as insightful as this one, we can afford to be more admiring of the effort to understand a mind that enjoys reason, but does not need reason to be right.

Cutequake

Reading Irresistible by Joshua Paul Dale for New Scientist, 15 November 2023

The manhole covers outside Joshua Dale’s front door sport colourful portraits of manga characters. Hello Kitty, “now one of the most powerful licensed characters in the world”, appears on road-construction barriers at the end of his road, alongside various cute cartoon frogs, monkeys, ducks, rabbits and dolphins. Dale lives in Tokyo, epicentre of a “cutequake” that has conquered mass media (the Pokémon craze, begun in 1996, has become arguably the highest-grossing media franchise of all time) and now encroaches, at pace, upon the wider civic realm. The evidence? Well, for a start, there are those four-foot-high cutified police-officer mannequins standing outside his local police station…

Do our ideas of and responses to cute have a behavioural or other biological basis? How culturally determined are our definitions of what is and is not cute? Why is the depiction of cute on the rise globally, and why, of all places, did cute originate (as Dale ably demonstrates) in Japan?

Dale makes no bones about his ambition: he wants to found a brand-new discipline: a field of “cute studies”. His efforts are charmingly recorded in this first-person account that tells us a lot (and plenty that is positive) about the workings of modern academia. Dale’s interdisciplinary field will combine studies of domestication and neoteny (the retention of juvenile features in adult animals), embryology, the history of art, the anthropology of advertising and any number of other disparate fields in an effort to explain why we cannot help grinning foolishly at hyper-simplified line drawings of kittens.

Cute appearances are merely heralds of cute behaviour, and it’s this behaviour — friendly, clumsy, open, plastic, inventive, and mischievous — that repays study the most. A species that plays together, adapts together. Play bestows a huge evolutionary advantage on animals that can afford never to grow up.

But there’s the sting: for as long as life is hard and dangerous, animals can’t afford to remain children. Adult bonobos are playful and friendly, but then, bonobos have no natural predators. Their evolutionary cousins the chimpanzees have much tougher lives. You might get a decent game of checkers out of a juvenile chimp, but with the adults it’s an altogether different story.

The first list of cute things (in The Pillow Book), and the first artistic depictions of gambolling puppies and kittens (in the “Scroll of Frolicking Animals”) come from Japan’s Heian period, running from 794 to 1185 – a four-century-long period of peace. So what’s true at an evolutionary scale seems to have a strong analogue in human history, too. In times of peace, cute encourages affiliation.

If I asked you to give me an example of something cute, you’d most likely mention a cub or kitten or other baby animal, but Dale shows that infant care is only the most emotive and powerful social engagement that cute can release. Cute is a social glue of much wider utility. “Cuteness offers another way of relating to the entities around us,” Dale writes; “its power is egalitarian, based on emotion rather than logic and on being friendly rather than authoritarian.”

Is this welcome? I’m not sure. There’s a clear implication here that cute can be readily weaponised — a big-eyed soft-play Trojan Horse, there to emotionally nudge us into heaven knows what groupthunk folly.

Nor, upon finishing the book, did I feel entirely comfortable with an aesthetic that, rather than getting us to take young people seriously, would rather reject the whole notion of maturity.

Dale, a cheerful and able raconteur, has written a cracking story here, straddling history, art, and some complex developmental science, and though he doesn’t say so, he’s more than adequately established that this is, after all, the way the world ends: not with a bang but a “D’awww!”