You’re being chased. You’re being attacked. You’re falling. You’re drowning…

To mark the centenary of Surrealism, an article for the Telegraph

A hundred years ago, a 28-year-old French poet, art collector and contrarian called André Breton published a manifesto that called time on reason.

Eight years before, in 1916, Breton was a medical trainee stationed at a neuro-psychiatric army clinic in Saint-Dizier. He cared for soldiers who were shell-shocked, psychotic, hysterical and worse, and fell in love with the mind, and the lengths to which it would go to survive the impossible present.

Breton’s Manifesto of Surrealism was, then, an inquiry into how, “under the pretense of civilization and progress, we have managed to banish from the mind everything that may rightly or wrongly be termed superstition, or fancy.”

For Breton, surrealism’s sincerest experiments involved a sort of “psychic automatism” – using the processes of dreaming to express “the actual functioning of thought… in the absence of any control exercised by reason, exempt from any aesthetic or moral concern.” He asked: “Can’t the dream also be used in solving the fundamental questions of life?”

Many strange pictures appeared over the following century, as Breton’s fellow surrealists answered his challenge, and plumbed the depths of the unconscious mind. Their efforts – part of a long history of humans’ attempts to document and decode the dream world – can be seen in a raft of new exhibitions marking surrealism’s centenary, from the hybrid beasts of Leonora Carrington (on view at the Hepworth Wakefield’s Forbidden Territories) to the astral fantasies of Remedios Varo (included in the Centre Pompidou’s blockbuster Surrealism show).
Yet, just as often, such images illustrate the gap between the dreamer’s experience and their later interpretation of it. Some of the most popular surrealist pictures – Dalí’s melting clocks, say, or Magritte’s apple-headed businessman – are not remotely dreamlike. Looking at such easy-to-read canvases is like having a dream explained, and that’s not at all the same thing.
The chief characteristic of dreams is that they don’t surprise or shock or alienate the person who’s dreaming – the dreamer, on the contrary, feels that their dream is inevitable. “The mind of the man who dreams,” Breton writes, “is fully satisfied by what happens to him. The agonizing question of possibility is no longer pertinent. Kill, fly faster, love to your heart’s content… Let yourself be carried along, events will not tolerate your interference. You are nameless. The ease of everything is priceless.”

Most physiologists and psychologists of the early 20th century would have agreed with him, right up until his last sentence. While the surrealists looked to dreams to reveal a mind beyond consciousness, scientists of the day considered them insignificant, because you can’t experiment on a dreamer, and you can’t repeat a dream.

Since then, others have joined the battle over the meaning – or lack of meaning – of our dreams. In 1977, Harvard psychiatrists John Allan Hobson and Robert McCarley proposed their “activation-synthesis” theory, in a rebuff to the psychoanalysts and their claim that dreams had meanings only accessible via (surprise, surprise) psychoanalysis. Less an explanation, more an expression of exasperation, their theory held that certain parts of our brains concoct crazy fictions out of the random neural firings of the sleeping pons (a part of the brainstem).

It is not a bad theory. It might go some way to explaining the kind of hypnagogic imagery we experience when we doze, and that so delighted the surrealists. It might even bring us closer to actually reconstructing our dreams. For instance, we can capture the brain activity of a sleeper, using functional magnetic resonance imaging, hand that data to artificial intelligence software that’s been trained on about a million images, and the system will take a stab at what the dreamer is seeing in their dream. The Japanese neuroscientist Yukiyasu Kamitani made quite a name for himself when he tried this in 2012.

Six years later, at the Serpentine Gallery in London, artist Pierre Huyghe integrated some of this material into his show UUmwelt — and what an astonishing show it was, its wall screens full of bottles becoming elephants becoming screaming pigs becoming geese, skyscrapers, mixer taps, dogs, moles, bat’s wings…

But modelling an idea doesn’t make it true. Activation-synthesis theory has inspired some fantastic art, but it fails to explain one of the most important physiological characteristics of dreaming – the fact that dreams paralyse the dreamer.

***

Brains have an alarming tendency to treat dreams as absolutely real and to respond appropriately — to jump and punch when the dream says jump! and punch! Dreams, for the dreamer, can be very dangerous indeed.

The simplest evolutionary way to mitigate the risk of injury would have been to stop the dreamer from dreaming. Instead, we evolved a complex mechanism to paralyse ourselves while in the throes of our night-time adventures. 520 million years of brain evolution say that dreams are important and need protecting.

This, rather than the actual content of dreams, has driven research into the sleeping brain. We know now that dreaming involves many more brain areas than the brainstem alone, including the parietal lobes (involved in the representation of space) and the frontal lobes (responsible for decision-making, problem-solving, self-control, attention, speech production, language comprehension – oh, and working memory). Mice dream. Dogs dream. Platypuses, beluga whales and ostriches dream; so do penguins, chameleons, iguanas and cuttlefish.

We’re not sure about turtles. Octopuses? Marine biologist David Scheel caught his snoozing pet octopus Heidi on camera, and through careful interpretation of her dramatic colour-shifts he came to the ingenious conclusion that she was enjoying an imaginary crab supper. The clip, from PBS’s 2019 documentary Octopus: Making Contact, is on YouTube.

Heidi’s brain structure is nothing like our own. Still, we’re both dreamers. Studies of wildly different sleeping brains throw up startling convergences. Dreaming is just something that brains of all sorts have to do.

We’ve recently learned why.

The first clues emerged from sleep deprivation studies conducted in the late 1960s. Allan Rechtschaffen and William Dement both showed that sleep deprivation leads to memory deficits in rodents. A generation later, researchers including the Brazilian neuroscientist Sidarta Ribeiro spent the 1990s unpicking the genetic basis of memory function. Ribeiro himself found the first molecular evidence of Freud’s “day residue” hypothesis, which has it that the content of our dreams is often influenced by the events, thoughts, and feelings we experience during the day.

Ribeiro had his own fairly shocking first-hand experience of the utility of dreaming. In February 1995 he arrived in New York to start a doctorate at Rockefeller University. Shortly after arriving, he woke up unable to speak English. He fell in and out of a narcoleptic trance, and then, in April, woke refreshed and energised and able to speak English better than ever before. His work can’t absolutely confirm that his dreams saved him, but he and other researchers have most certainly established the link between dreams and memory. To cut a long story very short indeed: dreams are what memories get up to when there’s no waking self to arrange them.

Well, conscious thought alone is not fast enough or reliable enough to keep us safe in the blooming, buzzing confusion of the world. We also need fast, intuitive responses to critical situations, and we rehearse these responses, continually, when we dream. Collect dream narratives from around the world, and you will quickly discover (as literary scholar Jonathan Gottschall points out in his 2012 book The Storytelling Animal) that the commonest dreams have everything to do with life and death and have very little time for anything else. You’re being chased. You’re being attacked. You’re falling. You’re drowning. You’re lost, trapped, naked, hurt…

When lives were socially simple and threats immediate, the relevance of dreams was not just apparent; it was compelling. And let’s face it: a stopped clock is right twice a day. For someone living in a relatively simple social structure, with only a limited palette of dream materials to draw from, was it really so surprising that (according to the historian Suetonius) Rome’s first emperor Augustus found his rise to power predicted by dreams?

Even now, Malaysia’s indigenous Orang Asli people believe that by sharing their dreams, they are passing on healing communications from their ancestors. Recently the British artist Adam Chodzko used their practice as the foundation for a web-based project called Dreamshare Seer, which uses generative AI to visualise and animate people’s descriptions of their dreams. (Predictably, his AI outputs are rather Dalí-like.)

But humanity’s mission to interpret dreams has been eroded by a revolution in our style of living. Our great-grandparents could remember a world without artificial light. Now we play on our phones until bedtime, then get up early, already focused on a day that is, when push comes to shove, more or less identical to yesterday. We neither plan our days before we sleep, nor do we interrogate our dreams when we wake. Is it any wonder, then, that our dreams are no longer able to inspire us?

Growing social complexity enriches our dream lives, but it also fragments them. Last night I dreamt of selecting desserts from a wedding buffet; later I cuddled a white chicken while negotiating for a plumbing contract. Dreams evolved to help us negotiate the big stuff. Now that we have conquered the big stuff (humans have been apex predators for around 2 million years), it is possible that we have evolved past the point where dreaming is useful, but not past the point where dreaming is dangerous.

Here’s a film you won’t have seen. Petrov’s Flu, directed by Kirill Serebrennikov, was due for limited UK release in 2022, even as Vladimir Putin’s forces were bumbling towards Kiev.

The film opens on our hero Petrov (Semyon Serzin), riding a trolleybus home across a snowbound Yekaterinburg. He overhears a fellow passenger muttering to a neighbour that the rich in this town all deserve to be shot.

Seconds later the bus stops, Petrov is pulled off the bus and a rifle is pressed into his hands. Street executions follow, shocking him out of his febrile doze…

And Petrov’s back on the bus again.

Whatever the director’s intentions were here, I reckon this is a document for our times. You see, André Breton wrote his manifesto in the wreckage of a world that had turned its machine tools into weapons, the better to slaughter itself — and did all this under the flag of the Enlightenment and reason.

Today we’re manufacturing new kinds of machine tools, to serve a world that’s much more psychologically adept. Our digital devices, for example, exploit our capacity for focused attention (all too well, in many cases).

So what of those devices that exist to make our sleeping lives better, sounder, and more enjoyable?

SleepScore Labs is using electroencephalography data to analyse the content of dreams. BrainCo has a headband interface that influences dreams through auditory and visual cues. Researchers at MIT have used a sleep-tracking glove called Dormio to much the same end. iWinks’s headband increases the likelihood of lucid dreaming.

It’s hard to imagine light installations, ambient music and scented pillows ever being turned against us. Then again, we remember the world the Surrealists grew up in, laid waste by a war that had turned its ploughshares into swords. Is it so very outlandish to suggest that tomorrow, we will be weaponising our dreams?

More believable than the triumph

Visiting In Event of Moon Disaster at the Sainsbury Centre, University of East Anglia, for the Telegraph, 16 February 2024

20:05 GMT on 20 July 1969: astronauts Neil Armstrong and Buzz Aldrin are aboard Apollo 11’s Lunar Module, dropping steadily towards the lunar surface in humankind’s first attempt to visit another world.

“Drifting to the right a little,” Buzz remarks — and then an alarm goes off, and then another, and another, until at last the transmission breaks down.

The next thing we see is a desk set in front of a blue curtain, and flanked by flags: the Stars and Stripes, and the Presidential seal. Richard Nixon, the US President, takes his seat and catches the eye of figures hovering off-screen: is everything ready?

And so he begins; it’s a speech no one can or will forget. It was written by his speechwriter, William Safire, as a contingency, in the event that Buzz and Neil landed on the Moon in a way that left them alive but doomed, stranded without hope of rescue in the Sea of Tranquility.

“These brave men… know that there is no hope for their recovery.” Nixon swallows hard. “But they also know that there is hope for Mankind in their sacrifice.”

From 17 February, Richard Nixon’s speech will play to visitors to the Sainsbury Centre in Norwich. They will watch it from the comfort of a 1960s-era sofa, in a living room decked out in such a way as to transport them back to that day, in July 1969, when two heroes found themselves doomed and alone and sure to die on the Moon.

Confronted with Nixon struggling to control his emotions on a period TV, they may well ask themselves if what they are seeing is real. The props are real, and so is the speech, marking and mourning the death of two American heroes. Richard Nixon is real, or as real as anyone can be on TV. His voice and gestures are his own (albeit — and we’ll come to this in a moment — strung together by generative computer algorithms).

Will anyone be fooled?

Not me. I can remember Apollo 11’s successful landing, and the crew’s triumphant return to Earth less than a week later, on 24 July. But, hang on — what, exactly, do I remember? I was two. If my parents had told me, over and over, that they had sat me down in front of TV coverage of the Kennedy assassination, I would probably have come to believe that, too. Memory is unreliable, and people are suggestible.

Jago Cooper includes the installation In Event of Moon Disaster in the Sainsbury Centre’s exhibition “What Is Truth”. Cooper, who directs the centre, wasn’t even born when Apollo 11 rose from the launchpad. Neither were the two filmmakers, Halsey Burgund and Francesca Panetta, who won a 2021 Emmy for In Event Of Moon Disaster in the category of Interactive Media Documentary. The bottom line here seems to be: the past exists only because we trust what others say about it.

Other exhibits in the “What is Truth?” season will come at the same territory from different angles. There are artworks about time and artworks about identity. In May, an exhibition entitled The Camera Never Lies uses war photography from a private collection, The Incite Project, to reveal how a few handfuls of images have shaped our narratives of conflict. This is the other thing to remember, as we contemplate a world awash with deepfakes and avatars: the truth has always been up for grabs.

Sound artist Halsey Burgund and artist-technologist Francesca Panetta recruited experts in Israel and Ukraine to help realise In Event Of Moon Disaster. Actor Louis Wheeler spent days in a studio, enacting Nixon’s speech; the President’s face, posture and mannerisms were assembled from archive footage of a speech about Vietnam.

President Nixon’s counterfactual TV eulogy was produced by the MIT Center for Advanced Virtuality to highlight the malleability of digital images. It’s been doing the rounds of art galleries and tech websites since 2019, and times have moved on to some degree. Utter the word “deepfake” today and you’re less likely to conjure up images of a devastated Richard Nixon than gossip about those pornographic deepfake images of Taylor Swift, viewed 27 million times in 19 hours when they were circulated this January on Twitter.

No-one imagines for a second that Swift had anything to do with them, of course, so let’s be positive here: MIT’s message about not believing everything you see is getting through.

As a film about deepfakes, In Event of Moon Disaster is strangely reassuring. It’s a work of genuine creative brilliance. It’s playful: we feel warmer towards Richard Nixon in this difficult fictional moment than we probably ever felt about him in life. It’s educational: the speech, though it never had to be delivered (thank God), is real enough, an historical document that reveals how much was at stake on that day. And in a twisted way, the film is immensely respectful, singing the praises of extraordinary men in terms only tragedy can adequately articulate.

As a film about the Moon, though, In Event of Moon Disaster is a very different kettle of fish and frankly disturbing. You can’t help but feel, having watched it, that Burgund and Panetta’s synthetic moon disaster is more believable than Apollo’s actual, historical triumph.

The novelist Norman Mailer observed early on that “in another couple of years there will be people arguing in bars about whether anyone even went to the Moon.” And so it came to pass: claims that the moon landings were fake began the moment the Apollo missions ended in 1972.

The show’s curator Jago Cooper has a theory about this: “The Moon is such a weird bloody thing,” he says. “The idea that we merely pretended to walk about there is more believable than what actually happened. That’s the thing about our relationship with what we’re told: it has to be believable within our lived experience, or we start driving wedges into it that undermine its credibility.”

This raises a nasty possibility: that the more enormous our adventures, the less likely we are to believe them; and the crazier our world, the less attention we’ll pay to it. “Humankind cannot bear very much reality” said TS Eliot, and maybe we’re beginning to understand why.

For a start, we cannot bear too much information. The more we’re told about the world, the more we search for things that are familiar. In an essay accompanying the exhibition, curator Paul Luckraft finds us in thrall to confirmation bias “because we can’t see what’s new in the dizzying amount of text, image, video and audio fragments available to us.”

The deluge of information brought about by digital culture is already being weaponised — witness Trump’s former chief strategist Steve Bannon, who observed in 2018: “The real opposition is the media. And the way to deal with them is to flood the zone with shit.”
Even more disturbing: the world of shifting appearances ushered in by Bannon, Trump, Putin et al. might be the saving of us. In a recent book about the future of nuclear warfare, Deterrence under Uncertainty, RAND policy researcher Edward Geist conjures up a likely media-saturated future in which we all know full well that appearances are deceptive, but no-one has the faintest idea what is actually going on. Belligerents in such a world would never have to fire a shot in anger, says Geist, merely persuade the enemy that their adversary’s values are better than their own.

“Tricky Dick” Nixon would flourish in such a hyper-paranoid world, but then, so might we all. Imagine that perpetual peace is ours for the taking — so long as we abandon the faith in facts that put men on the Moon!

Fifty years ago you’d have struggled to find anyone casting doubt on NASA’s achievement, that day in July 1969. Fifty years later, a YouGov poll found that sixteen per cent of the British public believed the moon landing most likely never happened.

Deepfakes themselves aren’t the cause of such incredulity, but they have the potential to exacerbate it immeasurably — and this, says Halsey Burgund, is why he and Francesca Panetta were inspired to make In Event of Moon Disaster. “The hope of the project is to provide some simple awareness of this kind of technology, its ubiquity and out-there-ness,” he explains. “If we’ve made an aesthetically satisfying and emotional piece, so much the better — it’ll help people internalise the challenges facing us right now.” Though bullish in defence of the technology’s artistic possibilities, Burgund concedes that the harms it can wreak are real, and can be distributed at scale. (Ask Taylor Swift.) “It’s not as though intelligent people aren’t addressing these problems,” Burgund says. “But it takes a lot of time — and society can’t change that quickly.”

The man who drew on the future

Reading The Culture: The Drawings by Iain M Banks for the Times, 9 December 2023

“If I can get it to 155mph, I’ll be happy,” said Banksie (“Banksie” to all-comers; never “Iain”), and he handed me his phone. On the screen, a frictionless black lozenge hung at an odd angle against mist-shrouded hills. It was, he said, his way of burning up some of the carbon he had been conscientiously saving.

The BMW came as a surprise, given Banks’s long-standing devotion to environmental causes. But then, this was a while ago, 2013, and we were not yet convinced that clutching our pearls and screaming at each other was the best way to deal with a hotter planet. It was still possible, in those days, to agree that Banksie was our friend and deserved whatever treat he wanted to get himself. He was, after all, dying.

When Iain Banks succumbed to gallbladder cancer he was 59 years old and thirty years into a successful career in the literary mainstream. He’d also written nine science fiction novels and a book of short stories. Recently reissued in a handsome uniform edition, these are set in a technically advanced utopian society called the Culture.

The Culture is a place where the perfect is never allowed to stand in the way of the good. The Culture means well, and knows full well that this will never be enough. The Culture strives to be better, and sometimes despairs of itself. The Culture makes mistakes, and does its level best to put them right.

Yes, the Culture is a Utopia, but only “on balance”, only “when everything is taken into account”. It’s utopian enough.

Banks filled the corners of this galaxy-spanning civilisation with real (mostly humanoid) people, and he let them be giddy, inconsistent, self-absorbed, and sometimes malign. He believed that with consciousness comes at least the potential for virtue. The very best of his characters can afford to fail sometimes, because here, forgiveness is possible and wisdom is worth pursuing.

His effort went largely unrecognised by the critics. It fed neither our solemnity nor our sense of our own importance. The Culture was a mirror in which we were encouraged to point and laugh at ourselves. The Culture was comic. (The sf writer Adam Roberts calls it sane; I’m pretty certain we’re talking about the same thing.) As a consequence, the Culture is loved more than it is admired.

The first glimmerings of the Culture appeared in the 1970s in North Queensferry, among a teenager’s doodlings: maps of alien archipelagos, sketches of spaceships and guns and castles and tanks. Lovingly reproduced in The Culture: The Drawings, out this month, Banks’s exquisitely drawn juvenilia chart the course of the Culture’s birth. Bit by bit, pencilled calculations start to crowd out the drawings. The alphabets of the Culture’s synthetic language “Marain” grow more and more stylised, before being pushed to the margins by strange doughnut figures describing the cosmology of a speculative universe. Components emerge that we recognise from the books themselves. Spaceships — a mile, ten miles, a hundred miles long — predominate.

The book is a bit of a revelation; while he was alive Banks kept this material to himself. He was far too good a writer ever to imagine that readers needed any of it. Thumping literalism was never his style. These were the visual props from which he constructed his literary tricks.

The Culture is a loose civilisation formed from half-a-dozen humanoid species and whatever machine intelligences they bring along — or by whom they are brought. Artificial “Minds” are very often seen to outperform and outclass their creators. Spaceships and space habitats here tend to nurture their living freight rather as I look after my cats — very well indeed, albeit with a certain condescension.

Spacetime is no barrier to the Culture’s gadding about, so its material resources are functionally infinite. Nostalgic value is therefore the only material value anyone bothers about. No-one and nothing lasts forever. Everyone in this world is mortal. The Culture is canny enough to realise that in this world of hard knocks, opportunities for curiosity and play are so rare as to be worth defending at all costs, while beliefs (and religious beliefs in particular) are mere defences against terror. With terror comes exploitation. In Surface Detail (2010) the Culture must somehow take to task a society that’s using a personality-backup technology to consign its ne’er-do-wells to virtual hells.

The great thing about the Culture — the brainchild of a lifelong and cheerful atheist — is that nothing and nobody is exploited.

Banks very roughly mapped the Culture’s story over 9,000 years — more than enough time for humans on their unremarkable blue marble to merit at least a footnote. (The Culture’s first visit to Earth in the 1970s causes mayhem in the 1989 short story “The State of the Art”.) Groups join the Culture and secede from it, argue with it, influence and cajole it, and (rarely but terribly) go to war with it. Countless species have left the Culture over the years, retreating to contemplate who-knows-what, or chiselling their way out of the normal universe altogether. Now and again a passing reference is made to some vast, never-before-suspected epoch of benign indifference or malign neglect.

Consider Phlebas (1987) set the series’ tone from the first, with a story of how a devout religious society comes up against the Culture, goes to war with it, and promptly implodes. The Culture is well-intentioned enough towards its Idiran foes, as it is towards everyone else — but who said good intentions were enough to avert tragedy?

The last Culture book, The Hydrogen Sonata (2012), asks big questions about belief and meaning, many of them channeled through a subplot in which one person’s efforts to play a virtually impossible piece of music on a virtually impossible musical instrument play out against the ground of a society for whom her task is trivial and the music frankly bad.

My personal favourite is Excession. By 1996, you see, a significant number of us were begging Banks to kill the Culture. Its decency and its sanity were beginning to stick in our craw. We knew, in our heart of hearts, that the Culture was setting us a moral challenge of sorts, and this put us out of temper. Why don’t you break it? we said. Why don’t you humiliate it? Why don’t you reveal its rotten heart? Banks indulged us this far: he confronted the Culture with a void in space older than the universe itself. It was a phenomenon even the Culture couldn’t handle.

Such sideways approaches to depicting the perfect society are, of course, only sensible. In fiction, utopian happiness and personal fulfilment make fine goals, but rotten subject matter.

But Banks’s decision to stick to edge cases and intractable problems wasn’t just pragmatic. He knew the Culture was smug and safe, and he spent entire novels working out what might be done about this. He was committed to dreaming up a polis that could avoid the catastrophe of its own success, and what he came up with was a spacefaring society, free of resource constraints, devoted to hedonistic play at the centre, and fringed with all manner of well-meaning busy-work directed at cadet civilisations (like our own on Earth) deemed not yet mature enough to join the party.

“I think of the Culture as some incredibly rich lady of leisure who does good, charitable works,” Banks wrote in 1993; “she spends a lot of time shopping and getting her hair done, but she goes out and visits the poor people and takes them baskets of vegetables.”

It’s an odd-sounding Utopia, perhaps — but, when all’s said and done, not such a bad life.

“The white race cannot survive without dairy products”

Visiting Milk at London’s Wellcome Collection. For the Telegraph, 29 March 2023

So — have you ever drunk a mother’s milk? As an adult, I mean. Maybe you’re a body-builder, following an alternative health fad; maybe you’re a fetishist; or you happened to stumble into the “milk bar” operated now and again by performance artist Jess Dobkin, whose specially commissioned installation For What It’s Worth — an “unruly archive” of milk as product, labour and value — brings the latest exhibition at London’s Wellcome Collection to a triumphant, chaotic and decidedly bling climax.

Why is breast milk such a source of anxiety, disgust, fascination and even horror? (In Sarah Pucill’s 1995 video Backcomb, on show here, masses of dark, animated hair slither across a white tablecloth, upturning containers of milk, cream and butter.)

Curators Marianne Templeton and Honor Beddard reckon our unease has largely to do with the way we have learned to associate milk almost entirely with cow’s milk, which we now consume on an industrial scale. It’s no accident that, as you enter their show, an obligatory Instagram moment is provided by Julia Bornefeld’s enormous hanging sculpture, suggestive at once of a cow’s udders and a human breast.

Milk is also about Whiteness. In “Butter. Vital for Growth and Health”, an otherwise unexceptionable pamphlet from the National Dairy Council in Chicago (one of the hundred or so objects rubbing shoulders here with artworks and new commissions), there’s a rather peculiar foreword by Herbert Hoover, the man who was to become the 31st U.S. President. “The white race,” Hoover writes, “cannot survive without dairy products.”

Say what?

Hoover (if you didn’t know) was put in charge of the American Relief Administration after the First World War, and saw to the food supply for roughly 300 million people in 21 countries in Europe and the Middle East. Even after government funding dried up, the ARA still managed to feed 25 to 35 million people during Russia’s famine of 1921-22 — which remains the largest famine relief operation in world history.

So when Hoover, who knows a lot about famine, says dairy is essential to the white race, he’s not being malign or sectarian; he believes this to be literally true — and this exhibition goes a very long way to explaining why.

Large portions of the world’s population react to milk the way my cat does, and for the same reason — they can’t digest the lactose. This hardly makes dairy a “White” food unless, like Hoover, your terms of reference were set by eugenics; or perhaps because, like some neo-Nazis in contemporary USA, you see your race in terminal decline, and whole milk as the only honest energy drink available in your 7-11. (Hewillnotdivide.us, Luke Turner’s 2017 video of drunk, out-of-condition MAGA fascists, chugging the white stuff and ranting on about purity, is the least assuming of this show’s artistic offerings, but easily the most compelling.)

Milk also asks how dairy became both an essential superfood and arguably the biggest source of hygiene anxiety in the western diet. Through industry promotional videos, health service leaflets, meal plans and a dizzying assortment of other ephemera, Milk explains how the choice to distribute milk at scale to a largely urban population led to the growth of an extraordinary industry, necessarily obsessed with disinfection and ineluctably driven toward narrow norms and centralised distribution; an industry that once had us convinced that milk is not just good for people, but is in fact essential (and hard cheese (sorry) to the hordes who can’t digest it).

The current kerfuffle around dairy and its vegan alternatives generates far more heat than light. If one show could pour oil on these troubled waters (which I doubt), it isn’t this one. No one will walk out of this show feeling comfortable. But they will have been royally entertained.

The sirens of overstatement

Visiting David Blandy’s installation Atomic Light at John Hansard Gallery, University of Southampton, for New Scientist, 22 March 2023

The Edge of Forever, one of four short films by Brighton-based video and installation artist David Blandy, opens with an elegiac pan of Cuckmere Haven in Sussex. A less apocalyptic landscape it would be hard to imagine. Cuckmere is one of the most ravishing spots in the Home Counties. Still, the voiceover insists that we contemplate “a ravaged Earth” and “forgotten peoples” as we watch two children exploring their post-human future. The only sign of former human habitation is a deserted observatory (the former Royal Observatory at Herstmonceux Castle in Sussex). The children enter and study the leavings of dead technologies and abandoned ambitions, steeped all the while in refracted sunlight: Claire Barrett’s elegiac camerawork is superb.

The films in Blandy’s installation “Atomic Light” connect three different kinds of fire: the fire of the sun; the wildfires that break out naturally all over the earth, but which are gathering force and frequency as the Earth’s climate warms; and the atomic blast that consumed the Japanese city of Hiroshima on 6 August 1945.

There’s a personal dimension to all this, beyond Blandy’s vaunted concern for the environment: his grandfather was a prisoner of the Japanese in Singapore during the Second World War, and afterwards lived with the knowledge that, had upwards of 100,000 civilians not perished in the Hiroshima blast, he almost certainly would not have survived.

Bringing this lot together is a job of work. In Empire of the Swamp, a man wanders through the mangrove swamps at the edge of Singapore, while Blandy reads out a short story by playwright Joel Tan. The enviro-political opinions of a postcolonial crocodile are as good a premise for a short story as any, I suppose, but the film isn’t particularly well integrated with the rest of the show.

Soil, Sinew and Bone, a visually arresting game of digital mirrors composed of rural footage from Screen Archive South East, equates modern agriculture and warfare. That there is an historical connection is undeniable: the chemist Fritz Haber received the Nobel Prize in Chemistry in 1918 for his invention of the Haber–Bosch process, a method of synthesising ammonia from nitrogen and hydrogen. That ammonia, a fertiliser, can be used in the manufacture of explosives, is an irony familiar to any GCSE student, though it’s by no means obvious why agriculture should be left morally tainted by it.

Alas, Blandy can’t resist the sirens of overstatement. We eat, he says, “while others scratch for existence in the baked earth.” Never mind that since 1970, hunger in the developing world has more than halved, and that China saw its hunger level fall from a quarter of its vast population to less than a tenth by 2016 — all overwhelmingly thanks to Haber-Bosch.

Defenders of the artist’s right to be miserable in the face of history will complain that I am taking “Atomic Light” far too literally — to which I would respond that I’m taking it seriously. Bad faith is bad faith whichever way you cut it. If in your voiceover you dub Walt Disney’s Mickey “this mouse of empire”, if you describe some poor soul’s carefully tended English garden as the “pursuit of an unnatural perfection wreathed in poisons”, if you use footage of a children’s tea party to hector your audience about wheat and sugar, and if you cut words and images together to suggest that some jobbing farmer out shooting rabbits was a landowner on the lookout for absconding workers, then you are simply piling straws on the camel’s back.

Thank goodness, then, for Sunspot, Blandy’s fourth and visually much simpler film, which juxtaposes the lives and observations of two real-life solar astronomers, Joseph Hiscox in Los Angeles and Yukiaki Tanaka in Tokyo, who each made drawings of the sun on the day the Hiroshima bomb dropped.

Here’s a salutary and saving reminder that, to make art, you’re best off letting the truth speak for itself.

How to appropriate a plant

Visiting “Rooted Beings” at Wellcome Collection, London for the Telegraph, 24 March 2022

“Take a moment to draw a cosmic breath with your whole body, slower than any breath you have ever taken in your life.” Over headphones, Eduardo Navarro and philosopher Michael Marder guide my contemplation of Navarro’s drawings, where human figures send roots into the ground and reach with hands-made-leaves into the sky. They’re drawn with charcoal and natural pigments on envelopes containing the seeds of London plane trees. When the exhibition is over, the envelopes will be planted in a rite of burial and rebirth.

What are plants? Garden-centre curios? Magical objects? Medicines? Or trade goods? It’s hard for us to think of plants outside of the uses we put them to, and the five altars of Vegetal Matrix by Chilean artist Patricia Dominguez celebrate (if that is quite the word) their multiple social identities. One shrine contains a medicinal bark, quinine; in another, flowers of toxic Brugmansia, an assassin’s stock-in-trade; in the third sits a mandrake root, carved into the shape of a woman. Dominguez’s artistic research sits at the centre of a section of the exhibition entitled “Colonial violence and indigenous knowledge”.

Going by the show’s interpretative material, the narrowly extractive use of plants is a white western idea. But the most exciting exhibits reveal otherwise. From 400 CE there’s a fragment of the world’s earliest surviving herbal, painted on papyrus (we have always admired plants for what we could get out of them). Also from the Wellcome archives, there’s a complex map describing the vegetal “middle realm” of Jain cosmology — obviously a serious effort to establish an intellectual hold on the blooming and buzzing confusion of the plant world. Trees and their associated wildlife are reduced to deceptively simple and captivating shapes in the work on paper of the artist Joseca, whose people, the Yanomami, have been extracting foods and medicines from the Amazon rainforest for generations. His vivid plant portraits are not some classic Linnaean effort at the classification of species, but emotionally they’re not far off. Joseca is establishing categories, not tearing them down.

Bracketing the section about how imperial forces have “appropriated” useful plants (and thank goodness for that! cries the crabbed reviewer, thinking of his stomach as usual) are more introspective spaces. Ingela Ihrman’s enormous Passion Flower costume dominates the first room: time your visit just right, and you will find the artist inhabiting the flower, and may even get to drink her nectar. Not much less playful are the absurdist visions — in textile, embroidery and collage — of Gözde Ilkin, for whom categories (between human and plant, between plant and fungi) exist to be demolished, creating peculiar, and peculiarly endearing vegetal-anthropoid forms.

“Wilderness” is the theme of the final room. There’s real desperation in the RESOLVE Collective’s effort to knap and chisel their way towards a wild relationship with the urban environment. Made of broken masonry and pipework, crates and split paving slabs, this, perhaps, is a glimpse of the Hobbesian wilderness that civilisation keeps at bay.

Nearby, Den 3 is the artist SOP’s wry evocation of the old romantic mistake, cladding misanthropy in the motley of the greenwood. Rather than vegetate on the couch during the Covid-19 pandemic, SOP built a den in nearby woods and there enjoyed a sort of pint-size “Walden Pond” experience — until lockdown relaxed and others began visiting the wood.

At its simplest, Rooted Beings evokes a pleasant fantasy of human-vegetable co-existence. But forget its emollient exterior: at its best this show is deeply uncanny. The gulfs that exist between plant and animal, between species and species, between us and other, serve their own purposes, and attempts to do as Navarro and Marder suggest, and experience the world as a plant might experience it, are as likely to end in horror as in delight. “As you are very slowly dying while also staying alive,” they explain, “your body becomes the soil you are living in.” Crikey.

82.8 per cent perfect

Visiting Amazônia at London’s Science Museum for the Telegraph, 13 October 2021

The much-garlanded Brazilian photographer Sebastião Salgado is at London’s Science Museum to launch a seven-plus-years-in-the-making exhibition of photographs from Amazônia — and, not coincidentally, there’s barely a fortnight to go before the 26th United Nations Climate Change Conference convenes in Glasgow.

Salgado speaks to the urgency of the moment. We must save the Amazon rainforest for many reasons, but chiefly because the world’s rainfall patterns depend on it. We should stop buying Amazonian wood; we should stop buying beef fed on Amazonian soya; we should stop investing in companies who have interests in Amazonian mining.

There are only so many ways to say these things, and only so many times a poor mortal can hear them. On the face of it, Salgado’s enormous exhibition, set to an immersive soundscape by Seventies new-age pioneer Jean-Michel Jarre, sounds more impressive than impactful. Salgado is everyone’s idea of an engaged artist — his photographs of workers at the Serra Pelada gold mine in Brazil are world-famous — but is it even in us, now, to feel more concerned about the rainforest?

Turns out that it is. Jarre’s music plays a significant part in this show, curated and designed by Sebastião’s wife, Lélia Wanick Salgado. Assembled from audio archives in Geneva, it manages to be both politely ambient and often quite frightening in its dizzying assemblage of elemental roars (touches of Jóhann Jóhannsson, there), bird calls, forest sounds and human voices. And Salgado’s epic visions of the Amazon more than earn such Sturm und Drang.

This is not an exhibition about the 17.2 per cent of the rainforest that is already lost to us. It’s not about logging companies or soy farms, gold mines or cattle ranches. It’s about what’s left. Ecologically the region’s losses are catastrophic; but there’s still plenty to save and, for a photographer, plenty to see.

Here, rendered in Salgado’s exquisitely detailed, thumpingly immediate monochrome, is Anavilhanas, the world’s largest freshwater archipelago, a wetland so complex and mutable, no-one has ever been able to settle there. There are mountains, “inselbergs”, rising out of the forest like volcanic islands in some fantastical South China Sea. There are bravura performances of the developer’s art: rivers turned to tin-foil, and leaves turned to photographic grain, and rainstorms turned to atom-bomb explosions, and clouds caught at angles that reveal what they truly are: airborne rivers. As they spill over the edge of Brazil, they dump more moisture into the Atlantic than the mighty Amazon itself.

Dotted about the exhibition space are oval “forest shelters”: dwellings for intimate portraits of twelve different forest peoples. Salgado acknowledges this anthropological effort merely scratches the surface: Amazonia’s 192 distinct groups make it the most culturally and linguistically diverse region on the planet. Capturing and communicating that diversity conveys the scale of the region even better than those cloud shots.

The Ashaninka used to trade with the Incas. When the Spanish came, their supreme god Pawa turned all the wise men into animals to keep the region’s secrets. The highland Korubo (handy with a war club) became known as mud people, lathering themselves with the stuff against mosquitoes whenever they came down off their hill. The Zo’é place nuts in the mouths of the wild pigs they have killed so the meal can join in with its own feast. The Suruwahá quite happily consume the deadly spear-tip toxin timbó, figuring it’s better to die young and healthy (and many do).

The more we explore, the more we find it’s the profound and sometimes disturbing differences between these peoples that matter; not their surface exoticism. In the end, faced with such extraordinary diversity, we can only look in the mirror and admit our own oddness, and with it our kinship. We, too — this is the show’s deepest lesson — are, in every possible regard, like the playful, charming, touching, sometimes terrifying subjects of Salgado’s portraits, quite impossibly strange.

The Art of Conjecturing

Reading Katy Börner’s Atlas of Forecasts: Modeling and mapping desirable futures for New Scientist, 18 August 2021

My leafy, fairly affluent corner of south London has a traffic congestion problem, and to solve it, there’s a plan to close certain roads. You can imagine the furore: the trunk of every kerbside tree sports a protest sign. How can shutting off roads improve traffic flows?

The German mathematician Dietrich Braess answered this one back in 1968, with a graph that kept track of travel times and densities for each road link, and distinguished between flows that are optimal for all cars, and flows optimised for each individual car.

On a Paradox of Traffic Planning is a fine example of how a mathematical model predicts and resolves a real-world problem.
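Braess’s result is easy to verify with the toy network that usually illustrates it. The sketch below uses the standard textbook numbers rather than Braess’s own 1968 figures, so treat it as an illustration of the idea, not a reconstruction of his paper: 4,000 drivers, two congestible roads, two fixed-delay roads, and a free shortcut that makes every individual’s selfish best choice collectively worse.

```python
# A minimal sketch of Braess's paradox (textbook numbers, not Braess's
# originals). 4000 drivers travel from S to E. Roads S->A and B->E take
# t/100 minutes when t drivers use them; A->E and S->B take a flat 45.

DRIVERS = 4000

def equilibrium_minutes(with_shortcut: bool) -> float:
    if not with_shortcut:
        # Drivers split evenly between S-A-E and S-B-E:
        # each route costs 2000/100 + 45 = 65 minutes.
        return (DRIVERS / 2) / 100 + 45
    # A free A->B link makes S-A-B-E dominant for every individual:
    # t/100 never exceeds 40 minutes, which always beats a flat 45.
    # So all 4000 drivers pile onto both congestible roads:
    # 4000/100 + 0 + 4000/100 = 80 minutes.
    return DRIVERS / 100 + 0 + DRIVERS / 100

print("without the extra road:", equilibrium_minutes(False), "min")  # 65.0
print("with the extra road:   ", equilibrium_minutes(True), "min")   # 80.0
```

Close the extra road, in other words, and every driver’s journey gets shorter: exactly the counter-intuitive point the protest signs miss.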

This and over 1,300 other models, maps and forecasts feature in the references to Katy Börner’s latest atlas, which is the third to be derived from Indiana University’s traveling exhibit Places & Spaces: Mapping Science.

Atlas of Science: Visualizing What We Know (2010) revealed the power of maps in science; Atlas of Knowledge: Anyone Can Map (2015) focused on visualisation. In her third and final foray, Börner is out to show how models, maps and forecasts inform decision-making in education, science, technology, and policymaking. It’s a well-structured, heavyweight argument, supported by descriptions of over 300 model applications.

Some entries, like Bernard H. Porter’s Map of Physics of 1939, earn their place purely through their beauty and the insights they offer. Mostly, though, Börner chooses models that were applied in practice and made a positive difference.

Her historical range is impressive. We begin with equations (did you know Newton’s law of universal gravitation has been applied to human migration patterns and international trade?) and move through the centuries, tipping a wink to Jacob Bernoulli’s “The Art of Conjecturing” of 1713 (which introduced probability theory) and James Clerk Maxwell’s 1868 paper “On Governors” (an early gesture at cybernetics) until we arrive at our current era of massive computation and ever-more complex model building.

It’s here that interesting questions start to surface. To forecast the behaviour of complex systems, especially those which contain a human component, many current researchers reach for something called “agent-based modeling” (ABM) in which discrete autonomous agents interact with each other and with their common (digitally modelled) environment.

Heady stuff, no doubt. But, says Börner, “ABMs in general have very few analytical tools by which they can be studied, and often no backward sensitivity analysis can be performed because of the large number of parameters and dynamical rules involved.”

In other words, an ABM model offers the researcher an exquisitely detailed forecast, but no clear way of knowing why the model has drawn the conclusions it has — a risky state of affairs, given that all its data is ultimately provided by eccentric, foible-ridden human beings.
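For readers who have never met one, here is a minimal sketch of what an agent-based model looks like in code. It is a toy rumour-spreading model with invented parameters, not one of the examples from Börner’s atlas: each agent follows a couple of local rules, and the aggregate numbers simply emerge.

```python
# A toy agent-based model (all parameters are illustrative assumptions):
# agents wander a grid at random and pass a "rumour" to agents they meet.
import random

random.seed(1)
SIZE, N_AGENTS, STEPS, P_PASS = 20, 120, 60, 0.5

agents = [{"x": random.randrange(SIZE),
           "y": random.randrange(SIZE),
           "informed": i == 0}            # one initial rumour-carrier
          for i in range(N_AGENTS)]

for step in range(STEPS):
    for a in agents:                       # rule 1: take a random step
        a["x"] = (a["x"] + random.choice((-1, 0, 1))) % SIZE
        a["y"] = (a["y"] + random.choice((-1, 0, 1))) % SIZE
    cells = {}
    for a in agents:                       # group agents sharing a cell
        cells.setdefault((a["x"], a["y"]), []).append(a)
    for group in cells.values():           # rule 2: local transmission
        if any(a["informed"] for a in group):
            for a in group:
                if not a["informed"] and random.random() < P_PASS:
                    a["informed"] = True
    if step % 10 == 0:
        print(step, sum(a["informed"] for a in agents), "informed")
```

Nothing in the code states the overall adoption curve that the printed counts trace out; it emerges from the local rules. And, as Börner warns, working backwards from that output to the rules and parameters that produced it is the hard part.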

Börner’s sumptuous, detailed book tackles issues of error and bias head-on, but she left me tugging at a still bigger problem, represented by those irate protest signs smothering my neighbourhood.

If, over 50 years since the maths was published, reasonably wealthy, mostly well-educated people in comfortable surroundings have remained ignorant of how traffic flows work, what are the chances that the rest of us, industrious and preoccupied as we are, will ever really understand, or trust, all the many other models which increasingly dictate our civic life?

Börner argues that modelling data can counteract misinformation, tribalism, authoritarianism, demonization, and magical thinking.

I can’t for the life of me see how. Albert Einstein said, “Everything should be made as simple as possible, but no simpler.” What happens when a model reaches such complexity that only an expert can really understand it, or when even the expert can’t be entirely sure why the forecast is saying what it’s saying?

We have enough difficulty understanding climate forecasts, let alone explaining them. To apply these technologies to the civic realm raises a host of problems that have nothing to do with the technology, and everything to do with whether anyone will be listening.

Sod provenance

Is the digital revolution that Pixar began with Toy Story stifling art – or saving it? An article for the Telegraph, 24 July 2021

In 2011 the Westfield shopping mall in Stratford, East London, acquired a new public artwork: a digital waterfall by the Shoreditch-based Jason Bruges Studio. The liquid-crystal facets of the 12 metre high sculpture form a subtle semi-random flickering display, as though water were pouring down its sides. Depending on the shopper’s mood, this either slakes their visual appetite, or leaves them gasping for a glimpse of real rocks, real water, real life.

Over its ten-year life, Bruges’s piece has gone from being a comment about natural processes (so soothing, so various, so predictable!) to being a comment about digital images, a nagging reminder that underneath the apparent smoothness of our media lurks the jagged line and the stair-stepped edge, the grid, the square: the pixel, in other words.

We suspect that the digital world is grainier than the real, coarser, more constricted, and stubbornly rectilinear. But this is a prejudice, and one that’s neatly punctured by a new book by electrical engineer and Pixar co-founder Alvy Ray Smith, “A Biography of the Pixel”. This eccentric work traces the intellectual genealogy of Toy Story (Pixar’s first feature-length computer animation in 1995) over bump-maps and around occlusions, along traced rays and through endless samples, computations and transformations, back to the mathematics of the eighteenth century.

Smith’s Whig history is a little hard to take — as though, say, Joseph Fourier’s efforts in 1822 to visualise how heat passed through solids were merely a way-station on the way to Buzz Lightyear’s calamitous launch from the banister rail — but it’s a superb short-hand in which to explain the science.

We can use Fourier’s mathematics to record an image as a series of waves. (Visual patterns, patterns of light and shade and movement, “can be represented by the voltage patterns in a machine,” Smith explains.) And we can recreate these waves, and the image they represent, with perfect fidelity, so long as we have a record of the points at the crests and troughs of each wave.

The locations of these high- and low-points, recorded as numerical coordinates, are pixels. (The little dots you see if you stare far too closely at your computer screen are not pixels; strictly speaking, they’re “display elements”.)

Digital media do not cut up the world into little squares. (Only crappy screens do that). They don’t paint by numbers. On the contrary, they faithfully mimic patterns in the real world.
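Smith’s claim is easy to demonstrate in a few lines. The signal, sample rate and window below are my own toy values, nothing from the book: sample a band-limited wave often enough and the samples alone let you read off its constituent waves and rebuild it at points you never measured.

```python
# A minimal sketch of sampling in the sense Smith describes (toy values):
# a signal made of waves below half the sampling rate is fully captured
# by its samples, which here play the role of pixels.
import numpy as np

fs, duration = 8.0, 2.0                       # sample rate (Hz), window (s)
n = int(fs * duration)                        # 16 samples
t = np.arange(n) / fs

def signal(x):                                # two waves, both below fs/2
    return np.sin(2 * np.pi * 1.0 * x) + 0.5 * np.cos(2 * np.pi * 3.0 * x)

samples = signal(t)                           # the "pixels"

spectrum = np.fft.rfft(samples)               # read the waves back out
freqs = np.fft.rfftfreq(n, 1 / fs)
amps = 2 * np.abs(spectrum) / n
print([(float(f), round(float(a), 3)) for f, a in zip(freqs, amps) if a > 1e-9])
# -> [(1.0, 1.0), (3.0, 0.5)]: the original waves, recovered exactly

# Rebuild the continuous signal from the recovered waves and compare it
# with the original at points that were never sampled.
t_fine = np.linspace(0, duration, 1000, endpoint=False)
rebuilt = sum(a * np.cos(2 * np.pi * f * t_fine + np.angle(spectrum[i]))
              for i, (f, a) in enumerate(zip(freqs, amps)) if a > 1e-9)
print("max error:", float(np.max(np.abs(rebuilt - signal(t_fine)))))  # ~1e-15
```

Those sixteen sample values are a complete record of the wave, which is Smith’s point: pixels are samples of an image’s waves, not little squares cut out of it.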

This leads Smith to his wonderfully upside-down-sounding catch-line: “Reality,” he says, ”is just a convenient measure of complexity.”

Once pixels are converted to images on a screen, they can be used to create any world, rooted in any geometry, and obeying any physics. And yet these possibilities remain largely unexplored. Almost every computer animation is shot through a fictitious “camera lens”, faithfully recording a Euclidean landscape. Why are digital animations so conservative?

I think this is the wrong question: its assumptions are faulty. I think the ability to ape reality at such high fidelity creates compelling and radical possibilities of its own.

I discussed some of these possibilities with Paul Franklin, co-founder of the SFX company DNEG, who won Oscars for his work on Christopher Nolan’s sci-fi blockbusters Interstellar (2014) and Inception (2010). Franklin says the digital technologies appearing on film sets in the past decade — from lighter cameras and cooler lights to 3-D printed props and LED front-projection screens — are positively disrupting the way films are made. They are making film sets creative spaces once again, and giving the director and camera crew more opportunities for on-the-fly creative decision making. “We used a front-projection screen on the film Interstellar, so the actors could see what visual effects they were supposed to be responding to,” he remembers. “The actors loved being able to see the super-massive black hole they were supposed to be hurtling towards. Then we realised that we could capture an image of the rotating black hole’s disc reflecting in Matthew McConaughey’s helmet: now that’s not the sort of shot you plan.”

Now those projection screens are interactive. Franklin explains: “Say I’m looking down a big corridor. As I move the camera across the screen, instead of it flattening off and giving away the fact that it’s actually just a scenic backing, the corridor moves with the correct perspective, creating the illusion of a huge volume of space beyond the screen itself.”

Effects can be added to a shot in real-time, and in full view of cast and crew. More to the point, what the director sees through their viewfinder is what the audience gets. This encourages the sort of disciplined and creative filmmaking Méliès and Chaplin would recognise, and spells an end to the deplorable industry habit of kicking important creative decisions into the long grass of post-production.

What’s taking shape here isn’t a “good enough for TV” reality. This is a “good enough to reveal truths” reality. (Gargantua, the spinning black hole at Interstellar’s climax, was calculated and rendered so meticulously, it ended up in a paper for the journal Classical and Quantum Gravity.) In some settings, digital facsimile is becoming, literally, a replacement reality.

In 2012 the EU High Representative Baroness Ashton gave a physical facsimile of the burial chamber of Tutankhamun to the people of Egypt. The digital studio responsible for its creation, Factum Foundation, has been working in the Valley of the Kings since 2001, creating ever-more faithful copies of places that were never meant to be visited. They also print paintings (by Velázquez, by Murillo, by Raphael…) that are indistinguishable from the originals.

From the perspective of this burgeoning replacement reality, much that is currently considered radical in the art world appears no more than a frantic shoring-up of old ideas and exhausted values. A couple of days ago Damien Hirst launched The Currency, a physical set of dot paintings, the digitally tokenised images of which can be purchased, traded, and exchanged for the real paintings.

Eventually the purchaser has to choose whether to retain the token, or trade it in for the physical picture. They can’t own both. This, says Hirst, is supposed to challenge the concept of value through money and art. Every participant is confronted with their perception of value, and how it influences their decision.

But hang on: doesn’t money already do this? Isn’t this what money actually is?

It can be no accident that non-fungible tokens (NFTs), which make bits of the internet ownable, have emerged even as the same digital technologies are actually erasing the value of provenance in the real world. There is nothing sillier, or more dated looking, than the Neues Museum’s scan of its iconic bust of Nefertiti, released free to the public after a complex three-year legal battle. It comes complete with a copyright license in the bottom of the bust itself — a copyright claim to the scan of a 3,000-year-old sculpture created 3,000 miles away.

Digital technologies will not destroy art, but they will erode and ultimately extinguish the value of an artwork’s physical provenance. Once facsimiles become indistinguishable from originals, then originals will be considered mere “first editions”.

Of course literature has thrived for many centuries in such an environment; why should the same environment damage art? That would happen only if art had somehow already been reduced to a mere vehicle for financial speculation. As if!

 

Dispersing the crowds

Considering the fate of museums and galleries under covid lockdown for New Scientist, 3 February 2021

In November 2020, the International Council of Museums estimated that 6.1 per cent of museums globally were resigned to permanent closure due to the pandemic. The figure was welcomed with enthusiasm: in May, it had reported nearly 13 per cent faced demise.

Something is changing for the better. This isn’t a story about how galleries and museums have used technology to save themselves during lockdowns (many didn’t try; many couldn’t afford to try; many tried and failed). But it is a story of how they weathered lockdowns and ongoing restrictions by using tech to future-proof themselves.

One key tool turned out to be virtual tours. Before 2020, they were under-resourced novelties; quickly, they became one of the few ways for galleries and museums to engage with the public. The best is arguably the one through the Tomb of Pharaoh Ramses VI, by the Egyptian Tourism Authority and Cairo-based studio VRTEEK.

And while interfaces remain clunky, they improved throughout the year, as exhibition-goers can see in the 360-degree virtual tour created by the Museum of Fine Arts Ghent in Belgium to draw people through its otherwise-mothballed Van Eyck exhibition.

The past year has also forced the hands of curators, pushing them into uncharted territory where the distinctions between the real and the virtual become progressively more ambiguous.

With uncanny timing, the V&A in London had chosen Lewis Carroll’s Alice books for its 2020 summer show. Forced into the virtual realm by covid-19 restrictions, the V&A, working with HTC Vive Arts, created a VR game based in Wonderland, where people can follow their own White Rabbit, solve the caterpillar’s mind-bending riddles, visit the Queen of Hearts’ croquet garden and more. Curious Alice is available through Viveport; the real-world show is slated to open on 27 March.

Will museums grow their online experiences into commercial offerings? Almost all such tours are free at the moment, or are used to build community. If this format is really going to make an impact, it will probably have to develop a consolidated subscription service – a sort of arts Netflix or Spotify.

What the price point should be is anyone’s guess. It doesn’t help for institutions to muddy the waters by calling their video tours virtual tours.

But the advantages are obvious. The crowded conditions in galleries and museums have been miserable for years – witness the Mona Lisa, imprisoned behind bulletproof glass under low-level diffuse lighting and protected by barricades. Art isn’t “available” in any real sense when you can only spend 10 seconds with a piece. I can’t be alone in having staggered out of some exhibitions with no clear idea of what I had seen or why. Imagine if that was your first experience of fine art.

Why do we go to museums and galleries expecting to see originals? The Victorians didn’t. They knew the value of copies and reproductions. In the US in particular, museums lacked “real” antiquities, and plaster casts were highly valued. The casts aren’t indistinguishable from the original, but what if we produced copies that were exact in information as well as appearance? As British art critic Jonathan Jones says: “This is not a new age of fakery. It’s a new era of knowledge.”

With lidar, photogrammetry and new printing techniques, great statues, frescoes and chapels can be recreated anywhere. This promises to spread the crowds and give local museums and galleries a new lease of life. At last, they can become places where we think about art – not merely gawp at it.