The revolt against scentlessness has been gathering for a while. Muchembled namechecks avant-garde perfumes with names like Bat and Rhinoceros. A dear friend of mine favours Musc Kublai Khan for its faecal notes. Another spends a small fortune to smell like cat’s piss. Right now I’m wearing Andy Tauer’s Orange Star — don’t approach unless you like Quality Street orange cremes macerated in petrol…
The fall from grace of Nicolas Fouquet, Louis XIV’s superintendent of finances, was spectacular and swift. In 1661 he held a fete to welcome the king to his gardens at Vaux-le-Vicomte. The affair was meant to flatter, but its sumptuousness only served to convince the absolutist monarch that Fouquet was angling for power. “On 17 August, at six in the evening Fouquet was the King of France,” Voltaire observed; “at two in the morning he was nobody.”
Soon afterwards, Fouquet’s gardens were grubbed up in an act, not of vandalism, but of expropriation: “The king’s men carefully packed the objects into crates and hauled them away to a marshy town where Louis was intent on building his own dream palace,” the Israeli-born US historian Amir Alexander tells us. “It was called Versailles.”
Proof! explains how French formal gardens reflected, maintained and even disseminated the political ideologies of French monarchs, from “the Affable” Charles VIII in the 15th century to poor doomed Louis XVI, destined for the guillotine in 1793. Alexander claims these gardens were the concrete and eloquent expression of the idea that “geometry was everywhere and structured everything — from physical nature to human society, the state, and the world.”
If you think geometrical figures are abstract artefacts of the human mind, think again. Their regularities turn up in the natural world time and again, leading classical thinkers to hope that “underlying the boisterous chaos and variety that we see around us there may yet be a rational order, which humans can comprehend and even imitate.”
It is hard for us now to read celebrations of nature into the rigid designs of 16th-century Fontainebleau or the Tuileries, but we have no problem reading them as expressions of political power. Geometers are a tyrant’s natural darlings. Euclid spent many a happy year in Ptolemaic Egypt. King Hiero II of Syracuse looked out for Archimedes. Geometers were ideologically useful figures, since the truths they uncovered were static and hierarchical. In the Republic, Plato extols the virtues of geometry and advocates for rigid class politics in practically the same breath.
It is not entirely clear, however, how effective these patterns actually were as political symbols. Even as Thomas Hobbes was modishly emulating the logical structure of Euclid’s (geometrical) Elements in the composition of his (political) Leviathan (demonstrating, from first principles, the need for monarchy), the Duc de Saint-Simon, a courtier and diarist, was having a thoroughly miserable time of it in the gardens of Louis XIV’s Versailles: “the violence everywhere done to nature repels and wearies us despite ourselves,” he wrote in his diary.
So not everyone was convinced that Versailles, and gardens of that ilk, revealed the inner secrets of nature.
Of the strictures of classical architecture and design, Alexander comments that today, “these prescriptions seem entirely arbitrary”. I’m not sure that’s right. Classical art and architecture is beautiful, not merely for its antiquity, but for the provoking way it toys with the mechanics of visual perception. The golden mean isn’t “arbitrary”.
It was fetishized, though: Alexander’s dead right about that. For centuries, Versailles was the ideal to which Europe’s grand urban projects aspired, and colonial new-builds could and did out-do Versailles, at least in scale. Of the work of Lutyens and Baker in their plans for the creation of New Delhi, Alexander writes: “The rigid triangles, hexagons, and octagons created a fixed, unalterable and permanent order that could not be tampered with.”
He’s setting colonialist Europe up for a fall: that much is obvious. Even as New Delhi and Saigon’s Boulevard Norodom and all the rest were being erected, back in Europe mathematicians Janos Bolyai, Carl Friedrich Gauss and Bernhard Riemann were uncovering new kinds of geometry to describe any curved surface, and higher dimensions of any order. Suddenly the rigid, hierarchical order of the Euclidean universe was just one system among many, and Versailles and its forerunners went from being diagrams of cosmic order to being grand days out with the kids.
Well, Alexander needs an ending, and this is as good a place as any to conclude his entertaining, enlightening, and admirably well-focused introduction to a field of study that, quite frankly, is more rabbit-hole than grass.
I was in Washington the other day, sweating my way up to the Lincoln Memorial. From the top I measured the distance, past the needle of the Washington Monument, to Capitol Hill. Major Pierre Charles L’Enfant designed all this: it’s a quintessential product of the Versailles tradition. Alexander calls it “nothing less than the Constitutional power structure of the United States set in stone, pavement, trees, and shrubs.”
For nigh-on 250 years tourists have been slogging from one end of the National Mall to the other, re-enacting the passion of the poor Duc de Saint-Simon in Versailles, who complained that “you are introduced to the freshness of the shade only by a vast torrid zone, at the end of which there is nothing for you but to mount or descend.”
Not any more, though. Skipping down the steps, I boarded a bright red electric Uber scooter and sailed electrically east toward Capitol Hill. The whole dignity-dissolving charade was made possible (and cheap) by map-making algorithms performing geometrical calculations that Euclid himself would have recognised. Because the ancient geometer’s influence on our streets and buildings hasn’t really vanished. It’s been virtualised. Algorithmized. Turned into a utility.
Now geometry’s back where it started: just one more invisible natural good.
Reading Science and the Good: The Tragic Quest for the Foundations of Morality
by James Davison Hunter and Paul Nedelisky (Yale University Press) for the Telegraph, 28 October 2019
Objective truth is elusive and often surprisingly useless. For ages, civilisation managed well without it. Then came the sixteenth and seventeenth centuries, the Wars of Religion and the Thirty Years War: atrocious conflicts that robbed Europe of up to a third of its population.
Something had to change. So began a half-a-millennium-long search for a common moral compass: something to keep us from wringing each other’s necks. The 18th-century French philosopher Condorcet, writing in 1794, expressed the evergreen hope that empiricists, applying themselves to the study of morality, would be able “to make almost as sure progress in these sciences as they had in the natural sciences.”
Today, are we any nearer to understanding objectively how to tell right from wrong?
No. So say James Davison Hunter, a sociologist who in 1991 slipped the term “culture wars” into American political debate, and Paul Nedelisky, a recent philosophy PhD, both from the University of Virginia. For sure, “a modest descriptive science” has grown up to explore our foibles, strengths and flaws, as individuals and in groups. There is, however, no way science can tell us what ought to be done.
Science and the Good is a closely argued, always accessible riposte to those who think scientific study can explain, improve, or even supersede morality. It tells a rollicking good story, too, as it explains what led us to our current state of embarrassed moral nihilism.
“What,” the essayist Michel de Montaigne asked, writing in the late 16th century, “am I to make of a virtue that I saw in credit yesterday, that will be discredited tomorrow, and becomes a crime on the other side of the river?”
Montaigne’s times desperately needed a moral framework that could withstand the almost daily schisms and revisions of European religious life following the Protestant Reformation. Nor was Europe any longer a land to itself. Trade with other continents was bringing Europeans into contact with people who, while eminently businesslike, held to quite unfamiliar beliefs. The question was (and is), how do we live together at peace with our deepest moral differences?
The authors have no simple answer. The reason scientists keep trying to formulate one is the same reason the farmer tried teaching his sheep to fly in the Monty Python sketch: “Because of the enormous commercial possibilities should he succeed.” Imagine conjuring up a moral system that was common, singular and testable: world peace would follow at an instant!
But for every Jeremy Bentham, measuring moral utility against an index of human happiness to inform a “felicific calculus”, there’s a Thomas Carlyle, pointing out the crashing stupidity of the enterprise. (Carlyle called Bentham’s 18th-century utilitarianism “pig-philosophy”, since happiness is the sort of vague, unspecific measure you could just as well apply to animals as to people.)
Hunter and Nedelisky play Carlyle to the current generation of scientific moralists. They range widely in their criticism, and are sympathetic to a fault, but to show what they’re up to, let’s have some fun and pick a scapegoat.
In Moral Tribes (2014), Harvard psychologist Joshua Greene sings Bentham’s praises: “utilitarianism becomes uniquely attractive,” he asserts, “once our moral thinking has been objectively improved by a scientific understanding of morality…”
At worst, this is a statement that eats its own tail. At best, it’s Greene reducing the definition of morality to fit his own specialism, replacing moral goodness with the merely useful. This isn’t nothing, and is at least something which science can discover. But it is not moral.
And if Greene decided tomorrow that we’d all be better off without, say, legs, practical reason, far from faulting him, could only show us how to achieve his goal in the most efficient manner possible. The entire history of the 20th century should serve as a reminder that this kind of thinking — applying rational machinery to a predetermined good — is a joke that palls extremely quickly. Nor are vague liberal gestures towards “social consensus” comforting, or even welcome. As the authors point out, “social consensus gave us apartheid in South Africa, ethnic cleansing in the Balkans, and genocide in Armenia, Darfur, Burma, Rwanda, Cambodia, Somalia, and the Congo.”
Scientists are on safer ground when they attempt to explain how our moral sense may have evolved, arguing that morals aren’t imposed from above or derived from well-reasoned principles, but are values derived from reactions and judgements that improve the odds of group survival. There’s evidence to back this up and much of it is charming. Rats play together endlessly; if the bigger rat wrestles the smaller rat into submission more than three times out of five, the smaller rat trots off in a huff. Hunter and Nedelisky remind us that Capuchin monkeys will “down tools” if experimenters offer them a reward smaller than one they’ve already offered to other Capuchin monkeys.
What does this really tell us, though, beyond the fact that somewhere, out there, is a lawful corner of necessary reality which we may as well call universal justice, and which complex creatures evolve to navigate?
Perhaps the best scientific contribution to moral understanding comes from studies of the brain itself. Mapping the mechanisms by which we reach moral conclusions is useful for clinicians. But it doesn’t bring us any closer to learning what it is we ought to do.
Sociologists since Edward Westermarck in 1906 have shown how a common (evolved?) human morality might be expressed in diverse practices. But over this is the shadow cast by moral skepticism: the uneasy suspicion that morality may be no more than an emotive vocabulary without content, a series of justificatory fabrications. “Four legs good,” as Snowball had it, “two legs bad.”
But even if it were shown that no-one in the history of the world ever committed a truly selfless act, the fact remains that our mythic life is built, again and again, precisely around an act of self-sacrifice. Pharaonic Egypt had Osiris. Europe and its holdings, Christ. Even Hollywood has Harry Potter. Moral goodness is something we recognise in stories, and something we strive for in life (and if we don’t, we feel bad about ourselves). Philosophers and anthropologists and social scientists have lots of interesting things to say about why this should be so. The life sciences crew would like to say something, also.
But as this generous and thoughtful critique demonstrates, and to quite devastating effect, they just don’t have the words.
Reading New York journalist Andrew Blum’s new book has cured me of a foppish and annoying habit. I no longer dangle an umbrella off my arm on sunny days, tripping up my fellow commuters before (inevitably) mislaying the bloody thing on the train to Coulsdon Town. Very late, and to my considerable embarrassment, I have discovered just how reliable the weather forecast is.
My thoroughly English prejudice against the dark art of weather prediction was already set by the time the European Centre for Medium-Range Weather Forecasts opened in Reading in 1979. Then the ECMWF claimed to be able to see three days into the future. Six years later, it could see five days ahead. It knew about Sandy, the deadliest hurricane of 2012, eight days ahead, and it expects to predict high-impact events a fortnight before they happen by the year 2025.
The ECMWF is a world leader, but it’s not an outlier. Look at the figures: weather forecasts have been getting consistently better for 40 straight years. Blum reckons this makes the current global complex of machines, systems, networks and acronyms (and there are lots of acronyms) “a high point of science and technology’s aspirations for society”.
He knows this is a minority view: “The weather machine is a wonder we treat as a banality,” he writes: “a tool that we haven’t yet learned to trust.” The Weather Machine is his attempt to convey the technical brilliance and political significance of an achievement that hides in plain sight.
The machine’s complexity alone is off all familiar charts, and sets Blum a significant challenge. “As a rocket scientist at the Jet Propulsion Laboratory put it to me… landing a spacecraft on Mars requires dealing with hundreds of variables,” he writes; “making a global atmospheric model requires hundreds of thousands.” Blum does an excellent job of describing how meteorological theory and observation were first stitched together, and why even today their relationship is a stormy one.
His story opens in heroic times, with Robert FitzRoy, one of his more engaging heroes. FitzRoy is best remembered for captaining HMS Beagle and weathering the puppyish enthusiasm of a young Charles Darwin. But his real claim to fame is as a meteorologist. He dreamt up the term “forecast”, turned observations into predictions that saved sailors’ lives, and foresaw with clarity what a new generation of naval observers would look like. Distributed in space and capable of communicating instantaneously with each other, they would be “as if an eye in space looked down on the whole North Atlantic”.
You can’t produce an accurate forecast from observation alone, however. You also need a theory of how the weather works. The Norwegian physicist Vilhelm Bjerknes came up with the first mathematical model of the weather: a set of seven interlinked partial differential equations that handled the fact that the atmosphere is a far from ideal fluid. Sadly, Bjerknes’ model couldn’t yet predict anything — as he himself said, solutions to his equations “far exceed the means of today’s mathematical analysis”. As we see our models of the weather evolve, so we see works of individual genius replaced by systems of machine computation. In the observational realm, something similar happens: the heroic efforts of individual observers throw up trickles of insight that are soon subsumed in the torrent of data streaming from the orbiting artefacts of corporate and state engineering.
The American philosopher Timothy Morton dreamt up the term “hyperobject” to describe things that are too complex and numinous to describe in plain terms. Blum, whose earlier book was Tubes: Behind the Scenes at the Internet (2012), fancies his chances at explaining human-built hyperobjects in solid, clear terms, without recourse to metaphor and poesy. In this book, for example, he recognises the close affinity of military and meteorological infrastructures (the staple of many a modish book on the surveillance state), but resists any suggestion that they are the same system.
His sobriety is impressive, given how easy it is to get drunk on this stuff. In October 1946, technicians at the White Sands Proving Ground in New Mexico installed a camera in the nose cone of a captured V2, and by launching it, yielded photographs of a quarter of the US — nearly a million square miles banded by clouds “stretching hundreds of miles in rows like streets”. This wasn’t the first time a bit of weather kit acted as an expendable test in a programme of weapons development, and it certainly wasn’t the last. Today’s global weather system has not only benefited from military advancements in satellite positioning and remote sensing; it has made those systems possible. Blum allows that “we learned to see the whole earth thanks to the technology built to destroy the whole earth”. But he avoids paranoia.
Indeed, he is much more impressed by the way countries at hammer and tongs with each other on the political stage nevertheless collaborated closely and well on a global weather infrastructure. Point four of John F Kennedy’s famous 1961 speech on “Urgent National Needs” called for “a satellite system for worldwide weather observation”, and it wasn’t just militarily useful American satellites he had in mind for the task: in 1962 Harry Wexler of the U.S. Weather Bureau worked with his Soviet counterpart Viktor Bugaev on a report proposing a “World Weather Watch”, and by 1963 there was, Blum finds, “a conscious effort by scientists — on both sides of the Iron Curtain, in all corners of the earth — to design an integrated and coordinated apparatus” — this at a time when weather satellites were so expensive they could be justified only on national security grounds.
Blum’s book comes a little bit unstuck at the end. A final chapter that could easily have filled a third of the book is compressed into just a few pages’ handwaving and special pleading, as he conjures up a vision of a future in which the free and global nature of weather information has ceased to be a given and the weather machine, that “last bastion of international cooperation”, has become just one more atomised ghost of a future the colonial era once promised us.
Why end on such a minatory note? The answer, which is by no means obvious, is to be found in Reading. Today 22 nations pay for the ECMWF’s maintenance of a pair of Cray supercomputers. Among the fastest in the world, these machines must be upgraded every two years. In the US, meanwhile, weather observations rely primarily on the health of four geostationary satellites, at a cost of $11 billion. (America’s whole National Weather Service budget is only around $1 billion.)
Blum leaves open the question, How is an organisation built by nation-states, committed to open data and borne of a global view, supposed to work in a world where information lives on private platforms and travels across private networks — a world in which billions of tiny temperature and barometric sensors, “in smartphones, home devices, attached to buildings, buses or airliners,” are aggregated by the likes of Google, IBM or Amazon?
One thing is disconcertingly clear: Blum’s weather machine, which in one sense is a marvel of continuing modernity, is also, truth be told, a dinosaur. It is ripe for disruption, of a sort that the world, grown so reliant on forecasting, could well do without.
I have Arthur to thank for my earliest memory: being woken and carried into the living room on 20 July 1969 to see Neil Armstrong set foot on the moon.
Arthur is a satellite dish, part of the Goonhilly Earth Satellite Station in Cornwall. It carried the first ever transatlantic TV pictures from the USA to Europe. And now, in a fit of nostalgia, I am trying to build a cardboard model of the thing. The anniversary kit I bought comes with a credit-card sized Raspberry Pi computer that will cause a little red light to blink at the centre of the dish, every time the International Space Station flies overhead.
The geosynchronous-satellite network that Arthur Clarke envisioned in 1945 came into being at the same time as men landed on the Moon. Intelsat III F-3 was moved into position over the Indian Ocean a few days before Apollo 11’s launch, completing the world’s first geostationary-satellite network. The Space Race has bequeathed us a world steeped in fractured televisual reflections of itself.
Of Apollo itself, though, what actually remains? The Columbia capsule is touring the United States: it’s at Seattle’s Museum of Flight for this year’s fiftieth anniversary. And Apollo’s Mission Control Center in Houston is getting a makeover, its flight control consoles refurbished, its trash cans, book cases, ashtrays and orange polyester seat cushions all restored.
On the Moon there are some flags; some experiments, mostly expired; an abandoned car.
In space, where it matters, there’s nothing. The intention had been to build moon-going craft in orbit. This would have involved building a space station first. In the end, spooked by a spate of Soviet launches, NASA decided to cut to the chase, sending two small spacecraft up on a single rocket. One got three astronauts to the moon. The other, a tiny landing bug (standing room only), dropped two of them onto the lunar surface and puffed them back up into lunar orbit, where they rejoined the command module and headed home. It was an audacious, dangerous and triumphant mission — but it left nothing useful or reusable behind.
In The Moon: A History for the Future, science writer Oliver Morton observes that without that peculiar lunar orbital rendezvous plan, Apollo would at least have left some lasting infrastructure in orbit to pique someone’s ambition. As it was, “Every Apollo mission would be a single shot. Once they were over, it would be in terms of hardware — even, to a degree, in terms of expertise — as if they had never happened.”
Morton and I belong to the generation sometimes dubbed Apollo’s orphans. We grew up (rightly) dazzled by Apollo’s achievement. It left us, however, with the unshakable (and wrong) belief that our enthusiasm was common, something to do with what we were taught to call humanity’s “outward urge”. The refrain was constant: how in people there was this inborn desire to leave their familiar surroundings and explore strange new worlds.
Nonsense. Over a century elapsed between Columbus’s initial voyage and the first permanent English settlements. One of the more surprising findings of recent researches into the human genome is that, left to their own devices, people hardly move more than a few weeks’ walking distance from where they were born.
This urge, that felt so visceral, so essential to one’s idea of oneself: how could it possibly turn out to be the psychic artefact of a passing political moment?
Documentary makers Robert Stone and Alan Andres answer that particular question in Chasing the Moon, a tie-in to their forthcoming series on PBS. It’s a comprehensive account of the Apollo project, and sends down deep roots: to the cosmist speculations of fin-de-siècle Russia, the individualist eccentricities of Germany’s Verein für Raumschiffahrt (Space Travel Society), and the deceptively chummy brilliance of the British Interplanetary Society, who used to meet in the pub.
The strength of Chasing the Moon lies not in any startling new information it divulges (that boat sailed long ago) but in the connections it makes, and the perspectives it brings to bear. It is surprising to find the New York Times declaring, shortly after the Bay of Pigs fiasco, that Kennedy isn’t nearly as interested in building a space programme as he should be. (“So far, apparently, no one has been able to persuade President Kennedy of the tremendous political, psychological, and prestige importance, entirely apart from the scientific and military results, of an impressive space achievement.”) And it is worthwhile to be reminded that, less than a month after his big announcement, Kennedy was trying to persuade Khrushchev to collaborate on the Apollo project, and that he approached the Soviets with the idea a second time, just days before his assassination in Dallas.
For Kennedy, Apollo was a strategic project, “a wonderful moral substitute for war” (to slightly misapply Ray Bradbury’s phrase), and all to do with manned missions. NASA administrator James Webb, on the other hand, was a true believer. He could see no end to the good big organised government projects could achieve by way of education and science and civil development. In his modesty and dedication, Webb resembled no-one so much as the first tranche of bureaucrat-scientists in the Soviet Union. He never featured on a single magazine cover, and during his entire tenure he attended only one piloted launch from Cape Kennedy. (“I had a job to do in Washington,” he explained.)
The two men worked well enough together, their priorities dovetailing neatly in the role NASA took in promoting the Civil Rights Act and the Voting Rights Act and the government’s equal opportunities program. (NASA’s Saturn V designer, the former Nazi rocket scientist Wernher von Braun, became an unlikely and very active campaigner, the New York Times naming him “one of the most outspoken spokesmen for racial moderation in the South.”) But progress was achingly slow.
At its height, the Apollo programme employed around two per cent of the US workforce and swallowed four per cent of the federal budget. It was never going to be agile enough, or quotidian enough, to achieve much in the way of political change. There were genuine attempts to recruit and train a black pilot for the astronaut programme. But comedian Dick Gregory had the measure of this effort: “A lot of people was happy that they had the first Negro astronaut. Well, I’ll be honest with you, not myself. I was kind of hoping we’d get a Negro airline pilot first.”
The big social change the Apollo program did usher in was television. (Did you know that failing to broadcast the colour transmissions from Apollo 11 proved so embarrassing to the apartheid government in South Africa that they afterwards created a national television service?)
But the moon has always been a darling of the film business. Never mind Georges Méliès’s A Trip to the Moon. How about Fritz Lang ordering a real rocket launch for the premiere of Frau im Mond? This was the film that followed Metropolis, and Lang roped in no less a physicist than Hermann Oberth to build it for him. When his 1.8-metre tall liquid-propellant rocket came to nought, Oberth set about building one eleven metres tall, powered by liquid oxygen. They were going to launch it from the roof of the cinema. Luckily they ran out of money.
The Verein für Raumschiffahrt was founded by men who had acted as scientific consultants on Frau im Mond. Von Braun became one of their number, before he was whisked away by the Nazis to build rockets for the war effort. Without von Braun, the VfR grew nuttier by the year. Oberth, who worked for a time in the US after the war, went the same way, his whole conversation swallowed by UFOs and extraterrestrials and glimpses of Atlantis. When he went back to Germany, no-one was very sorry to see him go.
What is it about dreaming of new worlds that encourages the loner in us, the mooncalf, the cave-dweller, wedded to asceticism, always shying from the light?
After the first Moon landing, the philosopher (and sometime Nazi supporter) Martin Heidegger said in interview, “I at any rate was frightened when I saw pictures coming from the moon to the earth… The uprooting of man has already taken place. The only thing we have left is purely technological relationships. This is no longer the earth on which man lives.”
Heidegger’s worries need a little unpacking, and for that we turn to Morton’s cool, melancholy The Moon: A History for the Future. Where Stone and Andres collate and interpret, Morton contemplates and introspects. Stone and Andres are no stylists. Morton’s flights of informed fancy include a geological formation story for the moon that Lars von Trier’s film Melancholia cannot rival for spectacle and sentiment.
Stone and Andres stand with Walter Cronkite, whose puzzled response to young people’s opposition to Apollo — “How can anybody turn off from a world like this?” — stands as an epitaph for Apollo’s orphans everywhere. Morton, by contrast, does understand why it’s proved so easy for us to switch off from the Moon. At any rate he has some good ideas.
Gertrude Stein, never a fan of Oakland, once wrote of the place, “There is no there there.” If Morton’s right, she should have tried the Moon, a place whose details “mostly make no sense.”
“The landscape,” Morton explains, “may have features that move one into another, slopes that become plains, ridges that roll back, but they do not have stories in the way a river’s valley does. It is, after all, just the work of impacts. The Moon’s timescape has no flow; just punctuation.”
The Moon is Heidegger’s nightmare realised. It can never be a world of experience. It can only be a physical environment to be coped with technologically. It’s dumb, without a story of its own to tell, so much “in need of something but incapable of anything”, in Morton’s telling phrase, that you can’t even really say that it’s dead.
So why did we go there, when we already knew that it was, in the words of US columnist Milton Mayer, a “pulverised rubble… like Dresden in May or Hiroshima in August”?
Apollo was the US’s biggest, brashest entry in its heart-stoppingly exciting – and terrifying – political and technological competition with the Soviet Union. This is the matter of Stone and Andres’s Chasing the Moon, as full a history as one could wish for, clear-headed about the era and respectful of the extraordinary efforts and qualities of the people involved.
But while Morton is no less moved by Apollo’s human adventure, we turn to his book for a cooler and more distant view. Through Morton’s eyes we begin to see, not only what the moon actually looks like (meaningless, flat, gentle, a South Downs gone horribly wrong) but why it conjures so much disbelief in those who haven’t been there.
A year after the first landing the novelist Norman Mailer joked: “In another couple of years there will be people arguing in bars about whether anyone even went to the Moon.” He was right. Claims that the moon landings were fake arose the moment the Saturn Vs stopped flying in 1972, and no wonder. In a deep and tragic sense Apollo was fake: it didn’t deliver the world it had promised.
And let’s be clear here: the world it promised would have been wonderful. Never mind the technology: that was never the core point. What really mattered was that at the height of the Vietnam war, we seemed at last to have found that wonderful moral substitute for war. “All of the universe doesn’t care if we exist or not,” Ray Bradbury wrote, “but we care if we exist… This is the proper war to fight.”
Why has space exploration not united the world around itself? It’s easy to blame ourselves and our lack of vision. “It’s unfortunate,” Lyndon Johnson once remarked to the astronaut Wally Schirra, “but the way the American people are, now that they have developed all of this capability, instead of taking advantage of it, they’ll probably just piss it all away…” This is the mordant lesson of Stone and Andres’s otherwise uplifting Chasing the Moon.
Oliver Morton’s The Moon suggests a darker possibility: that the fault lies with the Moon itself, and, by implication, with everything that lies beyond our little home.
Morton’s Moon is a place defined by absences, gaps, and silence. He makes a poetry of it. For a while he toys with thoughts of future settlement; he explores the commercial possibilities. In the end, though, what can this uneventful satellite of ours ever possibly be, but what it is: “just dry rocks jumbled”?
As global temperatures rise, and the mean sea-level with them, I have been tracing the likely flood levels of the Thames Valley, to see which of my literary rivals will disappear beneath the waves first. I live on a hill, and what I’d like to say is: you’ll be stuck with me a while longer than most. But on the day I had set aside to consume David Wallace-Wells’s terrifying account of climate change and the future of our species (there isn’t one), the water supply to my block was unaccountably cut off.
Failing to make a cup of tea reminded me, with some force, of what ought to be obvious: that my hill is a post-apocalyptic death-trap. I might escape the floods, but without clean water, food or power, I’ll be lucky to last a week.
The first half of The Uninhabitable Earth is organised in chapters that deal separately with famines, floods, fires, droughts, brackish oceans, toxic winds, war, and all the other manifest effects of anthropogenic climate change (there are many more than four horsemen in this Apocalypse). At the same time, the author reveals, paragraph by paragraph, how these ever-more-frequent disasters join up in horrific cascades, all of which erode human trust to the point where civic life collapses.
The human consequences of climate disaster are going to be ugly. When a million refugees from the Syrian civil war started arriving in Europe in 2015, far-right parties entered mainstream political discourse for the first time in decades. By 2050, the United Nations predicts, the world will host 200 million climate refugees. So buckle up. The disgust response with which we greet strangers on our own land is something we conscientiously suppress these days. But it’s still there: an evolved response that in less sanitary times got us through more than one plague.
That such truths go largely unspoken says something about the cognitive dissonance in which our culture is steeped. We just don’t have the mental tools to hold climate change in our heads. Amitav Ghosh made this clear enough in The Great Derangement (2016), which explains why the traditional novel is so hopeless at handling a world that has run out of normal, forgotten how to repeat itself, and will never be any sort of normal again.
Writers, seeking to capture the contemporary moment, resort to science fiction. But the secret, sick appeal of post-apocalyptic narratives, from Richard Jefferies’s After London on, is that in order to be stories at all their heroes must survive. You can only push nihilism so far. J G Ballard couldn’t escape that bind. Neither could Cormac McCarthy. Despite our most conscientious attempts at utter bloody bleakness, the human spirit persists.
Wallace-Wells admits as much. When he thinks of his own children’s future, denizens of a world plunging ever deeper into its sixth major extinction event, he admits that despair melts and his heart fills with excitement. Humans will cling to life on this ever less habitable earth for as long as they can. Quite right, too.
Wallace-Wells is deputy editor of New York magazine. In July 2017 he wrote a cover story outlining worst-case scenarios for climate change. His pessimism proved salutary: The Uninhabitable Earth has been much anticipated.
In the first half of the book the author channels former US vice-president Al Gore, delivering a blizzard of terrifying facts and outdoing his predecessor’s An Inconvenient Truth (2006) – not thanks to his native gifts (considerable as they are) but because the climate has deteriorated since then to the point where its declines can now be observed directly, and measured over the course of a human lifetime.
More than half the extra carbon dioxide released into the atmosphere by burning fossil fuels has been added in the past 30 years. This means that “we have done as much damage to the fate of the planet and its ability to sustain human life and civilization since Al Gore published his first book on climate than in all the centuries – all the millennia – that came before.” (4) Oceans are carrying at least 15 per cent more heat energy than they did in 2000. 22 per cent of the earth’s landmass was altered by humans just between 1992 and 2015. In Sweden, in 2018, forests in the Arctic Circle went up in flames. On and on like this. Don’t shoot the messenger, but “we have now engineered as much ruin knowingly as we ever managed in ignorance.”
The trouble is not that the future is bleak. It’s that there is no future. We’re running out of soil. In the United States, it’s eroding ten times faster than it is being replaced. In China and India, soil is disappearing thirty to forty times as fast. Wars over fresh water have already begun. The CO2 in the atmosphere has reduced the nutrient value of plants by about thirty per cent since the 1950s. Within the lifetimes of our children, the hajj will no longer be a feature of Islamic practice: the heat in Mecca will be such that walking seven times counterclockwise around the Kaaba will kill you.
This book may come to be regarded as the last truly great climate assessment ever made. (Is there even time left to pen another?) Some of the phrasing will give persnickety climate watchers conniptions. (Words like “eventually” will be a red rag for them, because they catalyse the reader’s imagination without actually meaning anything.) But the research is extensive and solid, the vision compelling and eminently defensible.
Alas, The Uninhabitable Earth is also likely to be one of the least-often finished books of the year. I’m not criticising the prose, which is always clear and engaging and often dazzling. It’s simply that the more we are bombarded with facts, the less we take in. Treating the reader like an empty bucket into which facts may be poured does not work very well, and works even less well when people are afraid of what you are telling them. “If you have made it this far, you are a brave reader,” Wallace-Wells writes on page 138. Many will give up long before then. Climate scientists have learned the hard way how difficult it is to turn fact into public engagement.
The second half of The Uninhabitable Earth asks why awareness of climate disaster doesn’t lead to reasonable action against it. There’s a nuanced mathematical account to be written of how populations reach carrying capacity, run out of resources, and collapse; and an even more difficult book that will explain why we ever thought human intelligence would be powerful enough to elude this stark physical reality.
The final chapters of The Uninhabitable Earth provide neither, but neither are they narrowly partisan. Wallace-Wells mostly resists the temptation to blame the mathematical inevitability of our species’ growth and decline on human greed. The worst he finds to say about the markets and market capitalism – our usual stock villains – is not that they are evil, or psychopathic (or certainly no more evil or psychopathic than the other political experiments we’ve run in the past 150 years) but that they are not nearly as clever as we had hoped they might be. There is a twisted magnificence in the way we are exploiting, rather than adapting to, the End Times. (Whole Foods in the US, we are told, is now selling “GMO-free” fizzy water.)
The Paris accords of 2016 established keeping warming to just two degrees as a global goal. Only a few years ago we were hoping for a rise of just 1.5 degrees. What’s the difference? According to the IPCC, that half-degree concession spells death for about 150 million people. Without significantly improved pledges, however, the IPCC reckons that instituting the Paris accords overnight (and no-one has) will still see us topping 3.2 degrees of warming. At this point the Antarctic’s ice sheets will collapse, drowning Miami, Dhaka, Shanghai, Hong Kong and a hundred other cities around the world. (Not my hill, though.)
And to be clear: this isn’t what could happen. This is what is already guaranteed to happen. Greenhouse gases work on too long a timescale to avoid it. “You might hope to simply reverse climate change,” writes Wallace-Wells; “you can’t. It will outrun all of us.”
“How widespread alarm will shape our ethical impulses toward one another, and the politics that emerge from those impulses,” says Wallace-Wells, “is among the more profound questions being posed by the climate to the planet of people it envelops.”
My bet is the question will never tip into public consciousness: that, on the contrary, we’ll find ways, through tribalism, craft and mischief, to engineer what Wallace-Wells dubs “new forms of indifference”, normalising climate suffering, and exploiting novel opportunities, even as we live and more often die through times that will never be normal again.
In the career-making 1996 comedy film Swingers, writer and star Jon Favreau plays Mike, a young man in love with love, and at war with the answerphones of the world.
“Hi,” says one young woman’s machine, “this is Nikki. Leave a message,” prompting Mike to work, flub after flub, through an entire, entirely fictitious, relationship with the absent Nikki.
“This just isn’t working out,” he sighs, on about his twentieth attempt to leave a message that’s neither creepy nor incoherent. “I — I think you’re great, but, uh, I — I… Maybe we should just take some time off from each other. It’s not you, it’s me. It’s what I’m going through.”
There are a couple of lessons in this scene, and once they’re learned, there’ll be no pressing need for you to read Jason Farman’s Delayed Response. (I think you’d enjoy reading him, quite a bit, but, in the spirit of this project, let those reasons wait till the end.)
First lesson of two: “non-verbal communication never stops; non-verbal cues are always being produced whether we want them to be or not.” Those in the know may recognise here Farman’s salute to Edward T. Hall’s book The Silent Language (1959), for which Delayed Response is a useful foil. But the point — that any delay between transmission and reception is part of the message — is no mere intellectual nicety. Anyone who has had a love affair degenerate into an exchange of ever more flippant WhatsApp messages, or has waited for a prospective employer to get back to them about a job application, knows that silent time carries meaning.
Second lesson: delay can be used to manifest power. In Swingers, Mike crashes into what an elusive novelist friend of mine dubs, with gleeful malevolence, “the power of absence,” which is more or less the same power my teenage daughter wields when she “ghosts” some boy. In the words of the French sociologist Pierre Bourdieu, “Waiting is one of the privileged ways of experiencing the effect of power, and the link between time and power.” We’re none of us immune; we’re all in thrall to what Farman calls the “waiting economy”, and as our civics crumble (don’t pretend you haven’t noticed) the hucksters driving that economy get more and more brazen. (Consider, as an example, the growing discrepancy in UK delivery times between public and private postal services.)
Delays carry meanings. We cannot control them with any finesse; but we can use them as blunt weapons on each other.
What’s left for Farman to say?
Farman’s account of wait times is certainly exhaustive, running the full gamut of history, from contemporary Japanese smartphone messaging apps to Aboriginal message sticks that were being employed up to 50,000 years ago. (To give you some idea how venerable that communication system is, consider that papyrus dates from around 2900 BC.) He covers online wait times both so short as to be barely perceptible, and delays so long that they may be used to calculate the distance between planets. His examples are sometimes otherworldly (literally so in the case of the New Horizons mission to Pluto), sometimes unnervingly prosaic: he recounts the Battle of Fredericksburg in the American Civil War as a piling-up of familiar and ordinary delays, conjuring up a picture of war-as-bureaucracy that is truly mortifying.
Farman acknowledges how much more quickly we send and receive messages these days — but his is no paean to technological progress. The dismal fact is: the instantaneous provision of information degrades our ability to interpret it. As long ago as 1966 the neurobiologist James L McGaugh reported that as the time increases between learning and testing, memory retention actually improves. And yet the purveyors of new media continue to equate speed with wisdom, promising that ever-better worlds will emerge from ever-more-efficient media. Facebook’s Mark Zuckerberg took this to an extreme typical of him in April 2017 when he announced that “We’re building further out beyond augmented reality, and that includes work around direct brain interfaces that one day will let you communicate using only your mind, although that stuff is pretty far out.”
This kind of hucksterism is harmless in itself, but it doesn’t come out of nothing. The thing we should be really afraid of is the creeping bureaucratisation of human experience. I remember, three years before Zuckerberg slugged back his own Kool-Aid, I sat listening to UCL neuroscientist Nilli Lavie lecturing about attention. Lavie was clearly a person of good will and good sense, but what exactly did she mean by her claim that wandering attention loses the US economy around two billion dollars a year? Were our minds to be perfectly focused, all year round, would that usher in some sort of actuarial New Jerusalem? Or would it merely extinguish all dreaming? Without a space for minds to wander in, where would a new idea – any new idea – actually come from?
This, of course, is the political flim-flam implicit in crisis thinking. So long as we are occupied with urgent problems, we are unable to articulate nuanced and far-reaching political ideas. “Waiting, ultimately, is essential for imagining that which does not yet exist and innovating on the knowledge we encounter,” Farman writes, to which I’m inclined to add the obvious point: Progressives are shrill and failing because their chosen media — Twitter and the like — deprive them of any register other than crisis-triggered outrage.
We may dream up a dystopia in which the populace is narcotised into bovine contentment by the instantaneous supply of undigested information, but as Farman makes clear, this isn’t going to happen. The anxiety generated by delay doesn’t disappear with quicker response times; it simply gets redistributed and reshaped. People under 34 years of age check their phones an average of 150 times a day, a burden entirely alien to soldiers waiting for Death and the postman at Fredericksburg. Farman writes: “Though the mythologies of the digital age continue to argue that we are eliminating waiting from daily life, we are actually putting it right at the centre of how we connect with one another.”
This has a major, though rarely articulated, consequence for us: anxiety balloons to fill the vacuum left by a vanished emotion, one we once considered pleasurable and positive. I refer, of course, to anticipation. Anticipation is no longer something we relish. This is because, in our world of immediate satisfactions, we’re simply not getting enough exposure to it. Waiting has ceased to be the way we measure projected pleasure. Now it’s merely an index of our frustration.
Farman is very good on this subject, and this is why Delayed Response is worth reading. (There: I told you we’d get around to this.) The book’s longueurs, and Farman’s persnickety academical style, pale beside his main point, very well expressed, that “the meaning of life isn’t deferred until that thing we hope for arrives; instead, in the moment of waiting, meaning is located in our ability to recognise the ways that such hopes define us.”
On 12 April 1961 Yuri Gagarin clambered into Vostok I, blasted into space, and became the first human being to orbit the earth.
“And suddenly I hear: Man is in space! My God! I stopped heating up the oven, sat next to the radio receiver, afraid to step away even for a minute.” This is the recollection of a 73-year-old woman from the Kuibyshev region, published in the state newspaper Izvestiia a little over a month later.
“We were always told that God is in the heavens, so how can a man fly there and not bump into Elijah the Prophet or one of God’s angels? What if God punishes him for his insolence?… Now I am convinced that God is Science, is Man.”
The opposition between religion and science set up in this letter is charmingly naive — as though a space capsule might shatter the crystal walls of heaven! But the official Soviet attitude to these matters was not much different. Lenin considered religion “merely a product and reflection of the economic yoke within society.” Religion was simply a vicious exploitation of uneducated people’s urge to superstition. As socialism developed, superstitions in general would disappear, and religions along with them.
The art theorist Aleksey Gan called the constructivist Moscow Planetarium, completed in the late 1920s, “a building in which religious services are held… until society grows to the level of a scientific understanding, and the instinctual need for spectacle comes up against the real phenomena of the world and technology.”
The assumption here — that religion evaporates as soon as people learn to behave rationally — was no less absurd at the time than it is now; how it survived as a political instinct over generations is going to take a lot of explanation. Victoria Smolkin, an associate professor at Wesleyan University, delivers, but with a certain flatness of style that can grate after a while.
By the time her account reaches 1973, with 70 planetariums littering the urban landscape and an ideologically oblivious populace gearing itself up for a religious revival, I found myself wishing that her gloves would come off.
For the people she is writing about, the stakes could not have been higher. What can be more important than the meaning of life? We are all going to die, after all, and everything we do in our little lives is going to be forgotten. Had we absolutely no convictions about the value of anything beyond our little lives, we would most likely stay in bed, starving and soiling ourselves. The severely depressed do exactly this, for they have grown pathologically realistic about their survival chances.
Cultures are engines of enchantment. They give us reasons to get up in the morning. They give us people, institutions, ideas, and even whole planes of magical reality to live for.
The 1917 Revolution’s great blow-hards were more than happy to accept this role for their revolutionary culture. “Let thousands of us die to resurrect millions of people all over the earth!” exclaims Rybin in Maxim Gorky’s 1906 novel Mother. “That’s what: dying’s easy for the sake of resurrection! If only the people rise!” And in his two-volume Religion and Socialism, the Commissar of Education Anatoly Lunacharsky prophesies the coming of a culture in which the masses will willingly “die for the common good… sacrificing to realise a state that starts not with his ‘I’ but with our ‘we’.”
Given the necessary freedom and funding, perhaps the early Soviet Union’s self-styled “God-builders” — Alexander Bogdanov, Leon Trotsky and the rest — might have cooked up a uniquely Soviet metaphysics, with rituals for birth, marriage and death that were worth the name. In suprematism, constructivism, cosmism, and all the other millenarian “-isms” floating about at the turn of the century, Russia had no shortage of fresh ingredients.
But Lenin’s lumpen anticlericals held the day. Bogdanov was sidelined and Trotsky was (literally) axed. Lenin looted the church. Stalin coopted the Orthodox faith to bolster patriotism in a time of war. Khrushchev criminalised all religions and most folk practices in the name of state unity. And none of them had a clue why the state’s materialism — even as it matured into a coherent philosophy — failed to replace the religion to which it was contrivedly opposed.
A homegrown sociology became possible under Leonid Brezhnev’s supposedly ossifying rule, and with it there developed something like a mature understanding of what religion actually was. These insights were painfully and often clumsily won — like Jack Skellington in Tim Burton’s The Nightmare Before Christmas, trying to understand the business of gift-giving by measuring all the boxes under the tree. And the understanding, when it came, came far too late.
The price of generations of off-again, on-again religious repression was not a confident secularism, or even a convulsive religious reaction. It was poison: ennui and cynicism and ideological indifference that proved impossible for the state to oppose because it had no structure, no leaders, no flag, no clergy or dogma.
In the end there was nothing left but to throw in the towel. Mikhail Gorbachev met with Patriarch Pimen on 29 April 1988, and embraced the millennium as a national celebration. Konstantin Kharchev, chair of the Council for Religious Affairs, commented: “The church has survived, and has not only survived, but has rejuvenated itself. And the question arises: which is more useful to the party — someone who believes in God, someone who believes in nothing at all, or someone who believes in both God and Communism? I think we should choose the lesser evil.” (234)
Cultures do not collapse from war or drought or earthquake. They fall apart because, as the archaeologist Joseph Tainter points out, they lose the point of themselves. Now that the heavy lifting of this volume is done, let us hope Smolkin takes a breath and describes the desolation wrought by the institutions she has researched in such detail. There’s a warning here that we, deep in our own contemporary disenchantment, should heed.
The Orientalist painter Eugene Fromentin adored the silence of the Sahara. “Far from oppressing you,” he wrote to a friend, “it inclines you to light thoughts.” People assume that silence, being an absence of noise, is the auditory equivalent of darkness. Fromentin was having none of it: “If I may compare aural sensations to those of sight, then silence that reigns over vast spaces is more a sort of aerial transparency, which gives greater clarity to the perceptions, unlocks the world of infinitely small noises, and reveals a range of inexpressible delights.” (26-27)
Silence invites clarity. Norwegian explorer and publisher Erling Kagge seized on this nostrum for his international bestseller Silence in the Age of Noise, published in Norway in 2016, the same year Alain Corbin’s A History of Silence was published in France.
People forget this, but Kagge’s short, smart airport read was more tough-minded than the fad it fed. In fact, Kagge’s crowd-pleasing Silence and Corbin’s erudite History make surprisingly good travelling companions.
For instance: while Corbin was combing through Fromentin’s Un été dans le Sahara of 1856, Kagge was talking to his friend the artist Marina Abramovic, whose experience of desert silence was anything but positive: “Despite the fact that everything was completely quiet around her, her head was flooded with disconnected thoughts… It seemed like an empty emptiness, while the goal is to experience a full emptiness, she says.” (115)
Abramovic’s trouble, Kagge tells us, was that she couldn’t stop thinking. She wanted a moment of Fromentinesque clarity, but her past and her future proved insuperable obstacles: the moment she tore her mind away from one, she found herself ruminating over the other.
It’s a common complaint, according to Kagge: “The present hurts, and our response is to look ceaselessly for fresh purposes that draw our attention outwards, away from ourselves.” (37)
Why should this be? The answer is explored in Corbin’s book, which is one of those cultural histories that come very close to being a virtually egoless compendium of quotations. Books of this sort can be a terrible mess, but Corbin’s architecture, on the contrary, is as stable as it is artfully concealed. This is a temple, not a pile.
The present, properly attended to, alone and in silence, reveals time’s awful scale. When we think about the past or the future, what we’re actually doing is telling ourselves stories. It’s in the present moment, if we dare attend to it, that we glimpse the Void.
Jules Michelet, in his book La Montagne (1872), recognised that the great forces shaping human destiny are so vast as to be silent. The process of erosion, for example, “is the more successfully accomplished in silence, to reveal, one morning, a desert of hideous nakedness, where nothing shall ever again revive.” The equivalent creative forces are hardly less awful: the “silent toil of the innumerable polyps” of a coral reef, for example, creating “the future Earth; on whose surface, perhaps, Man shall hereafter reside”.
No wonder so many of us believe in God, when sitting alone in a quiet room for ten minutes confronts us with eternity. The Spanish Basque theologian Ignatius Loyola used to spend seven solid hours a day in silent prayer and came to the only possible conclusion: silence “cancels all rational and discursive activity, thereby enabling direct perception of the divine word.”
“God bestows, God trains, God accomplishes his work, and this can only be done in the silence that is established between the Creator and the creature.” (42-43)
Obvious as this conclusion may be, though, it could still be wrong. Even the prophet Isaiah complained: “Verily thou art a God that hidest thyself”.
What if God’s not there? What if sound and fury were our only bulwark against the sucking vacuum of our own meaninglessness? It’s not only the empty vessels that make the most noise: “Men with subtle minds who are quick to see every side of a question find it hard to refrain from expressing what they think,” said Eugene Delacroix. Anyway, “how is it possible to resist giving a favourable idea of one’s mind to a man who seems surprised and pleased to hear what one is saying?”
There’s a vibrancy in tumult, a civic value in conversation, and silence is by no means always golden — a subject Corbin explores ably and at length, carrying us satisfyingly far from our 2016-vintage hygge-filled comfort zone.
Silence suggests self-control, but self-control can itself be a response to oppression. Advice to courtiers to remain silent, or at least ensure that their words are more valuable than their silence, may at worst suggest a society where there is no danger in keeping quiet, but plenty of danger in speaking.
This has certainly been the case historically among peasants, immersed as they were in a style of life that had elements of real madness about it: an undercurrent of constant hate and bitterness expressed in feuding, bullying, bickering and family quarrels; in petty-mindedness, self-deprecation and superstition; in the obsessive control of daily life by a strict authoritarianism. If you don’t believe me, read Zola. His peasant silences are “first and foremost a tactic” in a milieu where plans, ambitious or tragic, “were slow to come to fruition, which meant that it was essential not to show your hand.” (93)
Corbin’s history manages to anatomise silence without fetishising it. He leaves you with mixed feelings: a neat trick to pull, in a society that’s convinced it’s drowning in noise.
It’s not true. In Paris, Corbin reminds us, forges once operated on the ground floors of buildings throughout the city. Bells of churches, convents, schools and colleges only added to the cacophony. Carriages made the level of street noise even more deafening. (61) I was reminded, reading this, of how the Victorian computer pioneer Charles Babbage died, at the age of 79, at his home in Marylebone. Never mind the urinary tract infection: his last hours were spent being driven to distraction by itinerant hurdy-gurdy players pan-handling outside his window.
If anything, contemporary society suffers from too much control of its auditory environment. Passengers travelling in trains and trams observe each other in silence, a behaviour that was once considered damn rude. Pedestrians no longer like to be greeted. And among the many silences we may welcome, there is one we surely must deplore: the silence that accompanies the diminution of our civic life.
On 23 March 1969 the shipbuilders of Ulsteinvik in Norway launched a stern trawler called the Vesturvon. It was their most advanced factory trawler yet, beautiful as these ships go, and big: outfitted for a crew of 47.
In 2000, after many adventures, the ship suffered a midlife crisis. Denied a renewal of their usual fishing quota, its owners partnered up with a Russian company and sent the ship, renamed the Rubin, to ply the Barents Sea. There, in the words of Eskil Engdal and Kjetil Saeter, two Norwegian journalists, the ship slipped ineluctably into “a maelstrom of shell corporations, bizarre ships registers and shady expeditions”.
In the years that followed, the ship changed its name often: Kuko, Wuhan No 4, Ming No 5, Batu 1. Its crew had to look over the side of the ship at the name plate, attached that morning to the stern, to find out which ship they were on. Flags from countries such as Equatorial Guinea, Mauritania and Panama were kept in a cardboard box.
It fell to a Chilean, Luis Cataldo, to be captaining the ship (by then named the Thunder) on 17 December 2014 – the day when, off Antarctica’s windy Banzare Bank, in the middle of an illegal fishing expedition, it was spotted by the Bob Barker, a craft belonging to the Sea Shepherd Conservation Society. The Bob Barker’s captain got on the radio and told Cataldo his vessel was wanted by Interpol and should follow him to port.
Cataldo retorted that he wasn’t inclined to obey a ship whose black flag bore a skull (albeit with a shepherd’s crook and a trident instead of crossbones). And it is fair to say that the Sea Shepherd organisation, whose mission “is to end the destruction of habitat and slaughter of wildlife in the world’s oceans”, has enjoyed a fairly anomalous relationship with nautical authority since its foundation in 1977.
So began the world’s longest sea chase to date, recorded with flair and precision in Catching Thunder, Diane Oatley’s effortlessly noir translation of Engdal and Saeter’s 2016 Norwegian bestseller. The book promises all the pleasures of a crime novel, but it is after bigger game: let’s call it the unremitting weirdness of the real world.
This is a book about fish – and also a chase narrative in which the protagonists spend most of the time sailing in circles and sending each other passive-aggressive radio messages. (“You are worried about the crew, and now all the Indonesians are nervous,” Cataldo complains. “One person attempted to take his life. Over.”)
It’s about attempting to regulate the movement of lumps of steel weighing more than 650 tons which, if they want, can thug their way out of any harbour whether they’ve been “impounded” or not, and it’s about the sheer slow-mo clumsiness of ship-handling.
At one point the Thunder “moves in circles, directing a searchlight on the Bob Barker, then suddenly stops and drifts for a few hours. Then the mate puts the ship in motion again, heading for a point in the middle of nowhere.” There’s no Hollywood hot-headedness here. The violence is rare, veiled and, when it comes, unstoppable and ice-cold.
The Thunder was wanted for hunting the Patagonian toothfish, a protected species of “petulant and repulsive” giants that can grow to a weight of 120kg and live more than 50 years. When the Bob Barker caught sight of the Thunder in the Southern Ocean, no one could have guessed that the chase would last for 110 days.
Stoked by Sea Shepherd’s YouTube campaign, the pursuit became a cause célèbre and the Bob Barker’s hardened crew were prepared for the long game: “As long as the two ships are operating without using the engines, it is only the generators that are consuming fuel. If it continues like this, they can be at sea for two years.”
Engdal and Saeter must keep their human story going while doing justice to the scale of their subject. At the start, that subject is the fishing industry, in which a cargo of frozen toothfish can go “on a circumnavigation of the world from the Southern Ocean to Thailand, then around the entire African continent, past the Horn of Africa, across the Indian Ocean and into the South China Sea before ending up in Vietnam.” But they also have something to say about the planet.
Suppose you catch fish for a living. If you saw that your catch was dwindling, you might limit your days at sea to ensure that you could continue to fish that species in future years. This isn’t “ecological thinking”; it’s simple self-interest. In the fishing industry, though, self-interest works differently.
And in a chapter about Chimbote in Peru, the authors hit upon a striking metonym for the global mechanisms denuding our seas.
The Peruvian anchovy boom of the late 2000s turned Chimbote from a sleepy village into Peru’s busiest fishing port. Fifty factories exuded a stench of rotten fish, and pumped wastewater and fish blood into the ocean, to the point where the local ecosystem was so damaged that an ordinary El Niño event finished off the anchovy stocks for good.
The point is this: fishing companies are not fisherfolk. They are companies: lumps of capital incorporated to maximise returns on investment. It makes no sense for an extraction company to limit its consumption of a resource.
Once stocks have been reduced to nothing, the company simply reinvests its capital in some other, more available resource. You can put rules in place to limit the rapaciousness of the enterprise, but the rapaciousness is baked in. Rare resources are doomed to extinction eventually because the rarer a resource is, the more expensive it is, and the more incentive there is to trade in it. This is why, past a certain point, rare stocks hurtle towards zero.
Politically savvy readers will find, between the lines, an account here of how increasingly desperate governments are coming to a rapprochement with the Sea Shepherd organisation, whose self-consciously piratical founder Paul Watson declared in 1988: “We hold the position that the laws of ecology take precedence over the laws designed by nation states to protect corporate interests.”
Watson’s position seems legally extreme. But 30 years on, with an ecological catastrophe looming, many maritime law enforcers hardly care. Robbed of income and ecological capital, some countries are getting gnarly. By 2016, Indonesian authorities had sunk 170 foreign fishing vessels in less than two years. They would like to sink many more: according to this daunting thriller, 5,000 illegal fishing vessels ply their waters at any one time.