The seeds of indisposition

Reading Ageless by Andrew Steele for the Telegraph, 20 December 2020

The first successful blood transfusions were performed in 1665, by the English physician Richard Lower, on dogs. The idea, for some while, was not that transfusions would save lives, but that they might extend them.

Turns out they did. The Philosophical Transactions of the Royal Society mentions an experiment in which “an old mongrel curr, all over-run with the mainge” was transfused with about fifteen ounces of blood from a young spaniel and was “perfectly cured.”

Aleksandr Bogdanov, who once vied with Vladimir Lenin for control of the Bolsheviks (before retiring to write science fiction novels), brought blood transfusion to Russia, and hoped to rejuvenate various exhausted colleagues (including Stalin) by the method. On 24 March 1928 he mutually transfused blood with a 21-year-old student, suffered a massive transfusion reaction, and died, two weeks later, at the age of fifty-four.

Bogdanov’s theory was stronger than his practice. His essay on ageing speaks a lot of sense. “Partial methods against it are only palliative,” he wrote, “they merely address individual symptoms, but do not help fight the underlying illness itself.” For Bogdanov, ageing is an illness — unavoidable, universal, but no more “normal” or “natural” than any other illness. By that logic, ageing should be no less vulnerable to human ingenuity and science. It should, in theory, be curable.

Andrew Steele agrees. Steele is an Oxford physicist who switched to computational biology, drawn by the field of biogerontology — or the search for a cure for ageing. “Treating ageing itself rather than individual diseases would be transformative,” he writes, and the data he brings to this argument is quite shocking. It turns out that curing cancer would add less than three years to a person’s typical life expectancy, and curing heart disease, barely two, as there are plenty of other diseases waiting in the wings.

Is ageing, then, simply a statistical inevitability — a case of there always being something out there that’s going to get us?

Well, no. In 1825 Benjamin Gompertz, a British mathematician, explained that there are two distinct drivers of human mortality. There are extrinsic events, such as injuries or diseases. But there’s also an internal deterioration — what he called “the seeds of indisposition”.

It’s Steele’s job here to explain why we should treat those “seeds” as a disease, rather than a divinely determined limit. In the course of that explanation Steele gives us, in effect, a tour of the whole of human biology. It’s an exhilarating journey, but by no means always a pretty one: a tale of senescent cells, misfolded proteins, intracellular waste and reactive metals. Readers of advanced years, wondering why their skin is turning yellow, will learn much more here than they bargained for.

Ageing isn’t evolutionarily useful; but because it comes after our breeding period, evolution just hasn’t got the power to do anything about it. Mutations whose negative effects occur late in our lives accumulate in the gene pool. Worse, if they had a positive effect on our lives early on, then they will be actively selected for. Ageing, in other words, is something we inherit.

It’s all very well conceptualising old age as one disease. But if your disease amounts to “what happens to a human body when 525 million years of evolution stop working”, then you’re reduced to curing everything that can possibly go wrong, with every system, at once. Ageing, it turns out, is just thousands upon thousands of “individual symptoms”, arriving all at once.

Steele believes the more we know about human biology, the more likely it is we’ll find systemic ways to treat these multiple symptoms. The challenge is huge, but the advances, as Steele describes them, are real and rapid. If, for example, we can persuade senescent cells to die, then we can shed the toxic biochemical garbage they accumulate, and enjoy once more all the benefits of (among other things) young blood. This is no fond hope: human trials of senolytics started in 2018.

Steele is a superb guide to the wilder fringes of real medicine. He pretends to nothing else, and nothing more. So whether you find Ageless an incredibly focused account, or just an incredibly narrow one, will come down, in the end, to personal taste.

Steele shows us what happens to us biologically as we get older — which of course leaves a lot of blank canvas for the thoughtful reader to fill. Steele’s forebears in this (frankly, not too edifying) genre have all too often claimed that there are no other issues to tackle. In the 1930s the surgeon Alexis Carrel declared that “Scientific civilization has destroyed the world of the soul… Only the strength of youth gives the power to satisfy physiological appetites and to conquer the outer world”.

Charming.

And he wasn’t the only one. Books like Successful Aging (Rowe & Kahn, 1998) and How and Why We Age (Hayflick, 1996) aspire to a sort of overweening authority, not by answering hard questions about mortality, long life and ageing, but merely by denying a gerontological role for anyone outside their narrow specialism: philosophers, historians, theologians, ethicists, poets — all are shown the door.

Steele is much more sensible. He simply sticks to his subject. To the extent that he expresses a view, I am confident that he understands that ageing is an experience to be lived meaningfully and fully, as well as a fascinating medical problem to be solved.

Steele’s vision is very tightly controlled: he wants us to achieve “negligible senescence”, in which, as we grow older, we suffer no obvious impairments. What he’s after is a risk of death that stays constant no matter how old we get. This sounds fanciful, but it does happen in nature. Giant tortoises succumb to statistical inevitability, not decrepitude.

I have a fairly entrenched problem with books that treat ageing as a merely medical phenomenon. But I heartily recommend this one. It’s modest in scope, and generous in detail. It’s an honest and optimistic contribution to a field that tips very easily indeed into Tony Stark-style boosterism.

Life expectancy in the developed world has doubled from 40 in the 1800s to over 80 today. But it is in our nature always to crave more. One colourful outfit called Ambrosia is offering anyone over 35 the opportunity to receive a litre of youthful blood plasma for $8,000. Steele has some fun with this: “At the time of writing,” he tells us, “a promotional offer also allows you to get two for $12,000 — buy one, get one half-price.”

A fanciful belonging

Reading The Official History of Britain: Our story in numbers as told by the Office for National Statistics by Boris Starling with David Bradbury for The Telegraph, 18 October 2020

Next year’s national census may be our last. Opinions are being sought as to whether it makes sense, any longer, for the nation to keep taking its own temperature every ten years. Discussions will begin in 2023. Our betters may conclude that the whole rigmarole is outdated, and that its findings can be gleaned more cheaply and efficiently by other methods.

How the UK’s national census was established, what it achieved, and what it will mean if it’s abandoned, is the subject of The Official History of Britain — a grand title for what is, to be honest, a rather messy book, its facts and figures slathered in weak and irrelevant humour, most of it to do with football, I suppose as an intellectual sugar lump for the proles.

Such condescension is archetypally British; and so too is the gimcrack team assembled to write this book. There is something irresistibly Dad’s Army about the image of David Bradbury, an old hand at the Office for National Statistics, comparing dad jokes with novelist Boris Starling, creator of Messiah’s DCI Red Metcalfe, who was played on the telly by Ken Stott.

The charm of the whole enterprise is undeniable. Within these pages you will discover, among other tidbits, the difference between critters and spraggers, whitsters and oliver men. Such were the occupations introduced into the Standard Classification of 1881. (Recent additions include YouTuber and dog sitter.) Nostalgia and melancholy come to the fore when the authors say a fond farewell to John and Margaret — names, deeply unfashionable now, that were pretty much compulsory for babies born between 1914 and 1964. But there’s rigour, too; I recommend the authors’ highly illuminating analysis of today’s gender pay gap.

Sometimes the authors show us up for the grumpy tabloid zombies we really are. Apparently a sizeable sample of us, quizzed in 2014, opined that 15 per cent of all our girls under sixteen were pregnant. The lack of mathematical nous here is as disheartening as the misanthropy. The actual figure was a still worryingly high 0.5 per cent, or one in 200 girls. A 10-year Teenage Pregnancy Strategy was created to tackle the problem, and the figure for 2018 — 16.8 conceptions per 1000 women aged between 15 and 17 — is the lowest since records began.

This is why census records are important: they inform enlightened and effective government action. The statistician John Rickman said as much in a paper written in 1796, but his campaign for a national census only really caught on two years later, when the clergyman Thomas Malthus scared the living daylights out of everyone with his “Essay on the Principle of Population”. Three years later, ministers rattled by Malthus’s catalogue of checks on the population of primitive societies — war, pestilence, famine, and the rest — peeked through their fingers at the runaway population numbers for 1801.

The population of England then was the same as the population of Greater London now. The population of Scotland was almost exactly the current population of metropolitan Glasgow.

Better to have called it “The Official History of Britons”. Chapter by chapter, the authors lead us (wisely, if not too well) from Birth, through School, into Work and thence down the maw of its near neighbour, Death, reflecting all the while on what a difference two hundred years have made to the character of each life stage.

The character of government has changed, too. Rickman wanted a census because he and his parliamentary colleagues had almost no useful data on the population they were supposed to serve. The job of the ONS now, the writers point out, “is to try to make sure that policymakers and citizens can know at least as much about their populations and economies as the internet behemoths.”

It’s true: a picture of the state of the nation taken every ten years just doesn’t provide the granularity that could be fetched, more cheaply and more efficiently, from other sources: “smaller surveys, Ordnance Survey data, GP registrations, driving licence details…”

But this too is true: near where I live there is a pedestrian crossing. There is a button I can push, to change the lights, to let me cross the road. I know that in daylight hours, the button is a dummy, that the lights are on a timer, set in some central office, to smooth the traffic flow. Still, I press that button. I like that button. I appreciate having my agency acknowledged, even in a notional, fanciful way.

Next year, 2021, I will tell the census who and what I am. It’s my duty as a citizen, and also my right, to answer how I will. If, in 2031, the state decides it does not need to ask me who I am, then my idea of myself as a citizen, notional as it is, fanciful as it is, will be impoverished.

Thoughts from a Camden bench

Reading The Lonely Century by Noreena Hertz for the Telegraph, 26 September 2020

Economist Noreena Hertz’s new book, about our increasing loneliness in a society geared to mutual exploitation, is explosive stuff. I would guess it was commissioned a while ago, then retrofitted for the Covid-19 pandemic — though I defy you to find any unsightly welding marks. Hertz is too good a writer for that, and her idea too timely, too urgent, and too closely argued to be upstaged by a mere crisis.

Loneliness is bad for our physical and mental health. Lack of social interaction makes us both labile and aggressive, suckers for any Dionysian movement that offers us even a shred of belonging. These lines of argument precede Covid-19 by years, and have been used to explain everything from changing dating patterns among the young to Donald Trump’s win in 2016. But, goodness, what a highly salted dish they make today, now that we’re consigned to our homes and free association is curbed by law.

Under lockdown, we’re now even more reliant on the internet to maintain our working and personal identities. Here again, our immediate experiences sharpen Hertz’s carefully thought-out arguments. Social media is making us unhappy. It’s eroding our civility. It’s driving up rates of suicide. And so on — the arguments here are well-rehearsed, though Hertz’s synthesis is certainly compelling. “Connecting the world may be their goal,” she writes, thinking of the likes of Instagram and TikTok, “but it seems that if in the process connections become shallower, crueller or increasingly distorted, so be it.”

Now that we spend so much time indoors, we’re becoming ever more aware of how our outdoor civic space has been dismantled. And here is Hertz once again, waiting for us outside the shuttered wreck of an abandoned library. Actually libraries are the least of it: these days we don’t even get to exchange a friendly word with the supermarket staff who used to check out our shopping. And heaven help us if, on our way back to house arrest, we try to take the weight off our feet on one of those notorious municipal “Camden benches” — tiered and slanted blocks of concrete designed, not to allow public rest of any sort, but to keep “loiterers” moving.

Having led us, Virgil-like, through the circles of virtual and IRL Hell, Hertz ushers us into the purgatory she calls The Loneliness Economy (the capital letters are hers), and describes some of the ways the private sector has tried to address our cultural atomisation. Consider, for example, the way WeWork’s designers built the corridors and stairways in their co-working spaces deliberately narrow, so that users would have to make eye contact with each other. Such strategies are a bit “nudgy” for my taste, but sooner them than that bloody Camden bench.

The trouble with commercialising solutions for loneliness should be obvious: the lonelier we are, the more money this sector will make. So the underlying social drivers of loneliness will be ignored, and only the symptoms will be addressed. “They want to sell the benefit of working in close proximity with others, but with none of the social buy-in, the *hard work* that community requires,” Hertz writes, and wonders whether the people joining many of these new, shiny, commercialised communities “have the time or lifestyles that community-building demands.”

Bringing such material to life necessarily means cherry-picking your examples. An impatient or hostile reader might take exception to the graduate student who spent so long curating her “Jen Goes Ziplining” Instagram post that she never actually went ziplining; also to the well-paid executive who lives out of his car and spends all his money on platonic cuddling services. The world is never short of foolish and broken people, and two swallows do not make a summer.

Still, I can’t see how Hertz’s account is harmed by such vividly rendered first-person research. And readers keen to see Hertz’s working have 130 pages of notes to work from.

More seriously, The Lonely Century suffers from the same limitation as Rutger Bregman’s recent (and excellent) Humankind, about how people are basically good. Let’s give each other a break, says Bregman. Let’s give each other the time of day, says Hertz. But neither author (and it’s not for want of trying) can muster much evidence to suggest the world is turning in the direction they would favour. Bregman bangs on about a school that’s had its funding cut; Hertz about a community cafe that’s closed after running out of money. There are other stories, lots of them, but they are (as Hertz herself concedes at one point) a bit “happy-clappy”.

One of Hertz’s many pre-publication champions calls her book “inspiring”. All it inspired in me, I’m afraid, was terror. The world is full of socially deprogrammed zombies. The Technological Singularity is here, the robots are us, and our open plan office-bound Slack messages constitute, at a global level, some great vegetative Overmind.

Hertz doesn’t go this far. But I do. Subtitled “coming together in a world that’s pulling apart”, The Lonely Century confirmed me in my internal exile, and had me bolting the door to the world even more firmly. I don’t even care that I am the problem Hertz is working so desperately hard to fix. I’m not coming out until it’s all over.

Know when you’re being played

Calling Bullshit by Jevin D West and Carl T Bergstrom, and Science Fictions by Stuart Ritchie, reviewed for The Telegraph, 8 August 2020

Last week I received a press release headlined “1 in 4 Brits say ‘No’ to Covid vaccine”. This was such staggeringly bad news, I decided it couldn’t possibly be true. And sure enough, it wasn’t.

Armed with the techniques taught me by biologist Carl Bergstrom and data scientist Jevin West, I “called bullshit” on this unwelcome news, which after all bore all the hallmarks of clickbait.

For a start, the question on which the poll was based was badly phrased. On closer reading it turns out that 25 per cent would decline if the government “made a Covid-19 vaccine available tomorrow”. Frankly, if it was offered *tomorrow* I’d be a refusenik myself. All things being equal, I prefer my medicines tested first.

But what of the real meat of the claim — that daunting figure of “25 per cent”? It turns out that a sample of 2,000 was selected from a sample of 17,000 drawn from the self-selecting community of subscribers to a lottery website. But hush my cynicism: I am assured that the sample of 2,000 was “within +/-2% of ONS quotas for Age, Gender, Region, SEG, and 2019 vote, using machine learning”. In other words, some effort has been made to make the sample of 2,000 representative of the UK population (but only on five criteria, which is not very impressive. And that whole “+/-2%” business means that up to 40 of the sample weren’t representative of anything).
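
To make the arithmetic concrete, here is a minimal sketch (my own illustration, not anything from Calling Bullshit) of what that “+/-2%” tolerance and that sample size actually buy you, under the generous assumption that the 2,000 respondents were a simple random sample:

# A rough illustration of the poll arithmetic discussed above (my sketch, not
# Bergstrom and West's).
import math

sample = 2000           # respondents, selected from 17,000 lottery-site subscribers
quota_tolerance = 0.02  # the "+/-2%" tolerance on ONS quotas quoted in the press release

# How many respondents that tolerance allows to fall outside the target profile:
misallocated = sample * quota_tolerance
print(f"Up to {misallocated:.0f} of {sample} respondents may not match the quotas")

# Standard 95% margin of error for a reported proportion of 25%,
# assuming (generously) a simple random sample of this size:
p = 0.25
moe = 1.96 * math.sqrt(p * (1 - p) / sample)
print(f"Margin of error: +/- {100 * moe:.1f} percentage points")
# Roughly +/- 1.9 points, and that is before any allowance for the
# self-selected population the sample was drawn from, which is the real problem.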

For this, “machine learning” had to be employed (and, later, “a proprietary machine learning system”)? Well, of course not.  Mention of the miracle that is artificial intelligence is almost always a bit of prestidigitation to veil the poor quality of the original data. And anyway, no amount of “machine learning” can massage away the fact that the sample was too thin to serve the sweeping conclusions drawn from it (“Only 1 in 5 Conservative voters (19.77%) would say No” — it says, to two decimal places, yet!) and is anyway drawn from a non-random population.

Exhausted yet? Then you may well find Calling Bullshit essential reading. Even if you feel you can trudge through verbal bullshit easily enough, this book will give you the tools to swim through numerical snake-oil. And this is important, because numbers easily slip past the defences we put up against mere words. Bergstrom and West teach a course at the University of Washington from which this book is largely drawn, and hammer this point home in their first lecture: “Words are human constructs,” they say; “Numbers seem to come directly from nature.”

Shake off your naive belief in the truth or naturalness of the numbers quoted in news stories and advertisements, and you’re halfway towards knowing when you’re being played.

Say you diligently applied the lessons in Calling Bullshit, and really came to grips with percentages, causality, selection bias and all the rest. You may well discover that you’re now ignoring everything — every bit of health advice, every overwrought NASA announcement about life on Mars, every economic forecast, every exit poll. Internet pioneer Jaron Lanier reached this point last year when he came up with Ten Arguments for Deleting Your Social Media Accounts Right Now. More recently the best-selling Swiss pundit Rolf Dobelli has ordered us to Stop Reading the News. Both deplore our current economy of attention, which values online engagement over the provision of actual information (as when, for instance, a review like this one gets headlined “These Two Books About Bad Data Will Break Your Heart”; instead of being told what the piece is about, you’re being sold on the promise of an emotional experience).

Bergstrom and West believe that public education can save us from this torrent of micro-manipulative blither. Their book is a handsome contribution to that effort. We’ve lost Lanier and Dobelli, but maybe the leak can be stopped up. This, essentially, is what the authors are about; they’re shoring up the Enlightenment ideal of a civic society governed by reason.

Underpinning this ideal is science, and the conviction that the world is assembled on a bedrock of fundamentally unassailable truths.

Philosophical nit-picking apart, science undeniably works. But in Science Fictions Stuart Ritchie, a psychologist based at King’s College, shows just how contingent and gimcrack and even shoddy the whole business can get. He has come to praise science, not to bury it; nevertheless, his analyses of science’s current ethical ills — fraud, hype, negligence and so on — are devastating.

The sheer number of problems besetting the scientific endeavour becomes somewhat more manageable once we work out which ills are institutional, which have to do with how scientists communicate, and which are existential problems that are never going away whatever we do.

Our evolved need to express meaning through stories is an existential problem. Without stories, we can do no thinking worth the name, and this means that we are always going to prioritise positive findings over negative ones, and find novelties more charming than rehearsed truths.

Such quirks of the human intellect can be and have been corrected by healthy institutions at least some of the time over the last 400-odd years. But our unruly mental habits run wildly out of control once they are harnessed to a media machine driven by attention.  And the blame for this is not always easily apportioned: “The scenario where an innocent researcher is minding their own business when the media suddenly seizes on one of their findings and blows it out of proportion is not at all the norm,” writes Ritchie.

It’s easy enough to mount a defence of science against the tin-foil-hat brigade, but Ritchie is attempting something much more discomforting: he’s defending science against scientists. Fraudulent and negligent individuals fall under the spotlight occasionally, but institutional flaws are Ritchie’s chief target.

Reading Science Fictions, we see field after field fail to replicate results, correct mistakes, identify the best lines of research, or even begin to recognise talent. In Ritchie’s proffered bag of solutions are desperately needed reforms to the way scientific work is published and cited, and some more controversial ideas about how international mega-collaborations may enable science to catch up on itself and check its own findings effectively (or indeed at all, in the dismal case of economic science).

At best, these books together offer a path back to a civic life based on truth and reason. At worst, they point towards one that’s at least a little bit defended against its own bullshit. Time will tell whether such efforts can genuinely turn the ship around, or are simply here to entertain us with a spot of deckchair juggling. But there’s honest toil here, and a lot of smart thinking with it. Reading both, I was given a fleeting, dizzying reminder of what it once felt like to be a free agent in a factual world.

“Fat with smell, dissonant and dirty”

Reading Robert Muchembled’s Smells: A Cultural History of Odours in Early Modern Times and Isabel Bannerman’s Scent Magic: Notes from a Gardener for the Telegraph, 18 April 2020

The revolt against scentlessness has been gathering for a while. Muchembled namechecks avant garde perfumes with names like Bat and Rhinoceros. A dear friend of mine favours Musc Kublai Khan for its faecal notes. Another spends a small fortune to smell like cat’s piss. Right now I’m wearing Andy Tauer’s Orange Star — don’t approach unless you like Quality Street orange cremes macerated in petrol…

Tyrants and geometers

Reading Proof!: How the World Became Geometrical by Amir Alexander (Scientific American) for the Telegraph, 7 November 2019

The fall from grace of Nicolas Fouquet, Louis XIV’s superintendent of finances, was spectacular and swift. In 1661 he held a fete to welcome the king to his gardens at Vaux-le-Vicomte. The affair was meant to flatter, but its sumptuousness only served to convince the absolutist monarch that Fouquet was angling for power. “On 17 August, at six in the evening Fouquet was the King of France,” Voltaire observed; “at two in the morning he was nobody.”

Soon afterwards, Fouquet’s gardens were grubbed up in an act, not of vandalism, but of expropriation: “The king’s men carefully packed the objects into crates and hauled them away to a marshy town where Louis was intent on building his own dream palace,” the Israeli-born US historian Amir Alexander tells us. “It was called Versailles.”

Proof! explains how French formal gardens reflected, maintained and even disseminated the political ideologies of French monarchs, from “the Affable” Charles VIII in the 15th century to poor doomed Louis XVI, destined for the guillotine in 1793. Alexander claims these gardens were the concrete and eloquent expression of the idea that “geometry was everywhere and structured everything — from physical nature to human society, the state, and the world.”

If you think geometrical figures are abstract artefacts of the human mind, think again. Their regularities turn up in the natural world time and again, leading classical thinkers to hope that “underlying the boisterous chaos and variety that we see around us there may yet be a rational order, which humans can comprehend and even imitate.”

It is hard for us now to read celebrations of nature into the rigid designs of 16th century Fontainebleau or the Tuileries, but we have no problem reading them as expressions of political power. Geometers are a tyrant’s natural darlings. Euclid spent many a happy year in Ptolemaic Egypt. King Hiero II of Syracuse looked out for Archimedes. Geometers were ideologically useful figures, since the truths they uncovered were static and hierarchical. In the Republic, Plato extols the virtues of geometry and advocates for rigid class politics in practically the same breath.

It is not entirely clear, however, how effective these patterns actually were as political symbols. Even as Thomas Hobbes was modishly emulating the logical structure of Euclid’s (geometrical) Elements in the composition of his (political) Leviathan (demonstrating, from first principles, the need for monarchy), the Duc de Saint-Simon, a courtier and diarist, was having a thoroughly miserable time of it in the gardens of Louis XIV’s Versailles: “the violence everywhere done to nature repels and wearies us despite ourselves,” he wrote in his diary.

So not everyone was convinced that Versailles, and gardens of that ilk, revealed the inner secrets of nature.

Of the strictures of classical architecture and design, Alexander comments that today, “these prescriptions seem entirely arbitrary”. I’m not sure that’s right. Classical art and architecture is beautiful, not merely for its antiquity, but for the provoking way it toys with the mechanics of visual perception. The golden mean isn’t “arbitrary”.

It was fetishized, though: Alexander’s dead right about that. For centuries, Versailles was the ideal to which Europe’s grand urban projects aspired, and colonial new-builds could and did out-do Versailles, at least in scale. Of the work of Lutyens and Baker in their plans for the creation of New Delhi, Alexander writes: “The rigid triangles, hexagons, and octagons created a fixed, unalterable and permanent order that could not be tampered with.”

He’s setting colonialist Europe up for a fall: that much is obvious. Even as New Delhi and Saigon’s Boulevard Norodom and all the rest were being erected, back in Europe mathematicians Janos Bolyai, Carl Friedrich Gauss and Bernhard Riemann were uncovering new kinds of geometry to describe any curved surface, and higher dimensions of any order. Suddenly the rigid, hierarchical order of the Euclidean universe was just one system among many, and Versailles and its forerunners went from being diagrams of cosmic order to being grand days out with the kids.

Well, Alexander needs an ending, and this is as good a place as any to conclude his entertaining, enlightening, and admirably well-focused introduction to a field of study that, quite frankly, is more rabbit-hole than grass.

I was in Washington the other day, sweating my way up to the Lincoln Memorial. From the top I measured the distance, past the needle of the Washington Monument, to Capitol Hill. Major Pierre Charles L’Enfant built all this: it’s a quintessential product of the Versailles tradition. Alexander calls it “nothing less than the Constitutional power structure of the United States set in stone, pavement, trees, and shrubs.”

For nigh-on 250 years tourists have been slogging from one end of the National Mall to the other, re-enacting the passion of the poor Duc de Saint-Simon in Versailles, who complained that “you are introduced to the freshness of the shade only by a vast torrid zone, at the end of which there is nothing for you but to mount or descend.”

Not any more, though. Skipping down the steps, I boarded a bright red electric Uber scooter and sailed electrically east toward Capitol Hill. The whole dignity-dissolving charade was made possible (and cheap) by map-making algorithms performing geometrical calculations that Euclid himself would have recognised. Because the ancient geometer’s influence on our streets and buildings hasn’t really vanished. It’s been virtualised. Algorithmized. Turned into a utility.

Now geometry’s back where it started: just one more invisible natural good.

Pig-philosophy

Reading Science and the Good: The Tragic Quest for the Foundations of Morality by James Davison Hunter and Paul Nedelisky (Yale University Press) for the Telegraph, 28 October 2019

Objective truth is elusive and often surprisingly useless. For ages, civilisation managed well without it. Then came the sixteenth century, and the Wars of Religion, and the Thirty Years War: atrocious conflicts that robbed Europe of up to a third of its population.

Something had to change. So began a half-a-millennium-long search for a common moral compass: something to keep us from wringing each other’s necks. The 18th century French philosopher Condorcet, writing in 1794, expressed the evergreen hope that empiricists, applying themselves to the study of morality, would be able “to make almost as sure progress in these sciences as they had in the natural sciences.”

Today, are we any nearer to understanding objectively how to tell right from wrong?

No. So say James Davison Hunter, a sociologist who in 1991 slipped the term “culture wars” into American political debate, and Paul Nedelisky, a recent philosophy PhD, both from the University of Virginia. For sure, “a modest descriptive science” has grown up to explore our foibles, strengths and flaws, as individuals and in groups. There is, however, no way science can tell us what ought to be done.

Science and the Good is a closely argued, always accessible riposte to those who think scientific study can explain, improve, or even supersede morality. It tells a rollicking good story, too, as it explains what led us to our current state of embarrassed moral nihilism.

“What,” the essayist Michel de Montaigne asked, writing in the late 16th century, “am I to make of a virtue that I saw in credit yesterday, that will be discredited tomorrow, and becomes a crime on the other side of the river?”

Montaigne’s times desperately needed a moral framework that could withstand the almost daily schisms and revisions of European religious life following the Protestant Reformation. Nor was Europe any longer a land to itself. Trade with other continents was bringing Europeans into contact with people who, while eminently businesslike, held to quite unfamiliar beliefs. The question was (and is), how do we live together at peace with our deepest moral differences?

The authors have no simple answer. The reason scientists keep trying to formulate one is the same reason the farmer tried teaching his sheep to fly in the Monty Python sketch: “Because of the enormous commercial possibilities should he succeed.” Imagine conjuring up a moral system that was common, singular and testable: world peace would follow in an instant!

But for every Jeremy Bentham, measuring moral utility against an index of human happiness to inform a “felicific calculus”, there’s a Thomas Carlyle, pointing out the crashing stupidity of the enterprise. (Carlyle called Bentham’s 18th-century utilitarianism “pig-philosophy”, since happiness is the sort of vague, unspecific measure you could just as well apply to animals as to people.)

Hunter and Nedelisky play Carlyle to the current generation of scientific moralists. They range widely in their criticism, and are sympathetic to a fault, but to show what they’re up to, let’s have some fun and pick a scapegoat.

In Moral Tribes (2014), Harvard psychologist Joshua Greene sings Bentham’s praises: “utilitarianism becomes uniquely attractive,” he asserts, “once our moral thinking has been objectively improved by a scientific understanding of morality…”

At worst, this is a statement that eats its own tail. At best, it’s Greene reducing the definition of morality to fit his own specialism, replacing moral goodness with the merely useful. This isn’t nothing, and is at least something which science can discover. But it is not moral.

And if Greene decided tomorrow that we’d all be better off without, say, legs, practical reason, far from faulting him, could only show us how to achieve his goal in the most efficient manner possible. The entire history of the 20th century should serve as a reminder that this kind of thinking — applying rational machinery to a predetermined good — is a joke that palls extremely quickly. Nor are vague liberal gestures towards “social consensus” comforting, or even welcome. As the authors point out, “social consensus gave us apartheid in South Africa, ethnic cleansing in the Balkans, and genocide in Armenia, Darfur, Burma, Rwanda, Cambodia, Somalia, and the Congo.”

Scientists are on safer ground when they attempt to explain how our moral sense may have evolved, arguing that morals aren’t imposed from above or derived from well-reasoned principles, but are values derived from reactions and judgements that improve the odds of group survival. There’s evidence to back this up and much of it is charming. Rats play together endlessly; if the bigger rat wrestles the smaller rat into submission more than three times out of five, the smaller rat trots off in a huff. Hunter and Nedelisky remind us that Capuchin monkeys will “down tools” if experimenters offer them a reward smaller than the one they’ve already offered to other Capuchin monkeys.

What does this really tell us, though, beyond the fact that somewhere, out there, is a lawful corner of necessary reality which we may as well call universal justice, and which complex creatures evolve to navigate?

Perhaps the best scientific contribution to moral understanding comes from studies of the brain itself. Mapping the mechanisms by which we reach moral conclusions is useful for clinicians. But it doesn’t bring us any closer to learning what it is we ought to do.

Sociologists since Edward Westermarck in 1906 have shown how a common (evolved?) human morality might be expressed in diverse practices. But over this is the shadow cast by moral skepticism: the uneasy suspicion that morality may be no more than an emotive vocabulary without content, a series of justificatory fabrications. “Four legs good,” as Snowball had it, “two legs bad.”

But even if it were shown that no-one in the history of the world ever committed a truly selfless act, the fact remains that our mythic life is built, again and again, precisely around an act of self-sacrifice. Pharaonic Egypt had Osiris. Europe and its holdings, Christ. Even Hollywood has Harry Potter. Moral goodness is something we recognise in stories, and something we strive for in life (and if we don’t, we feel bad about ourselves). Philosophers and anthropologists and social scientists have lots of interesting things to say about why this should be so. The life sciences crew would like to say something, also.

But as this generous and thoughtful critique demonstrates, and to quite devastating effect, they just don’t have the words.

The weather forecast: a triumph hiding in plain sight

Reading The Weather Machine by Andrew Blum (Bodley Head) for the Telegraph, 6 July 2019

Reading New York journalist Andrew Blum’s new book has cured me of a foppish and annoying habit. I no longer dangle an umbrella off my arm on sunny days, tripping up my fellow commuters before (inevitably) mislaying the bloody thing on the train to Coulsdon Town. Very late, and to my considerable embarrassment, I have discovered just how reliable the weather forecast is.

My thoroughly English prejudice against the dark art of weather prediction was already set by the time the European Centre for Medium-Range Weather Forecasts opened in Reading in 1979. Then the ECMWF claimed to be able to see three days into the future. Six years later, it could see five days ahead. It knew about Sandy, the deadliest hurricane of 2012, eight days ahead, and it expects to predict high-impact events a fortnight before they happen by the year 2025.

The ECMWF is a world leader, but it’s not an outlier. Look at the figures: weather forecasts have been getting consistently better for 40 straight years. Blum reckons this makes the current global complex of machines, systems, networks and acronyms (and there are lots of acronyms) “a high point of science and technology’s aspirations for society”.

He knows this is a minority view: “The weather machine is a wonder we treat as a banality,” he writes: “a tool that we haven’t yet learned to trust.” The Weather Machine is his attempt to convey the technical brilliance and political significance of an achievement that hides in plain sight.

The machine’s complexity alone is off all familiar charts, and sets Blum a significant challenge. “As a rocket scientist at the Jet Propulsion Laboratory put it to me… landing a spacecraft on Mars requires dealing with hundreds of variables,” he writes; “making a global atmospheric model requires hundreds of thousands.” Blum does an excellent job of describing how meteorological theory and observation were first stitched together, and why even today their relationship is a stormy one.

His story opens in heroic times, with Robert FitzRoy as one of his more engaging heroes. FitzRoy is best remembered for captaining the HMS Beagle and weathering the puppyish enthusiasm of a young Charles Darwin. But his real claim to fame is as a meteorologist. He dreamt up the term “forecast”, turned observations into predictions that saved sailors’ lives, and foresaw with clarity what a new generation of naval observers would look like. Distributed in space and capable of communicating instantaneously with each other, they would be “as if an eye in space looked down on the whole North Atlantic”.

You can’t produce an accurate forecast from observation alone, however. You also need a theory of how the weather works. The Norwegian physicist Vilhelm Bjerknes came up with the first mathematical model of the weather: a set of seven interlinked partial differential equations that handled the fact that the atmosphere is a far from ideal fluid. Sadly, Bjerknes’ model couldn’t yet predict anything — as he himself said, solutions to his equations “far exceed the means of today’s mathematical analysis”. As we see our models of the weather evolve, so we see works of individual genius replaced by systems of machine computation. In the observational realm, something similar happens: the heroic efforts of individual observers throw up trickles of insight that are soon subsumed in the torrent of data streaming from the orbiting artefacts of corporate and state engineering.

The American philosopher Timothy Morton dreamt up the term “hyperobject” to describe things that are too complex and numinous to describe in plain terms. Blum, whose earlier book was Tubes: Behind the Scenes at the Internet (2012), fancies his chances at explaining human-built hyperobjects in solid, clear terms, without recourse to metaphor and poesy. In this book, for example, he recognises the close affinity of military and meteorological infrastructures (the staple of many a modish book on the surveillance state), but resists any suggestion that they are the same system.

His sobriety is impressive, given how easy it is to get drunk on this stuff. In October 1946, technicians at the White Sands Proving Ground in New Mexico installed a camera in the nose cone of a captured V2, and by launching it, yielded photographs of a quarter of the US — nearly a million square miles banded by clouds “stretching hundreds of miles in rows like streets”. This wasn’t the first time a bit of weather kit acted as an expendable test in a programme of weapons development, and it certainly wasn’t the last. Today’s global weather system has not only benefited from military advancements in satellite positioning and remote sensing; it has made those systems possible. Blum allows that “we learned to see the whole earth thanks to the technology built to destroy the whole earth”. But he avoids paranoia.

Indeed, he is much more impressed by the way countries going at each other hammer and tongs on the political stage nevertheless collaborated closely and well on a global weather infrastructure. Point four of John F Kennedy’s famous 1961 speech on “Urgent National Needs” called for “a satellite system for worldwide weather observation”, and it wasn’t just militarily useful American satellites he had in mind for the task: in 1962 Harry Wexler of the U.S. Weather Bureau worked with his Soviet counterpart Viktor Bugaev on a report proposing a “World Weather Watch”, and by 1963 there was, Blum finds, “a conscious effort by scientists — on both sides of the Iron Curtain, in all corners of the earth — to design an integrated and coordinated apparatus” — this at a time when weather satellites were so expensive they could be justified only on national security grounds.

Blum’s book comes a little bit unstuck at the end. A final chapter that could easily have filled a third of the book is compressed into just a few pages’ handwaving and special pleading, as he conjures up a vision of a future in which the free and global nature of weather information has ceased to be a given and the weather machine, that “last bastion of international cooperation”, has become just one more atomised ghost of a future the colonial era once promised us.

Why end on such a minatory note? The answer, which is by no means obvious, is to be found in Reading. Today 22 nations pay for the ECMWF’s maintenance of a pair of Cray supercomputers. The fastest in the world, these machines must be upgraded every two years. In the US, meanwhile, weather observations rely primarily on the health of four geostationary satellites, at a cost of $11 billion. (America’s whole National Weather Service budget costs only around $1 billion.)

Blum leaves open the question, How is an organisation built by nation-states, committed to open data and born of a global view, supposed to work in a world where information lives on private platforms and travels across private networks — a world in which billions of tiny temperature and barometric sensors, “in smartphones, home devices, attached to buildings, buses or airliners,” are aggregated by the likes of Google, IBM or Amazon?

One thing is disconcertingly clear: Blum’s weather machine, which in one sense is a marvel of continuing modernity, is also, truth be told, a dinosaur. It is ripe for disruption, of a sort that the world, grown so reliant on forecasting, could well do without.

“A wonderful moral substitute for war”

Reading Oliver Morton’s The Moon and Robert Stone and Alan Andres’s Chasing the Moon for The Telegraph, 18 May 2019

I have Arthur to thank for my earliest memory: being woken and carried into the living room on 20 July 1969 to see Neil Armstrong set foot on the moon.

Arthur is a satellite dish, part of the Goonhilly Earth Satellite Station in Cornwall. It carried the first ever transatlantic TV pictures from the USA to Europe. And now, in a fit of nostalgia, I am trying to build a cardboard model of the thing. The anniversary kit I bought comes with a credit-card sized Raspberry Pi computer that will cause a little red light to blink at the centre of the dish, every time the International Space Station flies overhead.

The geosynchronous-satellite network that Arthur Clarke envisioned in 1945 came into being at the same time as men landed on the Moon. Intelsat III F-3 was moved into position over the Indian Ocean a few days before Apollo 11’s launch, completing the world’s first geostationary-satellite network. The Space Race has bequeathed us a world steeped in fractured televisual reflections of itself.

Of Apollo itself, though, what actually remains? The Columbia capsule is touring the United States: it’s at Seattle’s Museum of Flight for this year’s fiftieth anniversary. And Apollo’s Mission Control Center in Houston is getting a makeover, its flight control consoles refurbished, its trash cans, book cases, ashtrays and orange polyester seat cushions all restored.

On the Moon there are some flags; some experiments, mostly expired; an abandoned car.

In space, where it matters, there’s nothing. The intention had been to build moon-going craft in orbit. This would have involved building a space station first. In the end, spooked by a spate of Soviet launches, NASA decided to cut to the chase, sending two small spacecraft up on a single rocket. One got three astronauts to the moon. The other, a tiny landing bug (standing room only) dropped two of them onto the lunar surface and puffed them back up into lunar orbit, where they rejoined the command module and headed home. It was an audacious, dangerous and triumphant mission — but it left nothing useful or reusable behind.

In The Moon: A history for the future, science writer Oliver Morton observes that without that peculiar lunar orbital rendezvous plan, Apollo would at least have left some lasting infrastructure in orbit to pique someone’s ambition. As it was, “Every Apollo mission would be a single shot. Once they were over, it would be in terms of hardware — even, to a degree, in terms of expertise — as if they had never happened.”

Morton and I belong to the generation sometimes dubbed Apollo’s orphans. We grew up (rightly) dazzled by Apollo’s achievement. It left us, however, with the unshakable (and wrong) belief that our enthusiasm was common, something to do with what we were taught to call humanity’s “outward urge”. The refrain was constant: how in people there was this inborn desire to leave their familiar surroundings and explore strange new worlds.

Nonsense. Over a century elapsed between Columbus’s initial voyage and the first permanent English settlements. One of the more surprising findings of recent researches into the human genome is that, left to their own devices, people hardly move more than a few weeks’ walking distance from where they were born.

This urge, that felt so visceral, so essential to one’s idea of oneself: how could it possibly turn out to be the psychic artefact of a passing political moment?

Documentary makers Robert Stone and Alan Andres answer that particular question in Chasing the Moon, a tie-in to their forthcoming series on PBS. It’s a comprehensive account of the Apollo project, and sends down deep roots: to the cosmist speculations of fin de siècle Russia, the individualist eccentricities of Germany’s Verein für Raumschiffahrt (Space Travel Society), and the deceptively chummy brilliance of the British Interplanetary Society, who used to meet in the pub.

The strength of Chasing the Moon lies not in any startling new information it divulges (that boat sailed long ago) but in the connections it makes, and the perspectives it brings to bear. It is surprising to find the New York Times declaring, shortly after the Bay of Pigs fiasco, that Kennedy isn’t nearly as interested in building a space programme as he should be. (“So far, apparently, no one has been able to persuade President Kennedy of the tremendous political, psychological, and prestige importance, entirely apart from the scientific and military results, of an impressive space achievement.”) And it is worthwhile to be reminded that, less than a month after his big announcement, Kennedy was trying to persuade Khrushchev to collaborate on the Apollo project, and that he approached the Soviets with the idea a second time, just days before his assassination in Dallas.

For Kennedy, Apollo was a strategic project, “a wonderful moral substitute for war” (to slightly misapply Ray Bradbury’s phrase), and all to do with manned missions. NASA administrator James Webb, on the other hand, was a true believer. He could see no end to the good that big, organised government projects could achieve by way of education and science and civil development. In his modesty and dedication, Webb resembled no-one so much as the first tranche of bureaucrat-scientists in the Soviet Union. He never featured on a single magazine cover, and during his entire tenure he attended only one piloted launch from Cape Kennedy. (“I had a job to do in Washington,” he explained.)

The two men worked well enough together, their priorities dovetailing neatly in the role NASA took in promoting the Civil Rights Act and the Voting Rights Act and the government’s equal opportunities program. (NASA’s Saturn V designer, the former Nazi rocket scientist Wernher von Braun, became an unlikely and very active campaigner, the New York Times naming him “one of the most outspoken spokesmen for racial moderation in the South.”) But progress was achingly slow.

At its height, the Apollo programme employed around two per cent of the US workforce and swallowed four per cent of its GDP. It was never going to be agile enough, or quotidian enough, to achieve much in the area of effecting political change. There were genuine attempts to recruit and train a black pilot for the astronaut programme. But comedian Dick Gregory had the measure of this effort: “A lot of people was happy that they had the first Negro astronaut. Well, I’ll be honest with you, not myself. I was kind of hoping we’d get a Negro airline pilot first.”

The big social change the Apollo program did usher in was television. (Did you know that failing to broadcast the colour transmissions from Apollo 11 proved so embarrassing to the apartheid government in South Africa that they afterwards created a national television service?)

But the moon has always been a darling of the film business. Never mind Georges Méliès’s Trip to the Moon. How about Fritz Lang ordering a real rocket launch for the premiere of Frau im Mond? This was the film that followed Metropolis, and Lang roped in no less a physicist than Hermann Oberth to build it for him. When his 1.8-metre tall liquid-propellant rocket came to nought, Oberth set about building one eleven metres tall powered by liquid oxygen. They were going to launch it from the roof of the cinema. Luckily they ran out of money.

The Verein für Raumschiffahrt was founded by men who had acted as scientific consultants on Frau im Mond. Von Braun became one of their number, before he was whisked away by the Nazis to build rockets for the war effort. Without von Braun, the VfR grew nuttier by the year. Oberth, who worked for a time in the US after the war, went the same way, his whole conversation swallowed by UFOs and extraterrestrials and glimpses of Atlantis. When he went back to Germany, no-one was very sorry to see him go.

What is it about dreaming of new worlds that encourages the loner in us, the mooncalf, the cave-dweller, wedded to asceticism, always shying from the light?

After the first Moon landing, the philosopher (and sometime Nazi supporter) Martin Heidegger said in interview, “I at any rate was frightened when I saw pictures coming from the moon to the earth… The uprooting of man has already taken place. The only thing we have left is purely technological relationships. This is no longer the earth on which man lives.”

Heidegger’s worries need a little unpacking, and for that we turn to Morton’s cool, melancholy The Moon: A History for the Future. Where Stone and Andres collate and interpret, Morton contemplates and introspects. Stone and Andres are no stylists. Morton’s flights of informed fancy include a geological formation story for the moon that Lars von Trier’s film Melancholia cannot rival for spectacle and sentiment.

Stone and Andres stand with Walter Cronkite, whose puzzled response to young people’s opposition to Apollo — “How can anybody turn off from a world like this?” — stands as an epitaph for Apollo’s orphans everywhere. Morton, by contrast, does understand why it’s proved so easy for us to switch off from the Moon. At any rate he has some good ideas.

Gertrude Stein, never a fan of Oakland, once wrote of the place, “There is no there there.” If Morton’s right she should have tried the Moon, a place whose details “mostly make no sense.”

“The landscape,” Morton explains, “may have features that move one into another, slopes that become plains, ridges that roll back, but they do not have stories in the way a river’s valley does. It is, after all, just the work of impacts. The Moon’s timescape has no flow; just punctuation.”

The Moon is Heidegger’s nightmare realised. It can never be a world of experience. It can only be a physical environment to be coped with technologically. It’s dumb, without a story of its own to tell, so much “in need of something but incapable of anything”, in Morton’s telling phrase, that you can’t even really say that it’s dead.

So why did we go there, when we already knew that it was, in the words of US columnist Milton Mayer, a “pulverised rubble… like Dresden in May or Hiroshima in August”?

Apollo was the US’s biggest, brashest entry in its heart-stoppingly exciting – and terrifying – political and technological competition with the Soviet Union. This is the matter of Stone and Andres’s Chasing the Moon, as full a history as one could wish for, clear-headed about the era and respectful of the extraordinary efforts and qualities of the people involved.

But while Morton is no less moved by Apollo’s human adventure, we turn to his book for a cooler and more distant view. Through Morton’s eyes we begin to see, not only what the moon actually looks like (meaningless, flat, gentle, a South Downs gone horribly wrong) but why it conjures so much disbelief in those who haven’t been there.

A year after the first landing the novelist Norman Mailer joked: “In another couple of years there will be people arguing in bars about whether anyone even went to the Moon.” He was right. Claims that the moon landings were fake arose the moment the Saturn Vs stopped flying in 1972, and no wonder. In a deep and tragic sense Apollo was fake, in the sense that it didn’t deliver the world it had promised.

And let’s be clear here: the world it promised would have been wonderful. Never mind the technology: that was never the core point. What really mattered was that at the height of the Vietnam war, we seemed at last to have found that wonderful moral substitute for war. “All of the universe doesn’t care if we exist or not,” Ray Bradbury wrote, “but we care if we exist… This is the proper war to fight.”

Why has space exploration not united the world around itself? It’s easy to blame ourselves and our lack of vision. “It’s unfortunate,” Lyndon Johnson once remarked to the astronaut Wally Schirra, “but the way the American people are, now that they have developed all of this capability, instead of taking advantage of it, they’ll probably just piss it all away…” This is the mordant lesson of Stone and Andres’s otherwise uplifting Chasing the Moon.

Oliver Morton’s The Moon suggests a darker possibility: that the fault lies with the Moon itself, and, by implication, with everything that lies beyond our little home.

Morton’s Moon is a place defined by absences, gaps and silence. He makes a poetry of it for a while; he toys with thoughts of future settlement; he explores the commercial possibilities. In the end, though, what can this uneventful satellite of ours ever possibly be but what it is: “just dry rocks jumbled”?


A world that has run out of normal

Reading The Uninhabitable Earth: A Story of the Future by David Wallace-Wells for the Telegraph, 16 February 2019

As global temperatures rise, and the mean sea-level with them, I have been tracing the likely flood levels of the Thames Valley, to see which of my literary rivals will disappear beneath the waves first. I live on a hill, and what I’d like to say is: you’ll be stuck with me a while longer than most. But on the day I had set aside to consume David Wallace-Wells’s terrifying account of climate change and the future of our species (there isn’t one), the water supply to my block was unaccountably cut off.

Failing to make a cup of tea reminded me, with some force, of what ought to be obvious: that my hill is a post-apocalyptic death-trap. I might escape the floods, but without clean water, food or power, I’ll be lucky to last a week.

The first half of The Uninhabitable Earth is organised into chapters that deal separately with famines, floods, fires, droughts, brackish oceans, toxic winds, war and all the other manifest effects of anthropogenic climate change (there are many more than four horsemen in this Apocalypse). At the same time, the author reveals, paragraph by paragraph, how these ever-more-frequent disasters join up in horrific cascades, all of which erode human trust to the point where civic life collapses.

The human consequences of climate disaster are going to be ugly. When a million refugees from the Syrian civil war started arriving in Europe in 2015, far-right parties entered mainstream political discourse for the first time in decades. By 2050, the United Nations predicts, climate change could create 200 million refugees. So buckle up. The disgust response with which we greet strangers on our own land is something we conscientiously suppress these days. But it’s still there: an evolved response that in less sanitary times got us through more than one plague.

That such truths go largely unspoken says something about the cognitive dissonance in which our culture is steeped. We just don’t have the mental tools to hold climate change in our heads. Amitav Ghosh made this clear enough in The Great Derangement (2016), which explains why the traditional novel is so hopeless at handling a world that has run out of normal, forgotten how to repeat itself, and will never be any sort of normal again.

Writers, seeking to capture the contemporary moment, resort to science fiction. But the secret, sick appeal of post-apocalyptic narratives, from Richard Jefferies’s After London on, is that in order to be stories at all their heroes must survive. You can only push nihilism so far. J G Ballard couldn’t escape that bind. Neither could Cormac McCarthy. Despite our most conscientious attempts at utter bloody bleakness, the human spirit persists.

Wallace-Wells admits as much. When he thinks of his own children’s future, denizens of a world plunging ever deeper into its sixth major extinction event, his despair melts and his heart fills with excitement. Humans will cling to life on this ever less habitable earth for as long as they can. Quite right, too.

Wallace-Wells is deputy editor of New York magazine. In July 2017 he wrote a cover story outlining worst-case scenarios for climate change. His pessimism proved salutary: The Uninhabitable Earth has been much anticipated.

In the first half of the book the author channels former US vice-president Al Gore, delivering a blizzard of terrifying facts and knocking the socks off his predecessor’s An Inconvenient Truth (2006) – not thanks to his native gifts (considerable as they are) but because the climate has deteriorated since then to the point where its decline can now be observed directly, and measured over the course of a human lifetime.

More than half the extra carbon dioxide released into the atmosphere by burning fossil fuels has been added in the past 30 years. This means that “we have done as much damage to the fate of the planet and its ability to sustain human life and civilization since Al Gore published his first book on climate than in all the centuries – all the millennia – that came before.” The oceans are carrying at least 15 per cent more heat energy than they did in 2000. Twenty-two per cent of the earth’s landmass was altered by humans between 1992 and 2015. In Sweden, in 2018, forests in the Arctic Circle went up in flames. On and on like this. Don’t shoot the messenger, but “we have now engineered as much ruin knowingly as we ever managed in ignorance.”

The trouble is not that the future is bleak. It’s that there is no future. We’re running out of soil. In the United States, it’s eroding ten times faster than it is being replaced. In China and India, soil is disappearing thirty to forty times as fast. Wars over fresh water have already begun. The CO2 in the atmosphere has reduced the nutrient value of plants by about thirty per cent since the 1950s. Within the lifetimes of our children, the hajj will no longer be a feature of Islamic practice: the heat in Mecca will be such that walking seven times anticlockwise around the Kaaba will kill you.

This book may come to be regarded as the last truly great climate assessment ever made. (Is there even time left to pen another?) Some of the phrasing will give pernickety climate watchers conniptions. (Words like “eventually” will be a red rag for them, because they catalyse the reader’s imagination without actually meaning anything.) But the research is extensive and solid, the vision compelling and eminently defensible.

Alas, The Uninhabitable Earth is also likely to be one of the least-often finished books of the year. I’m not criticising the prose, which is always clear and engaging and often dazzling. It’s simply that the more we are bombarded with facts, the less we take in. Treating the reader like an empty bucket into which facts may be poured does not work very well, and works even less well when people are afraid of what you are telling them. “If you have made it this far, you are a brave reader,” Wallace-Wells writes on page 138. Many will give up long before then. Climate scientists have learned the hard way how difficult it is to turn fact into public engagement.

The second half of The Uninhabitable Earth asks why awareness of climate disaster doesn’t translate into enough reasonable action against it. There’s a nuanced mathematical account to be written of how populations reach carrying capacity, run out of resources and collapse; and an even more difficult book that will explain why we ever thought human intelligence would be powerful enough to elude this stark physical reality.

The final chapters of The Uninhabitable Earth provide neither, but neither are they narrowly partisan. Wallace-Wells mostly resists the temptation to blame the mathematical inevitability of our species’ growth and decline on human greed. The worst he finds to say about markets and market capitalism – our usual stock villains – is not that they are evil or psychopathic (certainly no more so than the other political experiments we’ve run in the past 150 years), but that they are not nearly as clever as we had hoped they might be. There is a twisted magnificence in the way we are exploiting, rather than adapting to, the End Times. (Whole Foods in the US, we are told, is now selling “GMO-free” fizzy water.)

The Paris accords of 2015 established keeping warming to two degrees as a global goal. Only a few years ago we were hoping to hold the rise to just 1.5 degrees. What’s the difference? According to the IPCC, that half-degree concession spells death for about 150 million people. Without significantly improved pledges, however, the IPCC reckons that even instituting the Paris accords overnight (and no one has) would still see us top 3.2 degrees of warming. At that point the Antarctic’s ice sheets will collapse, drowning Miami, Dhaka, Shanghai, Hong Kong and a hundred other cities around the world. (Not my hill, though.)

And to be clear: this isn’t what could happen. This is what is already guaranteed to happen. Greenhouse gases work on too long a timescale for us to avoid it. “You might hope to simply reverse climate change,” writes Wallace-Wells; “you can’t. It will outrun all of us.”

“How widespread alarm will shape our ethical impulses toward one another, and the politics that emerge from those impulses,” says Wallace-Wells, “is among the more profound questions being posed by the climate to the planet of people it envelops.”

My bet is the question will never tip into public consciousness: that, on the contrary, we’ll find ways, through tribalism, craft and mischief, to engineer what Wallace-Wells dubs “new forms of indifference”, normalising climate suffering, and exploiting novel opportunities, even as we live and more often die through times that will never be normal again.