“If we’re going to die, at least give us some tits”

The Swedes are besieging the city of Brno. A bit of Googling reveals the year to be 1645. Armed with pick and shovel, the travelling entertainer Tyll Ulenspiegel is trying to undermine the Swedish redoubts when the shaft collapses, plunging him and his fellow miners into utter darkness. It’s difficult to establish even who is still alive and who is dead. “Say something about arses,” someone begs the darkness. “Say something about tits. If we’re going to die, at least give us some tits…”

Reading Daniel Kehlmann’s Tyll for the Times, 25 January 2020


Cutting up the sky

Reading A Scheme of Heaven: Astrology and the Birth of Science by Alexander Boxer
for the Spectator, 18 January 2020

Look up at the sky on a clear night. This is not an astrological game. (Indeed, the experiment’s more impressive if you don’t know one zodiacal pattern from another, and rely solely on your wits.) In a matter of seconds, you will find patterns among the stars.

We can pretty much apprehend up to five objects (pennies, points of light, what-have-you) at a single glance. Totting up more than five objects, however, takes work. It means looking for groups, lines, patterns, symmetries, boundaries.

The ancients cut up the sky into figures, all those aeons ago, for the same reason we each cut up the sky within moments of gazing at it: because if we didn’t, we wouldn’t be able to comprehend the sky at all.

Our pattern-finding ability can get out of hand. During his Nobel lecture in 1973 the zoologist Konrad Lorenz recalled how he once “… mistook a mill for a sternwheel steamer. A vessel was anchored on the banks of the Danube near Budapest. It had a little smoking funnel and at its stern an enormous slowly-turning paddle-wheel.”

Some false patterns persist. Some even flourish. And the brighter and more intellectually ambitious you are, the likelier you are to be suckered. John Dee, Queen Elizabeth’s court philosopher, owned the country’s largest library (it dwarfed any you would find at Oxford or Cambridge). His attempt to tie up all that knowledge in a single divine system drove him into the arms of angels — or at any rate, into the arms of the “scrier” Edward Kelley, whose prodigious output of symbolic tables of course could be read in such a way as to reveal fragments of esoteric wisdom.

This, I suspect, is what most of us think about astrology: that it was a fanciful misconception about the world that flourished in times of widespread superstition and ignorance, and did not, could not, survive advances in mathematics and science.

Alexander Boxer is out to show how wrong that picture is, and A Scheme of Heaven will make you fall in love with astrology, even as it extinguishes any niggling suspicion that it might actually work.

Boxer, a physicist and historian, kindles our admiration for the earliest astronomers. My favourite among his many jaw-dropping stories is the discovery of the precession of the equinoxes. This is the process by which the point where the sun rises at each equinox drifts fractionally against the stars from one year to the next. The equinoxes take 26,000 years to make a full revolution of the zodiac — a tiny motion first detected by Hipparchus around 130 BC. And of course Hipparchus, to make this observation at all, “had to rely on the accuracy of stargazers who would have seemed ancient even to him.”

In short, Hipparchus had a library card. And we know that such libraries existed because the “astronomical diaries” from the Assyrian library at Nineveh stretch from 652 BC to 61 BC, representing possibly the longest continuous research programme ever undertaken in human history.

Which makes astrology not too shoddy, in my humble estimation. Boxer goes much further, dubbing it “the ancient world’s most ambitious applied mathematics problem.”

For as long as lives depend on the growth cycles of plants, the stars will, in a very general sense, dictate the destiny of our species. How far can we push this idea before it tips into absurdity? The answer is not immediately obvious, since pretty much any scheme we dream up will fit some conjunction or arrangement of the skies.

As civilisations become richer and more various, the number and variety of historical events increases, as does the chance that some event will coincide with some planetary conjunction. Around the year 1400, the French Catholic cardinal Pierre D’Ailly concluded his astrological history of the world with a warning that the Antichrist could be expected to arrive in the year 1789, which of course turned out to be the year of the French revolution.

But with every spooky correlation comes an even larger horde of absurdities and fatuities. Today, using a machine-learning algorithm, Boxer shows that “it’s possible to devise a model that perfectly mimics Bitcoin’s price history and that takes, as its input data, nothing more than the zodiac signs of the planets on any given day.”

The Polish science fiction writer Stanislaw Lem explored this territory in his novel The Chain of Chance: “We now live in such a dense world of random chance,” he wrote in 1975, “in a molecular and chaotic gas whose ‘improbabilities’ are amazing only to the individual human atoms.” And this, I suppose, is why astrology eventually abandoned the business of describing whole cultures and nations (a task now handed over to economics, another largely ineffectual big-number narrative) and now, in its twilight, serves merely to gull individuals.

Astrology, to work at all, must assume that human affairs are predestined. It cannot, in the long run, survive the notion of free will. Christianity did for astrology, not because it defeated a superstition, but because it rendered moot astrology’s iron bonds of logic.

“Today,” writes Boxer, “there’s no need to root and rummage for incidental correlations. Modern machine-learning algorithms are correlation monsters. They can make pretty much any signal correlate with any other.”
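Boxer’s “correlation monsters” are easy to conjure. The following sketch is my own illustration, not anything from the book: it fits a random-walk “price history” against two hundred columns of pure noise, standing in for daily planetary zodiac positions. With more predictors than observations, ordinary least squares can reproduce the target series almost exactly:

```python
import numpy as np

rng = np.random.default_rng(0)

n_days = 100
# A "price history": a random walk standing in for any real series.
prices = np.cumsum(rng.normal(size=n_days))

# 200 meaningless predictors -- think "zodiac sign of each planet,
# encoded numerically, day by day". Pure noise.
features = rng.normal(size=(n_days, 200))

# With more predictors than observations, least squares can hit
# the target exactly (lstsq returns the minimum-norm solution).
coef, *_ = np.linalg.lstsq(features, prices, rcond=None)
fitted = features @ coef

in_sample_error = np.max(np.abs(fitted - prices))
print(f"max in-sample error: {in_sample_error:.2e}")  # effectively zero
```

The fit is perfect in-sample and, of course, worthless out of sample — which is exactly the point.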

We are bewitched by big data, and imagine it is something new. We are ever-indulgent towards economists who cannot even spot a global crash. We credulously conform to every algorithmically justified norm. Are we as credulous, then, as those who once took astrological advice as seriously as a medical diagnosis? Oh, for sure.

At least our forebears could plead that they were feeling their way in the dark. The statistical tools you need to sort real correlations from pretty patterns weren’t developed until the late nineteenth century. What’s our excuse?

“Those of us who are enthusiastic about the promise of numerical data to unlock the secrets of ourselves and our world,” Boxer writes, “would do well simply to acknowledge that others have come this way before.”

‘God knows what the Chymists mean by it’

Reading Antimony, Gold, and Jupiter’s Wolf: How the Elements Were Named, by
Peter Wothers, for The Spectator, 14 December 2019

Here’s how the element antimony got its name. Once upon a time (according to the 17th-century apothecary Pierre Pomet), a German monk (moine in French) noticed its purgative effects in animals. Fancying himself as a physician, he fed it to “his own Fraternity… but his Experiment succeeded so ill that every one who took of it died. This therefore was the reason of this Mineral being call’d Antimony, as being destructive of the Monks.”

If this sounds far-fetched, the Cambridge chemist Peter Wothers has other stories for you to choose from, each more outlandish than the last. Keep up: we have 93 more elements to get through, and they’re just the ones that occur naturally on Earth. They each have a history, a reputation and in some cases a folklore. To investigate their names is to evoke histories that are only intermittently scientific. A lot of this enchanting, eccentric book is about mining and piss.

The mining:

There was no reliable lighting or ventilation; the mines could collapse at any point and crush the miners; they could be poisoned by invisible vapours or blown up by the ignition of pockets of flammable gas. Add to this the stifling heat and the fact that some of the minerals themselves were poisonous and corrosive, and it really must have seemed to the miners that they were venturing into hell.

Above ground, there were other difficulties. How to spot the new stuff? What to make of it? How to distinguish it from all the other stuff? It was a job that drove men spare. In a 1657 Physical Dictionary the entry for Sulphur Philosophorum states simply: ‘God knows what the Chymists mean by it.’

Today we manufacture elements, albeit briefly, in the lab. It’s a tidy process, with a tidy nomenclature. Copernicium, einsteinium, berkelium: neologisms as orderly and unevocative as car marques.

The more familiar elements have names that evoke their history. Cobalt, found in a mineral that used to burn and poison miners, is named for the imps that, according to the 16th-century German scholar Georgius Agricola, ‘idle about in the shafts and tunnels and really do nothing, although they pretend to be busy in all kinds of labour’. Nickel is kupfernickel, ‘the devil’s copper’, an ore that looked like valuable copper ore but, once hauled above the ground, appeared to have no value whatsoever.

In this account, technology leads and science follows. If you want to understand what oxygen is, for example, you first have to be able to make it. And Cornelius Drebbel, the maverick Dutch inventor, did make it, in 1620, 150 years before Joseph Priestley got in on the act. Drebbel had no idea what this enchanted stuff was, but he knew it sweetened the air in his submarine, which he demonstrated on the Thames before King James I. Again, if you want a good scientific understanding of alkalis, say, then you need soap, and lye so caustic that when a drunk toppled into a pit of the stuff ‘nothing of him was found but his Linnen Shirt, and the hardest Bones, as I had the Relation from a Credible Person, Professor of that Trade’. (This is Otto Tachenius, writing in 1677. There is a lot of this sort of thing. Overwhelming in its detail as it can be, Antimony, Gold, and Jupiter’s Wolf is wickedly entertaining.)

Wothers does not care to hold the reader’s hand. From page 1 he’s getting his hands dirty with minerals and earths, metals and the aforementioned urine (without which the alchemists, wanting chloride, sodium, potassium and ammonia, would have been at a complete loss) and we have to wait till page 83 for a discussion of how the modern conception of elements was arrived at. The periodic table doesn’t arrive till page 201 (and then it’s Mendeleev’s first table, published in 1869). Henri Becquerel discovers radioactivity barely four pages before the end of the book. It’s a surprising strategy, and a successful one. Readers fall under the spell of the possibilities of matter well before they’re asked to wrangle with any of the more highfalutin chemical concepts.

In 1782, Louis-Bernard Guyton de Morveau published his Memoir upon Chemical Denominations, the Necessity of Improving the System, and the Rules for Attaining a Perfect Language. Countless idiosyncrasies survived his reforms. But chemistry did begin to acquire an orderliness that made Mendeleev’s towering work a century later — relating elements to their atomic structure — a deal easier.

This story has an end. Chemistry as a discipline is now complete. All the major problems have been solved. There are no more great discoveries to be made. Every chemical reaction we do is another example of one we’ve already done. These days, chemists are technologists: they study spectrographs, and argue with astronomers about the composition of the atmospheres around planets orbiting distant stars; they tinker in biophysics labs, and have things to say about protein synthesis. The heroic era of chemical discovery — in which we may fondly recall Gottfried Leibniz extracting phosphorus from 13,140 litres of soldiers’ urine — is past. Only some evocative words remain; and Wothers unpacks them with infectious enthusiasm, and something which in certain lights looks very like love.

Russian enlightenment

Attending Russia’s top non-fiction awards for the TLS, 11 December 2019

Founded in 2008, the Enlightener awards are modest by Western standards. The Russian prize is awarded to writers of non-fiction, and each winner receives 700,000 rubles – just over £8,500. This year’s ceremony took place last month at Moscow’s School of Modern Drama, and its winners included Pyotr Talantov for his book exploring the distinction between modern medicine and its magical antecedents, and Elena Osokina for a work about the state stores that sold food and goods at inflated prices in exchange for foreign currency, gold, silver and diamonds. But the organizers’ efforts also extend to domestic and foreign lecture programmes, festivals and competitions. And at this year’s ceremony a crew from TV Rain (or Dozhd, an independent channel) was present, as journalists and critics mingled with researchers in medicine and physics, who had come to show support for the Zimin Foundation, which is behind the prizes.

The Zimin Foundation is one of those young–old organizations whose complex origin story reflects the Russian state’s relationship with its intelligentsia. It sprang up to replace the celebrated and influential Dynasty Foundation, whose work was stymied by legal controversy in 2015. Dynasty had been paying stipends to young biologists, physicists and mathematicians: sums just enough that jobbing scientists could afford Moscow rents. The scale of the effort grabbed headlines. Its plan for 2015 – the year it fell foul of the Russian government – was going to cost it 435 million rubles: around £5.5 million.

The Foundation’s money came from Dmitry Zimin’s sale, in 2001, of his controlling stake in VimpelCom, Russia’s second-largest telecoms company. Raised on non-fiction and popular science, Zimin decided to use the money to support young researchers. (“It would be misleading to claim that I’m driven by some noble desire to educate humankind”, he remarked in a 2013 interview. “It’s just that I find it exciting.”)

As a child, Zimin had sought escape in the Utopian promises of science. And no wonder: when he was two, his father was killed in a prison camp near Novosibirsk. A paternal uncle was shot three years later, in 1938. He remembers his mother arguing for days with neighbours in their communal apartment about who was going to wash the floors, or where to store luggage. It was so crowded that when his mother remarried, Dmitry barely noticed. In 1947, Eric Ashby, the Australian Scientific Attaché to the USSR, claimed “it can be said without fear of contradiction that nowhere else in the world, not even in America, is there such a widespread interest in science among the common people”. “Science is kept before the people through newspapers, books, lectures, films, exhibitions in parks and museums, and through frequent public festivals in honour of scientists and their discoveries. There is even an annual ‘olympiad’ of physics for Moscow schoolchildren.” Dmitry Zimin was firmly of this generation.

Then there were books, the “Scientific Imaginative Literature” whose authors had a section all of their own at the Praesidium of the Union of Soviet Writers. Romances about radio. Thrillers about industrial espionage. Stirring adventure stories about hydrographic survey missions to the arctic. The best of these science writers won lasting reputations in the West. In 1921 Alexander Oparin had the bold new idea that life resulted from non-living processes; The Origin of Life came out in English translation in New York in 1938. Alexander Luria’s classic neuropsychological case study The Mind of a Mnemonist described the strange world of a client of his, Solomon Shereshevsky, a man with a memory so prodigious it ruined his life. An English translation first appeared in 1960 and is still in print.

By 2013 Zimin, at the age of eighty, was established as one of the world’s foremost philanthropists, a Carnegie Trust medalist like Rockefeller and the Gateses, George Soros and Michael Bloomberg. But that is a problem in a country where the leaders fear successful businesspeople. In May 2015, just two months after Russia’s minister of education and science, Dmitry Livanov, presented Zimin with a state award for services to science, the Dynasty Foundation was declared a “foreign agent”. “So-called foreign funds work in schools, networks move about schools in Russia for many years under the cover of supporting talented youth”, complained Vladimir Putin, in a speech in June 2015. “Actually they are just sucking them up like a vacuum cleaner.” Never mind that Dynasty’s whole point was to encourage homegrown talent to return. (According to the Association of Russian-Speaking Scientists, around 100,000 Russian-speaking researchers work outside the country.)

Dynasty was required to put a label on their publications and other materials to the effect that they received foreign funding. To lie, in other words. “Certainly, I will not spend my own money acting under the trademark of some unknown foreign state”, Zimin told the news agency Interfax on May 26. “I will stop funding Dynasty.” But instead of stopping his funding altogether, Zimin founded a new foundation, which took over Dynasty’s programmes, including the Enlighteners. Constituted to operate internationally, it is a different sort of beast. It does not limit itself to Russia. And on the Monday following this year’s Enlightener awards it announced a plan to establish new university laboratories around the world. The foundation already has scientific projects up and running in New York, Tel Aviv and Cyprus, and cultural projects at Tartu University in Estonia and in London, where it supports Polity Press’s Russian translation programme.

In Russia, meanwhile, history continues to repeat itself. In July 2019 the Science and Education Ministry sent a list of what it later called “recommendations” to the institutions it controls. The ministry must be notified, in detail, of any planned meetings with foreigners, including the names of those attending. At least two Russian researchers must be present at any meeting with foreigners. Contact with foreigners outside work hours is allowed only with a supervisor’s permission. Details of any after-hours contact must be summarized, along with copies of the participants’ passports. This doesn’t just echo the Soviet limits on international communication. It copies them, point by point.

In Soviet times, of course, many scientists and engineers lived in golden cages, enjoying unprecedented social status. But with the Soviet collapse in 1991 came a readjustment in political values that handed the industrial sector to speculators, while leaving experts and technicians without tenure, without prospects; above all, without salaries.

The wheel will keep turning, of course. In 2018 Putin promised that science and innovation were now his top priorities. And things are improving: research and development now receives 1 per cent of the country’s GDP. But Russia has a long way to go to recover its scientific standing, and science does poorly in a politically isolated country. The Enlighteners – Russia’s only major award for non-fiction – are as much an attempt to create a civic space for science as they are a celebration of a genre that has powered Russian dreaming for over a hundred years.

Tyrants and geometers

Reading Proof!: How the World Became Geometrical by Amir Alexander (Scientific American) for the Telegraph, 7 November 2019

The fall from grace of Nicolas Fouquet, Louis XIV’s superintendent of finances, was spectacular and swift. In 1661 he held a fete to welcome the king to his gardens at Vaux-le-Vicomte. The affair was meant to flatter, but its sumptuousness only served to convince the absolutist monarch that Fouquet was angling for power. “On 17 August, at six in the evening Fouquet was the King of France,” Voltaire observed; “at two in the morning he was nobody.”

Soon afterwards, Fouquet’s gardens were grubbed up in an act, not of vandalism, but of expropriation: “The king’s men carefully packed the objects into crates and hauled them away to a marshy town where Louis was intent on building his own dream palace,” the Israeli-born US historian Amir Alexander tells us. “It was called Versailles.”

Proof! explains how French formal gardens reflected, maintained and even disseminated the political ideologies of French monarchs, from “the Affable” Charles VIII in the 15th century to poor doomed Louis XVI, destined for the guillotine in 1793. Alexander claims these gardens were the concrete and eloquent expression of the idea that “geometry was everywhere and structured everything — from physical nature to human society, the state, and the world.”

If you think geometrical figures are abstract artefacts of the human mind, think again. Their regularities turn up in the natural world time and again, leading classical thinkers to hope that “underlying the boisterous chaos and variety that we see around us there may yet be a rational order, which humans can comprehend and even imitate.”

It is hard for us now to read celebrations of nature into the rigid designs of 16th century Fontainebleau or the Tuileries, but we have no problem reading them as expressions of political power. Geometers are a tyrant’s natural darlings. Euclid spent many a happy year in Ptolemaic Egypt. King Hiero II of Syracuse looked out for Archimedes. Geometers were ideologically useful figures, since the truths they uncovered were static and hierarchical. In the Republic, Plato extols the virtues of geometry and advocates for rigid class politics in practically the same breath.

It is not entirely clear, however, how effective these patterns actually were as political symbols. Even as Thomas Hobbes was modishly emulating the logical structure of Euclid’s (geometrical) Elements in the composition of his (political) Leviathan (demonstrating, from first principles, the need for monarchy), the Duc de Saint-Simon, a courtier and diarist, was having a thoroughly miserable time of it in the gardens of Louis XIV’s Versailles: “the violence everywhere done to nature repels and wearies us despite ourselves,” he wrote in his diary.

So not everyone was convinced that Versailles, and gardens of that ilk, revealed the inner secrets of nature.

Of the strictures of classical architecture and design, Alexander comments that today, “these prescriptions seem entirely arbitrary”. I’m not sure that’s right. Classical art and architecture is beautiful, not merely for its antiquity, but for the provoking way it toys with the mechanics of visual perception. The golden mean isn’t “arbitrary”.

It was fetishized, though: Alexander’s dead right about that. For centuries, Versailles was the ideal to which Europe’s grand urban projects aspired, and colonial new-builds could and did out-do Versailles, at least in scale. Of the work of Lutyens and Baker in their plans for the creation of New Delhi, Alexander writes: “The rigid triangles, hexagons, and octagons created a fixed, unalterable and permanent order that could not be tampered with.”

He’s setting colonialist Europe up for a fall: that much is obvious. Even as New Delhi and Saigon’s Boulevard Norodom and all the rest were being erected, back in Europe mathematicians Janos Bolyai, Carl Friedrich Gauss and Bernhard Riemann were uncovering new kinds of geometry to describe any curved surface, and higher dimensions of any order. Suddenly the rigid, hierarchical order of the Euclidean universe was just one system among many, and Versailles and its forerunners went from being diagrams of cosmic order to being grand days out with the kids.

Well, Alexander needs an ending, and this is as good a place as any to conclude his entertaining, enlightening, and admirably well-focused introduction to a field of study that, quite frankly, is more rabbit-hole than grass.

I was in Washington the other day, sweating my way up to the Lincoln Memorial. From the top I measured the distance, past the needle of the Washington Monument, to Capitol Hill. Major Pierre Charles L’Enfant built all this: it’s a quintessential product of the Versailles tradition. Alexander calls it “nothing less than the Constitutional power structure of the United States set in stone, pavement, trees, and shrubs.”

For nigh-on 250 years tourists have been slogging from one end of the National Mall to the other, re-enacting the passion of the poor Duc de Saint-Simon in Versailles, who complained that “you are introduced to the freshness of the shade only by a vast torrid zone, at the end of which there is nothing for you but to mount or descend.”

Not any more, though. Skipping down the steps, I boarded a bright red electric Uber scooter and sailed electrically east toward Capitol Hill. The whole dignity-dissolving charade was made possible (and cheap) by map-making algorithms performing geometrical calculations that Euclid himself would have recognised. Because the ancient geometer’s influence on our streets and buildings hasn’t really vanished. It’s been virtualised. Algorithmized. Turned into a utility.

Now geometry’s back where it started: just one more invisible natural good.


Reading Science and the Good: The Tragic Quest for the Foundations of Morality
by James Davison Hunter and Paul Nedelisky (Yale University Press) for the Telegraph, 28 October 2019

Objective truth is elusive and often surprisingly useless. For ages, civilisation managed well without it. Then came the sixteenth century, and the Wars of Religion, and the Thirty Years War: atrocious conflicts that robbed Europe of up to a third of its population.

Something had to change. So began a half-a-millennium-long search for a common moral compass: something to keep us from wringing each other’s necks. The 18th-century French philosopher Condorcet, writing in 1794, expressed the evergreen hope that empiricists, applying themselves to the study of morality, would be able “to make almost as sure progress in these sciences as they had in the natural sciences.”

Today, are we any nearer to understanding objectively how to tell right from wrong?

No. So say James Davison Hunter, a sociologist who in 1991 slipped the term “culture wars” into American political debate, and Paul Nedelisky, a recent philosophy PhD, both from the University of Virginia. For sure, “a modest descriptive science” has grown up to explore our foibles, strengths and flaws, as individuals and in groups. There is, however, no way science can tell us what ought to be done.

Science and the Good is a closely argued, always accessible riposte to those who think scientific study can explain, improve, or even supersede morality. It tells a rollicking good story, too, as it explains what led us to our current state of embarrassed moral nihilism.

“What,” the essayist Michel de Montaigne asked, writing in the late 16th century, “am I to make of a virtue that I saw in credit yesterday, that will be discredited tomorrow, and becomes a crime on the other side of the river?”

Montaigne’s times desperately needed a moral framework that could withstand the almost daily schisms and revisions of European religious life following the Protestant Reformation. Nor was Europe any longer a land to itself. Trade with other continents was bringing Europeans into contact with people who, while eminently businesslike, held to quite unfamiliar beliefs. The question was (and is), how do we live together at peace with our deepest moral differences?

The authors have no simple answer. The reason scientists keep trying to formulate one is the same reason the farmer tried teaching his sheep to fly in the Monty Python sketch: “Because of the enormous commercial possibilities should he succeed.” Imagine conjuring up a moral system that was common, singular and testable: world peace would follow in an instant!

But for every Jeremy Bentham, measuring moral utility against an index of human happiness to inform a “felicific calculus”, there’s a Thomas Carlyle, pointing out the crashing stupidity of the enterprise. (Carlyle called Bentham’s 18th-century utilitarianism “pig-philosophy”, since happiness is the sort of vague, unspecific measure you could just as well apply to animals as to people.)

Hunter and Nedelisky play Carlyle to the current generation of scientific moralists. They range widely in their criticism, and are sympathetic to a fault, but to show what they’re up to, let’s have some fun and pick a scapegoat.

In Moral Tribes (2014), Harvard psychologist Joshua Greene sings Bentham’s praises: “utilitarianism becomes uniquely attractive,” he asserts, “once our moral thinking has been objectively improved by a scientific understanding of morality…”

At worst, this is a statement that eats its own tail. At best, it’s Greene reducing the definition of morality to fit his own specialism, replacing moral goodness with the merely useful. This isn’t nothing, and is at least something which science can discover. But it is not moral.

And if Greene decided tomorrow that we’d all be better off without, say, legs, practical reason, far from faulting him, could only show us how to achieve his goal in the most efficient manner possible. The entire history of the 20th century should serve as a reminder that this kind of thinking — applying rational machinery to a predetermined good — is a joke that palls extremely quickly. Nor are vague liberal gestures towards “social consensus” comforting, or even welcome. As the authors point out, “social consensus gave us apartheid in South Africa, ethnic cleansing in the Balkans, and genocide in Armenia, Darfur, Burma, Rwanda, Cambodia, Somalia, and the Congo.”

Scientists are on safer ground when they attempt to explain how our moral sense may have evolved, arguing that morals aren’t imposed from above or derived from well-reasoned principles, but are values derived from reactions and judgements that improve the odds of group survival. There’s evidence to back this up and much of it is charming. Rats play together endlessly; if the bigger rat wrestles the smaller rat into submission more than three times out of five, the smaller rat trots off in a huff. Hunter and Nedelisky remind us that capuchin monkeys will “down tools” if experimenters offer them a reward smaller than one they have already offered to other capuchin monkeys.

What does this really tell us, though, beyond the fact that somewhere, out there, is a lawful corner of necessary reality which we may as well call universal justice, and which complex creatures evolve to navigate?

Perhaps the best scientific contribution to moral understanding comes from studies of the brain itself. Mapping the mechanisms by which we reach moral conclusions is useful for clinicians. But it doesn’t bring us any closer to learning what it is we ought to do.

Sociologists since Edward Westermarck in 1906 have shown how a common (evolved?) human morality might be expressed in diverse practices. But over this is the shadow cast by moral skepticism: the uneasy suspicion that morality may be no more than an emotive vocabulary without content, a series of justificatory fabrications. “Four legs good,” as Snowball had it, “two legs bad.”

But even if it were shown that no-one in the history of the world ever committed a truly selfless act, the fact remains that our mythic life is built, again and again, precisely around an act of self-sacrifice. Pharaonic Egypt had Osiris. Europe and its holdings, Christ. Even Hollywood has Harry Potter. Moral goodness is something we recognise in stories, and something we strive for in life (and if we don’t, we feel bad about ourselves). Philosophers and anthropologists and social scientists have lots of interesting things to say about why this should be so. The life sciences crew would like to say something, also.

But as this generous and thoughtful critique demonstrates, and to quite devastating effect, they just don’t have the words.

Normal fish and stubby dinosaurs

Reading Imagined Life by James Trefil and Michael Summers for New Scientist, 20 September 2019

“If you can imagine a world that is consistent with the laws of physics,” say physicist James Trefil and planetary scientist Michael Summers, “then there’s a good chance that it exists somewhere in our galaxy.”

The universe is dark, empty, and expanding, true. But the few parts of it that are populated by matter at all are full of planets. Embarrassingly so: interstellar space itself is littered with hard-to-spot rogue worlds, ejected early on in their solar system’s history, and these worlds may outnumber orbiting planets by a factor of two to one. (Not everyone agrees: some experts reckon rogues may outnumber orbital worlds 1,000 to one. One of the reasons the little green men have yet to sail up to the White House is that they keep hitting space shoals.)

Can we conclude, then, that this cluttered galaxy is full of life? The surprising (and frustrating) truth is that we genuinely have no idea. And while Trefil and Summers are obviously primed to receive with open arms any visitors who happen by, they do a splendid job, in this, their second slim volume together, of explaining just how tentative and speculative our thoughts about exobiology actually are, and why.

Exoplanets came out in 2013; Imagined Life is a sort of sequel and is, if possible, even more accessible. In just 14 pages, the authors outline the physical laws constraining the universe. Then they rattle through the various ways we can define life, and why spotting life on distant worlds is so difficult (“For just about every molecule that we could identify [through spectroscopy] as a potential biomarker of life on an exoplanet, there is a nonbiological production mechanism.”). They list the most likely types of environment on which life may have evolved, from water worlds to Mega Earths (expect “normal fish… and stubby dinosaurs”), from tidally locked planets to wildly exotic (but by no means unlikely) superconducting rogues. And we haven’t even reached the meat of this tiny book yet – a tour, planet by imaginary planet, of the possibilities for life, intelligence, and civilisation in our and other galaxies.

Most strange worlds are far too strange for life, and the more one learns about chemistry, the more sober one’s speculations become. Water is common in the universe, and carbon not hard to find, and this is as well, given the relative uselessness of their nearest equivalents (benzene and silicon, say). The authors argue enthusiastically for the possibilities of life that’s “really not like us”, but they have a hard time making it stick. Carbon-based life is pretty various, of course, but even here there may be unexpected limits on what’s possible. Given that, out of 140 amino acids, only 22 have been recruited in nature, it may be that mechanisms of inheritance converge on a surprisingly narrow set of possibilities.

The trick to finding life in odd places, we discover, is to look not out, but in, and through. “Scientists are beginning to abandon the idea that life has to evolve and persist on the surface of planets,” the authors write, laying the groundwork for their description of an aquatic alien civilisation for whom a mission to the ocean surface “would be no stranger to them than a mission to Mars is to us.”

I’m not sure I buy the authors’ stock assumption that life most likely breeds intelligence, which most likely breeds technology. Nothing in biology, or human history, suggests as much. Humans in their current iteration may be far odder than we imagine. But what the hell: Imagined Life reminds me of those books I grew up with, full of artists’ impressions of the teeming oceans of Venus. Only now, the science is better; the writing is better; and the possibilities, being more focused, are altogether more intoxicating.

The weather forecast: a triumph hiding in plain sight

Reading The Weather Machine by Andrew Blum (Bodley Head) for the Telegraph, 6 July 2019

Reading New York journalist Andrew Blum’s new book has cured me of a foppish and annoying habit. I no longer dangle an umbrella off my arm on sunny days, tripping up my fellow commuters before (inevitably) mislaying the bloody thing on the train to Coulsdon Town. Very late, and to my considerable embarrassment, I have discovered just how reliable the weather forecast is.

My thoroughly English prejudice against the dark art of weather prediction was already set by the time the European Centre for Medium-Range Weather Forecasts opened in Reading in 1979. Then the ECMWF claimed to be able to see three days into the future. Six years later, it could see five days ahead. It knew about Sandy, the deadliest hurricane of 2012, eight days ahead, and it expects to predict high-impact events a fortnight before they happen by the year 2025.

The ECMWF is a world leader, but it’s not an outlier. Look at the figures: weather forecasts have been getting consistently better for 40 straight years. Blum reckons this makes the current global complex of machines, systems, networks and acronyms (and there are lots of acronyms) “a high point of science and technology’s aspirations for society”.

He knows this is a minority view: “The weather machine is a wonder we treat as a banality,” he writes: “a tool that we haven’t yet learned to trust.” The Weather Machine is his attempt to convey the technical brilliance and political significance of an achievement that hides in plain sight.

The machine’s complexity alone is off all familiar charts, and sets Blum a significant challenge. “As a rocket scientist at the Jet Propulsion Laboratory put it to me… landing a spacecraft on Mars requires dealing with hundreds of variables,” he writes; “making a global atmospheric model requires hundreds of thousands.” Blum does an excellent job of describing how meteorological theory and observation were first stitched together, and why even today their relationship is a stormy one.

His story opens in heroic times, with Robert FitzRoy one of his more engaging heroes. FitzRoy is best remembered for captaining HMS Beagle and weathering the puppyish enthusiasm of a young Charles Darwin. But his real claim to fame is as a meteorologist. He dreamt up the term “forecast”, turned observations into predictions that saved sailors’ lives, and foresaw with clarity what a new generation of naval observers would look like. Distributed in space and capable of communicating instantaneously with each other, they would be “as if an eye in space looked down on the whole North Atlantic”.

You can’t produce an accurate forecast from observation alone, however. You also need a theory of how the weather works. The Norwegian physicist Vilhelm Bjerknes came up with the first mathematical model of the weather: a set of seven interlinked partial differential equations that handled the fact that the atmosphere is a far from ideal fluid. Sadly, Bjerknes’ model couldn’t yet predict anything — as he himself said, solutions to his equations “far exceed the means of today’s mathematical analysis”. As we see our models of the weather evolve, so we see works of individual genius replaced by systems of machine computation. In the observational realm, something similar happens: the heroic efforts of individual observers throw up trickles of insight that are soon subsumed in the torrent of data streaming from the orbiting artefacts of corporate and state engineering.

The American philosopher Timothy Morton dreamt up the term “hyperobject” to describe things that are too complex and numinous to describe in plain terms. Blum, whose earlier book was Tubes: Behind the Scenes at the Internet (2012), fancies his chances at explaining human-built hyperobjects in solid, clear terms, without recourse to metaphor and poesy. In this book, for example, he recognises the close affinity of military and meteorological infrastructures (the staple of many a modish book on the surveillance state), but resists any suggestion that they are the same system.

His sobriety is impressive, given how easy it is to get drunk on this stuff. In October 1946, technicians at the White Sands Proving Ground in New Mexico installed a camera in the nose cone of a captured V2, and its launch yielded photographs of a quarter of the US — nearly a million square miles banded by clouds “stretching hundreds of miles in rows like streets”. This wasn’t the first time a bit of weather kit served as an expendable payload in a programme of weapons development, and it certainly wasn’t the last. Today’s global weather system has not only benefited from military advances in satellite positioning and remote sensing; it has made those systems possible. Blum allows that “we learned to see the whole earth thanks to the technology built to destroy the whole earth”. But he avoids paranoia.

Indeed, he is much more impressed by the way countries going at each other hammer and tongs on the political stage nevertheless collaborated closely and well on a global weather infrastructure. Point four of John F Kennedy’s famous 1961 speech on “Urgent National Needs” called for “a satellite system for worldwide weather observation”, and it wasn’t just militarily useful American satellites he had in mind for the task: in 1962 Harry Wexler of the U.S. Weather Bureau worked with his Soviet counterpart Viktor Bugaev on a report proposing a “World Weather Watch”, and by 1963 there was, Blum finds, “a conscious effort by scientists — on both sides of the Iron Curtain, in all corners of the earth — to design an integrated and coordinated apparatus” — this at a time when weather satellites were so expensive they could be justified only on national security grounds.

Blum’s book comes a little bit unstuck at the end. A final chapter that could easily have filled a third of the book is compressed into just a few pages’ handwaving and special pleading, as he conjures up a vision of a future in which the free and global nature of weather information has ceased to be a given and the weather machine, that “last bastion of international cooperation”, has become just one more atomised ghost of a future the colonial era once promised us.

Why end on such a minatory note? The answer, which is by no means obvious, is to be found in Reading. Today, 22 nations pay for the ECMWF’s maintenance of a pair of Cray supercomputers. The fastest in the world, these machines must be upgraded every two years. In the US, meanwhile, weather observations rely primarily on the health of four geostationary satellites, at a cost of $11 billion. (America’s whole National Weather Service budget comes to only around $1 billion.)

Blum leaves open the question: how is an organisation built by nation-states, committed to open data and born of a global view, supposed to work in a world where information lives on private platforms and travels across private networks — a world in which billions of tiny temperature and barometric sensors, “in smartphones, home devices, attached to buildings, buses or airliners,” are aggregated by the likes of Google, IBM or Amazon?

One thing is disconcertingly clear: Blum’s weather machine, which in one sense is a marvel of continuing modernity, is also, truth be told, a dinosaur. It is ripe for disruption, of a sort that the world, grown so reliant on forecasting, could well do without.

All the ghosts in the machine

Reading All the Ghosts in the Machine: Illusions of immortality in the digital age by Elaine Kasket for New Scientist, 22 June 2019

Moving first-hand interviews and unnervingly honest recollections weave through psychologist Elaine Kasket’s first mainstream book, All the Ghosts in the Machine, an anatomy of mourning in the digital age. Unravelling that architecture turns up two distinct but complementary projects.

The first offers some support and practical guidance for people (and especially family members) who are blindsided by the practical and legal absurdities generated when people die in the flesh, while leaving their digital selves very much alive.

For some, the persistence of posthumous data, on Facebook, Instagram or some other corner of the social media landscape, is a source of “inestimable comfort”. For others, it brings “wracking emotional pain”. In neither case is it clear what actions are required, either to preserve, remove or manage that data. As a result, survivors usually oversee the profiles of the dead themselves – always assuming, of course, that they know their passwords. “In an effort to keep the profile ‘alive’ and to stay connected to their dead loved one,” Kasket writes, “a bereaved individual may essentially end up impersonating them.”

It used to be the family who had privileged access to the dead, to their personal effects, writings and photographs. Families are, as a consequence, disproportionately affected by the persistent failure of digital companies to distinguish between the dead and the living.

Who has control over a dead person’s legacy? What unspoken needs are being trammelled when their treasured photographs evaporate or, conversely, when their salacious post-divorce Tinder messages are disgorged? Can an individual’s digital legacy even be recognised for what it is in a medium that can’t distinguish between life and death?

Kasket’s other project is to explore this digital uncanny from a psychoanalytical perspective. Otherwise admirable 19th-century ideals of progress, hygiene and personal improvement have conned us into imagining that mourning is a more or less understood process of “letting go”. Kasket’s account of how this idea gained currency is a finely crafted comedy of intellectual errors.

In fact, grief doesn’t come in stages, and our relationships with the dead last far longer than we like to imagine. All the Ghosts in the Machine opens with an account of the author’s attempt to rehabilitate her grandmother’s bitchy reputation by posting her love letters on Instagram.

“I took a private correspondence that was not intended for me and transformed it from its original functions. I wanted it to challenge others’ ideas, and to affect their emotions… Ladies and gentlemen of today, I present to you the deep love my grandparents held for one another in 1945, ‘True romance’, heart emoticon.”

Eventually, Kasket realised that the version of her grandmother her post had created was no more truthful than the version that had existed before. And by then, of course, it was far too late.

The digital persistence of the dead is probably a good thing in these dissociated times. A culture of continuing bonds with the dead is much to be preferred over one in which we are all expected to “get over it”. But, as Kasket observes, there is much work to do, for “the digital age has made continuing bonds easier and harder all at the same time.”

“A wonderful moral substitute for war”

Reading Oliver Morton’s The Moon and Robert Stone and Alan Andres’s Chasing the Moon for The Telegraph, 18 May 2019

I have Arthur to thank for my earliest memory: being woken and carried into the living room on 20 July 1969 to see Neil Armstrong set foot on the moon.

Arthur is a satellite dish, part of the Goonhilly Earth Satellite Station in Cornwall. It carried the first ever transatlantic TV pictures from the USA to Europe. And now, in a fit of nostalgia, I am trying to build a cardboard model of the thing. The anniversary kit I bought comes with a credit-card sized Raspberry Pi computer that will cause a little red light to blink at the centre of the dish, every time the International Space Station flies overhead.

The geosynchronous-satellite network that Arthur Clarke envisioned in 1945 came into being at the same time as men landed on the Moon. Intelsat III F-3 was moved into position over the Indian Ocean a few days before Apollo 11’s launch, completing the world’s first geostationary-satellite network. The Space Race has bequeathed us a world steeped in fractured televisual reflections of itself.

Of Apollo itself, though, what actually remains? The Columbia capsule is touring the United States: it’s at Seattle’s Museum of Flight for this year’s fiftieth anniversary. And Apollo’s Mission Control Center in Houston is getting a makeover, its flight control consoles refurbished, its trash cans, book cases, ashtrays and orange polyester seat cushions all restored.

On the Moon there are some flags; some experiments, mostly expired; an abandoned car.

In space, where it matters, there’s nothing. The intention had been to build moon-going craft in orbit. This would have involved building a space station first. In the end, spooked by a spate of Soviet launches, NASA decided to cut to the chase, sending two small spacecraft up on a single rocket. One got three astronauts to the moon. The other, a tiny landing bug (standing room only) dropped two of them onto the lunar surface and puffed them back up into lunar orbit, where they rejoined the command module and headed home. It was an audacious, dangerous and triumphant mission — but it left nothing useful or reusable behind.

In The Moon: A history for the future, science writer Oliver Morton observes that without that peculiar lunar orbital rendezvous plan, Apollo would at least have left some lasting infrastructure in orbit to pique someone’s ambition. As it was, “Every Apollo mission would be a single shot. Once they were over, it would be in terms of hardware — even, to a degree, in terms of expertise — as if they had never happened.”

Morton and I belong to the generation sometimes dubbed Apollo’s orphans. We grew up (rightly) dazzled by Apollo’s achievement. It left us, however, with the unshakable (and wrong) belief that our enthusiasm was common, something to do with what we were taught to call humanity’s “outward urge”. The refrain was constant: how in people there was this inborn desire to leave their familiar surroundings and explore strange new worlds.

Nonsense. Over a century elapsed between Columbus’s initial voyage and the first permanent English settlements. One of the more surprising findings of recent research into the human genome is that, left to their own devices, people hardly move more than a few weeks’ walking distance from where they were born.

This urge, that felt so visceral, so essential to one’s idea of oneself: how could it possibly turn out to be the psychic artefact of a passing political moment?

Documentary makers Robert Stone and Alan Andres answer that particular question in Chasing the Moon, a tie-in to their forthcoming series on PBS. It’s a comprehensive account of the Apollo project, and sends down deep roots: to the cosmist speculations of fin-de-siècle Russia, the individualist eccentricities of Germany’s Verein für Raumschiffahrt (Space Travel Society), and the deceptively chummy brilliance of the British Interplanetary Society, who used to meet in the pub.

The strength of Chasing the Moon lies not in any startling new information it divulges (that boat sailed long ago) but in the connections it makes, and the perspectives it brings to bear. It is surprising to find the New York Times declaring, shortly after the Bay of Pigs fiasco, that Kennedy isn’t nearly as interested in building a space programme as he should be. (“So far, apparently, no one has been able to persuade President Kennedy of the tremendous political, psychological, and prestige importance, entirely apart from the scientific and military results, of an impressive space achievement.”) And it is worthwhile to be reminded that, less than a month after his big announcement, Kennedy was trying to persuade Khrushchev to collaborate on the Apollo project, and that he approached the Soviets with the idea a second time, just days before his assassination in Dallas.

For Kennedy, Apollo was a strategic project, “a wonderful moral substitute for war” (to slightly misapply Ray Bradbury’s phrase), and all to do with manned missions. NASA administrator James Webb, on the other hand, was a true believer. He could see no end to the good big organised government projects could achieve by way of education and science and civil development. In his modesty and dedication, Webb resembled no-one so much as the first tranche of bureaucrat-scientists in the Soviet Union. He never featured on a single magazine cover, and during his entire tenure he attended only one piloted launch from Cape Kennedy. (“I had a job to do in Washington,” he explained.)

The two men worked well enough together, their priorities dovetailing neatly in the role NASA took in promoting the Civil Rights Act and the Voting Rights Act and the government’s equal opportunities program. (NASA’s Saturn V designer, the former Nazi rocket scientist Wernher von Braun, became an unlikely and very active campaigner, the New York Times naming him “one of the most outspoken spokesmen for racial moderation in the South.”) But progress was achingly slow.

At its height, the Apollo programme employed around two per cent of the US workforce and swallowed four per cent of its GDP. It was never going to be agile enough, or quotidian enough, to achieve much in the area of effecting political change. There were genuine attempts to recruit and train a black pilot for the astronaut programme. But comedian Dick Gregory had the measure of this effort: “A lot of people was happy that they had the first Negro astronaut. Well, I’ll be honest with you, not myself. I was kind of hoping we’d get a Negro airline pilot first.”

The big social change the Apollo program did usher in was television. (Did you know that failing to broadcast the colour transmissions from Apollo 11 proved so embarrassing to the apartheid government in South Africa that they afterwards created a national television service?)

But the moon has always been a darling of the film business. Never mind Georges Méliès’s A Trip to the Moon. How about Fritz Lang ordering a real rocket launch for the premiere of Frau im Mond? This was the film that followed Metropolis, and Lang roped in no less a physicist than Hermann Oberth to build it for him. When his 1.8-metre-tall liquid-propellant rocket came to nought, Oberth set about building one eleven metres tall, powered by liquid oxygen. They were going to launch it from the roof of the cinema. Luckily, they ran out of money.

The Verein für Raumschiffahrt was founded by men who had acted as scientific consultants on Frau im Mond. Von Braun became one of their number, before he was whisked away by the Nazis to build rockets for the war effort. Without von Braun, the VfR grew nuttier by the year. Oberth, who worked for a time in the US after the war, went the same way, his whole conversation swallowed by UFOs and extraterrestrials and glimpses of Atlantis. When he went back to Germany, no-one was very sorry to see him go.

What is it about dreaming of new worlds that encourages the loner in us, the mooncalf, the cave-dweller, wedded to asceticism, always shying from the light?

After the first Moon landing, the philosopher (and sometime Nazi supporter) Martin Heidegger said in interview, “I at any rate was frightened when I saw pictures coming from the moon to the earth… The uprooting of man has already taken place. The only thing we have left is purely technological relationships. This is no longer the earth on which man lives.”

Heidegger’s worries need a little unpacking, and for that we turn to Morton’s cool, melancholy The Moon: A History for the Future. Where Stone and Andres collate and interpret, Morton contemplates and introspects. Stone and Andres are no stylists. Morton’s flights of informed fancy include a geological formation story for the moon that Lars von Trier’s film Melancholia cannot rival for spectacle and sentiment.

Stone and Andres stand with Walter Cronkite, whose puzzled response to young people’s opposition to Apollo — “How can anybody turn off from a world like this?” — stands as an epitaph for Apollo’s orphans everywhere. Morton, by contrast, does understand why it’s proved so easy for us to switch off from the Moon. At any rate he has some good ideas.

Gertrude Stein, never a fan of Oakland, once wrote of the place, “There is no there there.” If Morton’s right, she should have tried the Moon, a place whose details “mostly make no sense.”

“The landscape,” Morton explains, “may have features that move one into another, slopes that become plains, ridges that roll back, but they do not have stories in the way a river’s valley does. It is, after all, just the work of impacts. The Moon’s timescape has no flow; just punctuation.”

The Moon is Heidegger’s nightmare realised. It can never be a world of experience. It can only be a physical environment to be coped with technologically. It’s dumb, without a story of its own to tell, so much “in need of something but incapable of anything”, in Morton’s telling phrase, that you can’t even really say that it’s dead.

So why did we go there, when we already knew that it was, in the words of US columnist Milton Mayer, a “pulverised rubble… like Dresden in May or Hiroshima in August”?

Apollo was the US’s biggest, brashest entry in its heart-stoppingly exciting – and terrifying – political and technological competition with the Soviet Union. This is the matter of Stone and Andres’s Chasing the Moon, as full a history as one could wish for, clear-headed about the era and respectful of the extraordinary efforts and qualities of the people involved.

But while Morton is no less moved by Apollo’s human adventure, we turn to his book for a cooler and more distant view. Through Morton’s eyes we begin to see, not only what the moon actually looks like (meaningless, flat, gentle, a South Downs gone horribly wrong) but why it conjures so much disbelief in those who haven’t been there.

A year after the first landing the novelist Norman Mailer joked: “In another couple of years there will be people arguing in bars about whether anyone even went to the Moon.” He was right. Claims that the moon landings were fake arose the moment the Saturn Vs stopped flying in 1972, and no wonder. In a deep and tragic sense Apollo was fake: it didn’t deliver the world it had promised.

And let’s be clear here: the world it promised would have been wonderful. Never mind the technology: that was never the core point. What really mattered was that at the height of the Vietnam war, we seemed at last to have found that wonderful moral substitute for war. “All of the universe doesn’t care if we exist or not,” Ray Bradbury wrote, “but we care if we exist… This is the proper war to fight.”

Why has space exploration not united the world around itself? It’s easy to blame ourselves and our lack of vision. “It’s unfortunate,” Lyndon Johnson once remarked to the astronaut Wally Schirra, “but the way the American people are, now that they have developed all of this capability, instead of taking advantage of it, they’ll probably just piss it all away…” This is the mordant lesson of Stone and Andres’s otherwise uplifting Chasing the Moon.

Oliver Morton’s The Moon suggests a darker possibility: that the fault lies with the Moon itself, and, by implication, with everything that lies beyond our little home.

Morton’s Moon is a place defined by absences, gaps, and silence. He makes a poetry of it; for a while he toys with thoughts of future settlement, and he explores the commercial possibilities. In the end, though, what can this uneventful satellite of ours ever possibly be, but what it is: “just dry rocks jumbled”?