Tyrants and geometers

Reading Proof!: How the World Became Geometrical by Amir Alexander (Scientific American) for the Telegraph, 7 November 2019

The fall from grace of Nicolas Fouquet, Louis XIV’s superintendent of finances, was spectacular and swift. In 1661 he held a fete to welcome the king to his gardens at Vaux-le-Vicomte. The affair was meant to flatter, but its sumptuousness only served to convince the absolutist monarch that Fouquet was angling for power. “On 17 August, at six in the evening, Fouquet was the King of France,” Voltaire observed; “at two in the morning he was nobody.”

Soon afterwards, Fouquet’s gardens were grubbed up in an act, not of vandalism, but of expropriation: “The king’s men carefully packed the objects into crates and hauled them away to a marshy town where Louis was intent on building his own dream palace,” the Israeli-born US historian Amir Alexander tells us. “It was called Versailles.”

Proof! explains how French formal gardens reflected, maintained and even disseminated the political ideologies of French monarchs, from Charles VIII “the Affable” in the 15th century to poor doomed Louis XVI, destined for the guillotine in 1793. Alexander claims these gardens were the concrete and eloquent expression of the idea that “geometry was everywhere and structured everything — from physical nature to human society, the state, and the world.”

If you think geometrical figures are abstract artefacts of the human mind, think again. Their regularities turn up in the natural world time and again, leading classical thinkers to hope that “underlying the boisterous chaos and variety that we see around us there may yet be a rational order, which humans can comprehend and even imitate.”

It is hard for us now to read celebrations of nature into the rigid designs of 16th century Fontainebleau or the Tuileries, but we have no problem reading them as expressions of political power. Geometers are a tyrant’s natural darlings. Euclid spent many a happy year in Ptolemaic Egypt. King Hiero II of Syracuse looked out for Archimedes. Geometers were ideologically useful figures, since the truths they uncovered were static and hierarchical. In the Republic, Plato extols the virtues of geometry and advocates for rigid class politics in practically the same breath.

It is not entirely clear, however, how effective these patterns actually were as political symbols. Even as Thomas Hobbes was modishly emulating the logical structure of Euclid’s (geometrical) Elements in the composition of his (political) Leviathan (demonstrating, from first principles, the need for monarchy), the Duc de Saint-Simon, a courtier and diarist, was having a thoroughly miserable time of it in the gardens of Louis XIV’s Versailles: “the violence everywhere done to nature repels and wearies us despite ourselves,” he wrote in his diary.

So not everyone was convinced that Versailles, and gardens of that ilk, revealed the inner secrets of nature.

Of the strictures of classical architecture and design, Alexander comments that today, “these prescriptions seem entirely arbitrary”. I’m not sure that’s right. Classical art and architecture is beautiful, not merely for its antiquity, but for the provoking way it toys with the mechanics of visual perception. The golden mean isn’t “arbitrary”.

It was fetishized, though: Alexander’s dead right about that. For centuries, Versailles was the ideal to which Europe’s grand urban projects aspired, and colonial new-builds could and did outdo Versailles, at least in scale. Of Lutyens and Baker’s plans for New Delhi, Alexander writes: “The rigid triangles, hexagons, and octagons created a fixed, unalterable and permanent order that could not be tampered with.”

He’s setting colonialist Europe up for a fall: that much is obvious. Even as New Delhi and Saigon’s Boulevard Norodom and all the rest were being erected, back in Europe mathematicians Janos Bolyai, Carl Friedrich Gauss and Bernhard Riemann were uncovering new kinds of geometry to describe any curved surface, and higher dimensions of any order. Suddenly the rigid, hierarchical order of the Euclidean universe was just one system among many, and Versailles and its forerunners went from being diagrams of cosmic order to being grand days out with the kids.

Well, Alexander needs an ending, and this is as good a place as any to conclude his entertaining, enlightening, and admirably well-focused introduction to a field of study that, quite frankly, is more rabbit-hole than grass.

I was in Washington the other day, sweating my way up to the Lincoln Memorial. From the top I measured the distance, past the needle of the Washington Monument, to Capitol Hill. Major Pierre Charles L’Enfant laid all this out: it’s a quintessential product of the Versailles tradition. Alexander calls it “nothing less than the Constitutional power structure of the United States set in stone, pavement, trees, and shrubs.”

For more than two centuries tourists have been slogging from one end of the National Mall to the other, re-enacting the passion of the poor Duc de Saint-Simon in Versailles, who complained that “you are introduced to the freshness of the shade only by a vast torrid zone, at the end of which there is nothing for you but to mount or descend.”

Not any more, though. Skipping down the steps, I boarded a bright red electric Uber scooter and sailed east toward Capitol Hill. The whole dignity-dissolving charade was made possible (and cheap) by map-making algorithms performing geometrical calculations that Euclid himself would have recognised. Because the ancient geometer’s influence on our streets and buildings hasn’t really vanished. It’s been virtualised. Algorithmized. Turned into a utility.

Now geometry’s back where it started: just one more invisible natural good.

Pig-philosophy

Reading Science and the Good: The Tragic Quest for the Foundations of Morality
by James Davison Hunter and Paul Nedelisky (Yale University Press) for the Telegraph, 28 October 2019

Objective truth is elusive and often surprisingly useless. For ages, civilisation managed well without it. Then came the sixteenth and seventeenth centuries, with the Wars of Religion and the Thirty Years War: atrocious conflicts that robbed parts of Europe of up to a third of their population.

Something had to change. So began a half-a-millennium-long search for a common moral compass: something to keep us from wringing each other’s necks. The 18th-century French philosopher Condorcet, writing in 1794, expressed the evergreen hope that empiricists, applying themselves to the study of morality, would be able “to make almost as sure progress in these sciences as they had in the natural sciences.”

Today, are we any nearer to understanding objectively how to tell right from wrong?

No. So say James Davison Hunter, a sociologist who in 1991 slipped the term “culture wars” into American political debate, and Paul Nedelisky, a recent philosophy PhD, both from the University of Virginia. For sure, “a modest descriptive science” has grown up to explore our foibles, strengths and flaws, as individuals and in groups. There is, however, no way science can tell us what ought to be done.

Science and the Good is a closely argued, always accessible riposte to those who think scientific study can explain, improve, or even supersede morality. It tells a rollicking good story, too, as it explains what led us to our current state of embarrassed moral nihilism.

“What,” the essayist Michel de Montaigne asked, writing in the late 16th century, “am I to make of a virtue that I saw in credit yesterday, that will be discredited tomorrow, and becomes a crime on the other side of the river?”

Montaigne’s times desperately needed a moral framework that could withstand the almost daily schisms and revisions of European religious life following the Protestant Reformation. Nor was Europe any longer a land to itself. Trade with other continents was bringing Europeans into contact with people who, while eminently businesslike, held to quite unfamiliar beliefs. The question was (and is), how do we live together at peace with our deepest moral differences?

The authors have no simple answer. The reason scientists keep trying to formulate one is the same reason the farmer tried teaching his sheep to fly in the Monty Python sketch: “Because of the enormous commercial possibilities should he succeed.” Imagine conjuring up a moral system that was common, singular and testable: world peace would follow in an instant!

But for every Jeremy Bentham, measuring moral utility against an index of human happiness to inform a “felicific calculus”, there’s a Thomas Carlyle, pointing out the crashing stupidity of the enterprise. (Carlyle called Bentham’s 18th-century utilitarianism “pig-philosophy”, since happiness is the sort of vague, unspecific measure you could just as well apply to animals as to people.)

Hunter and Nedelisky play Carlyle to the current generation of scientific moralists. They range widely in their criticism, and are sympathetic to a fault, but to show what they’re up to, let’s have some fun and pick a scapegoat.

In Moral Tribes (2014), Harvard psychologist Joshua Greene sings Bentham’s praises: “utilitarianism becomes uniquely attractive,” he asserts, “once our moral thinking has been objectively improved by a scientific understanding of morality…”

At worst, this is a statement that eats its own tail. At best, it’s Greene reducing the definition of morality to fit his own specialism, replacing moral goodness with the merely useful. This isn’t nothing, and is at least something which science can discover. But it is not moral.

And if Greene decided tomorrow that we’d all be better off without, say, legs, practical reason, far from faulting him, could only show us how to achieve his goal in the most efficient manner possible. The entire history of the 20th century should serve as a reminder that this kind of thinking — applying rational machinery to a predetermined good — is a joke that palls extremely quickly. Nor are vague liberal gestures towards “social consensus” comforting, or even welcome. As the authors point out, “social consensus gave us apartheid in South Africa, ethnic cleansing in the Balkans, and genocide in Armenia, Darfur, Burma, Rwanda, Cambodia, Somalia, and the Congo.”

Scientists are on safer ground when they attempt to explain how our moral sense may have evolved, arguing that morals aren’t imposed from above or derived from well-reasoned principles, but are values derived from reactions and judgements that improve the odds of group survival. There’s evidence to back this up and much of it is charming. Rats play together endlessly; if the bigger rat wrestles the smaller rat into submission more than three times out of five, the smaller rat trots off in a huff. Hunter and Nedelisky remind us that capuchin monkeys will “down tools” if experimenters offer them a reward smaller than the one they have already offered to other capuchins.

What does this really tell us, though, beyond the fact that somewhere, out there, is a lawful corner of necessary reality which we may as well call universal justice, and which complex creatures evolve to navigate?

Perhaps the best scientific contribution to moral understanding comes from studies of the brain itself. Mapping the mechanisms by which we reach moral conclusions is useful for clinicians. But it doesn’t bring us any closer to learning what it is we ought to do.

Sociologists since Edward Westermarck in 1906 have shown how a common (evolved?) human morality might be expressed in diverse practices. But over this is the shadow cast by moral skepticism: the uneasy suspicion that morality may be no more than an emotive vocabulary without content, a series of justificatory fabrications. “Four legs good,” as Snowball had it, “two legs bad.”

But even if it were shown that no-one in the history of the world ever committed a truly selfless act, the fact remains that our mythic life is built, again and again, precisely around an act of self-sacrifice. Pharaonic Egypt had Osiris. Europe and its holdings, Christ. Even Hollywood has Harry Potter. Moral goodness is something we recognise in stories, and something we strive for in life (and if we don’t, we feel bad about ourselves). Philosophers and anthropologists and social scientists have lots of interesting things to say about why this should be so. The life sciences crew would like to say something, also.

But as this generous and thoughtful critique demonstrates, and to quite devastating effect, they just don’t have the words.

Normal fish and stubby dinosaurs

Reading Imagined Life by James Trefil and Michael Summers for New Scientist, 20 September 2019

“If you can imagine a world that is consistent with the laws of physics,” say physicist James Trefil and planetary scientist Michael Summers, “then there’s a good chance that it exists somewhere in our galaxy.”

The universe is dark, empty, and expanding, true. But the few parts of it that are populated by matter at all are full of planets. Embarrassingly so: interstellar space itself is littered with hard-to-spot rogue worlds, ejected early on in their solar system’s history, and these worlds may outnumber orbiting planets by a factor of two to one. (Not everyone agrees: some experts reckon rogues may outnumber orbital worlds 1000 to one. One of the reasons the little green men have yet to sail up to the White House is that they keep hitting space shoals.)

Can we conclude, then, that this cluttered galaxy is full of life? The surprising (and frustrating) truth is that we genuinely have no idea. And while Trefil and Summers are obviously primed to receive with open arms any visitors who happen by, they do a splendid job, in this, their second slim volume together, of explaining just how tentative and speculative our thoughts about exobiology actually are, and why.

Exoplanets came out in 2013; Imagined Life is a sort of sequel and is, if possible, even more accessible. In just 14 pages, the authors outline the physical laws constraining the universe. Then they rattle through the various ways we can define life, and why spotting life on distant worlds is so difficult (“For just about every molecule that we could identify [through spectroscopy] as a potential biomarker of life on an exoplanet, there is a nonbiological production mechanism.”). They list the most likely types of environment on which life may have evolved, from water worlds to Mega Earths (expect “normal fish… and stubby dinosaurs”), from tidally locked planets to wildly exotic (but by no means unlikely) superconducting rogues. And we haven’t even reached the meat of this tiny book yet – a tour, planet by imaginary planet, of the possibilities for life, intelligence, and civilisation in our and other galaxies.

Most strange worlds are far too strange for life, and the more one learns about chemistry, the more sober one’s speculations become. Water is common in the universe, and carbon not hard to find, and this is just as well, given the relative uselessness of their nearest equivalents (benzene and silicon, say). The authors argue enthusiastically for the possibilities of life that’s “really not like us”, but they have a hard time making it stick. Carbon-based life is pretty various, of course, but even here there may be unexpected limits on what’s possible. Given that, out of 140 amino acids, only 22 have been recruited in nature, it may be that mechanisms of inheritance converge on a surprisingly narrow set of possibilities.

The trick to finding life in odd places, we discover, is to look not out, but in, and through. “Scientists are beginning to abandon the idea that life has to evolve and persist on the surface of planets,” the authors write, laying the groundwork for their description of an aquatic alien civilisation for whom a mission to the ocean surface “would be no stranger to them than a mission to Mars is to us.”

I’m not sure I buy the authors’ stock assumption that life most likely breeds intelligence most likely breeds technology. Nothing in biology, or human history, suggests as much. Humans in their current iteration may be far odder than we imagine. But what the hell: Imagined Life reminds me of those books I grew up with, full of artists’ impressions of the teeming oceans of Venus. Only now, the science is better; the writing is better; and the possibilities, being more focused, are altogether more intoxicating.

The weather forecast: a triumph hiding in plain sight

Reading The Weather Machine by Andrew Blum (Bodley Head) for the Telegraph, 6 July 2019

Reading New York journalist Andrew Blum’s new book has cured me of a foppish and annoying habit. I no longer dangle an umbrella off my arm on sunny days, tripping up my fellow commuters before (inevitably) mislaying the bloody thing on the train to Coulsdon Town. Very late, and to my considerable embarrassment, I have discovered just how reliable the weather forecast is.

My thoroughly English prejudice against the dark art of weather prediction was already set by the time the European Centre for Medium-Range Weather Forecasts opened in Reading in 1979. Then the ECMWF claimed to be able to see three days into the future. Six years later, it could see five days ahead. It knew about Sandy, the deadliest hurricane of 2012, eight days ahead, and it expects to predict high-impact events a fortnight before they happen by the year 2025.

The ECMWF is a world leader, but it’s not an outlier. Look at the figures: weather forecasts have been getting consistently better for 40 straight years. Blum reckons this makes the current global complex of machines, systems, networks and acronyms (and there are lots of acronyms) “a high point of science and technology’s aspirations for society”.

He knows this is a minority view: “The weather machine is a wonder we treat as a banality,” he writes: “a tool that we haven’t yet learned to trust.” The Weather Machine is his attempt to convey the technical brilliance and political significance of an achievement that hides in plain sight.

The machine’s complexity alone is off all familiar charts, and sets Blum a significant challenge. “As a rocket scientist at the Jet Propulsion Laboratory put it to me… landing a spacecraft on Mars requires dealing with hundreds of variables,” he writes; “making a global atmospheric model requires hundreds of thousands.” Blum does an excellent job of describing how meteorological theory and observation were first stitched together, and why even today their relationship is a stormy one.

His story opens in heroic times, with Robert FitzRoy one of his more engaging heroes. FitzRoy is best remembered for captaining HMS Beagle and weathering the puppyish enthusiasm of a young Charles Darwin. But his real claim to fame is as a meteorologist. He dreamt up the term “forecast”, turned observations into predictions that saved sailors’ lives, and foresaw with clarity what a new generation of naval observers would look like. Distributed in space and capable of communicating instantaneously with each other, they would be “as if an eye in space looked down on the whole North Atlantic”.

You can’t produce an accurate forecast from observation alone, however. You also need a theory of how the weather works. The Norwegian physicist Vilhelm Bjerknes came up with the first mathematical model of the weather: a set of seven interlinked partial differential equations that handled the fact that the atmosphere is a far from ideal fluid. Sadly, Bjerknes’ model couldn’t yet predict anything — as he himself said, solutions to his equations “far exceed the means of today’s mathematical analysis”. As we see our models of the weather evolve, so we see works of individual genius replaced by systems of machine computation. In the observational realm, something similar happens: the heroic efforts of individual observers throw up trickles of insight that are soon subsumed in the torrent of data streaming from the orbiting artefacts of corporate and state engineering.

The philosopher Timothy Morton dreamt up the term “hyperobject” to describe things that are too complex and numinous to describe in plain terms. Blum, whose earlier book was Tubes: Behind the Scenes at the Internet (2012), fancies his chances at explaining human-built hyperobjects in solid, clear terms, without recourse to metaphor and poesy. In this book, for example, he recognises the close affinity of military and meteorological infrastructures (the staple of many a modish book on the surveillance state), but resists any suggestion that they are the same system.

His sobriety is impressive, given how easy it is to get drunk on this stuff. In October 1946, technicians at the White Sands Proving Ground in New Mexico installed a camera in the nose cone of a captured V2 and launched it, yielding photographs of a quarter of the US — nearly a million square miles banded by clouds “stretching hundreds of miles in rows like streets”. This wasn’t the first time a bit of weather kit acted as an expendable test in a programme of weapons development, and it certainly wasn’t the last. Today’s global weather system has not only benefited from military advancements in satellite positioning and remote sensing; it has made those systems possible. Blum allows that “we learned to see the whole earth thanks to the technology built to destroy the whole earth”. But he avoids paranoia.

Indeed, he is much more impressed by the way countries that were going at each other hammer and tongs on the political stage nevertheless collaborated closely and well on a global weather infrastructure. Point four of John F Kennedy’s famous 1961 speech on “Urgent National Needs” called for “a satellite system for worldwide weather observation”, and it wasn’t just militarily useful American satellites he had in mind for the task: in 1962 Harry Wexler of the U.S. Weather Bureau worked with his Soviet counterpart Viktor Bugaev on a report proposing a “World Weather Watch”, and by 1963 there was, Blum finds, “a conscious effort by scientists — on both sides of the Iron Curtain, in all corners of the earth — to design an integrated and coordinated apparatus” — this at a time when weather satellites were so expensive they could be justified only on national security grounds.

Blum’s book comes a little bit unstuck at the end. A final chapter that could easily have filled a third of the book is compressed into just a few pages’ handwaving and special pleading, as he conjures up a vision of a future in which the free and global nature of weather information has ceased to be a given and the weather machine, that “last bastion of international cooperation”, has become just one more atomised ghost of a future the colonial era once promised us.

Why end on such a minatory note? The answer, which is by no means obvious, is to be found in Reading. Today 22 nations pay for the ECMWF’s maintenance of a pair of Cray supercomputers. Among the fastest in the world, these machines must be upgraded every two years. In the US, meanwhile, weather observations rely primarily on the health of four geostationary satellites, at a cost of 11 billion dollars. (America’s whole National Weather Service budget is only around $1 billion.)

Blum leaves open the question: how is an organisation built by nation-states, committed to open data and born of a global view, supposed to work in a world where information lives on private platforms and travels across private networks — a world in which billions of tiny temperature and barometric sensors, “in smartphones, home devices, attached to buildings, buses or airliners,” are aggregated by the likes of Google, IBM or Amazon?

One thing is disconcertingly clear: Blum’s weather machine, which in one sense is a marvel of continuing modernity, is also, truth be told, a dinosaur. It is ripe for disruption, of a sort that the world, grown so reliant on forecasting, could well do without.

All the ghosts in the machine

Reading All the Ghosts in the Machine: Illusions of immortality in the digital age by Elaine Kasket for New Scientist, 22 June 2019

Moving first-hand interviews and unnervingly honest recollections weave through psychologist Elaine Kasket’s first mainstream book, All the Ghosts in the Machine, an anatomy of mourning in the digital age. Unravelling that weave turns up two distinct but complementary projects.

The first offers some support and practical guidance for people (and especially family members) who are blindsided by the practical and legal absurdities generated when people die in the flesh, while leaving their digital selves very much alive.

For some, the persistence of posthumous data, on Facebook, Instagram or some other corner of the social media landscape, is a source of “inestimable comfort”. For others, it brings “wracking emotional pain”. In neither case is it clear what actions are required, either to preserve, remove or manage that data. As a result, survivors usually oversee the profiles of the dead themselves – always assuming, of course, that they know their passwords. “In an effort to keep the profile ‘alive’ and to stay connected to their dead loved one,” Kasket writes, “a bereaved individual may essentially end up impersonating them.”

It used to be the family who had privileged access to the dead, to their personal effects, writings and photographs. Families are, as a consequence, disproportionately affected by the persistent failure of digital companies to distinguish between the dead and the living.

Who has control over a dead person’s legacy? What unspoken needs are being trammelled when their treasured photographs evaporate or, conversely, when their salacious post-divorce Tinder messages are disgorged? Can an individual’s digital legacy even be recognised for what it is in a medium that can’t distinguish between life and death?

Kasket’s other project is to explore this digital uncanny from a psychoanalytical perspective. Otherwise admirable 19th-century ideals of progress, hygiene and personal improvement have conned us into imagining that mourning is a more or less understood process of “letting go”. Kasket’s account of how this idea gained currency is a finely crafted comedy of intellectual errors.

In fact, grief doesn’t come in stages, and our relationships with the dead last far longer than we like to imagine. All the Ghosts in the Machine opens with an account of the author’s attempt to rehabilitate her grandmother’s bitchy reputation by posting her love letters on Instagram.

“I took a private correspondence that was not intended for me and transformed it from its original functions. I wanted it to challenge others’ ideas, and to affect their emotions… Ladies and gentlemen of today, I present to you the deep love my grandparents held for one another in 1945, ‘True romance’, heart emoticon.”

Eventually, Kasket realised that the version of her grandmother her post had created was no more truthful than the version that had existed before. And by then, of course, it was far too late.

The digital persistence of the dead is probably a good thing in these dissociated times. A culture of continuing bonds with the dead is much to be preferred over one in which we are all expected to “get over it”. But, as Kasket observes, there is much work to do, for “the digital age has made continuing bonds easier and harder all at the same time.”

“A wonderful moral substitute for war”

Reading Oliver Morton’s The Moon and Robert Stone and Alan Andres’s Chasing the Moon for The Telegraph, 18 May 2019

I have Arthur to thank for my earliest memory: being woken and carried into the living room on 20 July 1969 to see Neil Armstrong set foot on the moon.

Arthur is a satellite dish, part of the Goonhilly Earth Satellite Station in Cornwall. It carried the first ever transatlantic TV pictures from the USA to Europe. And now, in a fit of nostalgia, I am trying to build a cardboard model of the thing. The anniversary kit I bought comes with a credit-card sized Raspberry Pi computer that will cause a little red light to blink at the centre of the dish, every time the International Space Station flies overhead.

The geosynchronous-satellite network that Arthur Clarke envisioned in 1945 came into being at the same time as men landed on the Moon. Intelsat III F-3 was moved into position over the Indian Ocean a few days before Apollo 11’s launch, completing the world’s first geostationary-satellite network. The Space Race has bequeathed us a world steeped in fractured televisual reflections of itself.

Of Apollo itself, though, what actually remains? The Columbia capsule is touring the United States: it’s at Seattle’s Museum of Flight for this year’s fiftieth anniversary. And Apollo’s Mission Control Center in Houston is getting a makeover, its flight control consoles refurbished, its trash cans, book cases, ashtrays and orange polyester seat cushions all restored.

On the Moon there are some flags; some experiments, mostly expired; an abandoned car.

In space, where it matters, there’s nothing. The intention had been to build moon-going craft in orbit. This would have involved building a space station first. In the end, spooked by a spate of Soviet launches, NASA decided to cut to the chase, sending two small spacecraft up on a single rocket. One got three astronauts to the moon. The other, a tiny landing bug (standing room only), dropped two of them onto the lunar surface and puffed them back up into lunar orbit, where they rejoined the command module and headed home. It was an audacious, dangerous and triumphant mission — but it left nothing useful or reusable behind.

In The Moon: A history for the future, science writer Oliver Morton observes that without that peculiar lunar orbital rendezvous plan, Apollo would at least have left some lasting infrastructure in orbit to pique someone’s ambition. As it was, “Every Apollo mission would be a single shot. Once they were over, it would be in terms of hardware — even, to a degree, in terms of expertise — as if they had never happened.”

Morton and I belong to the generation sometimes dubbed Apollo’s orphans. We grew up (rightly) dazzled by Apollo’s achievement. It left us, however, with the unshakable (and wrong) belief that our enthusiasm was common, something to do with what we were taught to call humanity’s “outward urge”. The refrain was constant: how in people there was this inborn desire to leave their familiar surroundings and explore strange new worlds.

Nonsense. Over a century elapsed between Columbus’s initial voyage and the first permanent English settlements. One of the more surprising findings of recent researches into the human genome is that, left to their own devices, people hardly move more than a few weeks’ walking distance from where they were born.

This urge, that felt so visceral, so essential to one’s idea of oneself: how could it possibly turn out to be the psychic artefact of a passing political moment?

Documentary makers Robert Stone and Alan Andres answer that particular question in Chasing the Moon, a tie-in to their forthcoming series on PBS. It’s a comprehensive account of the Apollo project, and sends down deep roots: to the cosmist speculations of fin de siècle Russia, the individualist eccentricities of Germany’s Verein für Raumschiffahrt (Space Travel Society), and the deceptively chummy brilliance of the British Interplanetary Society, who used to meet in the pub.

The strength of Chasing the Moon lies not in any startling new information it divulges (that boat sailed long ago) but in the connections it makes, and the perspectives it brings to bear. It is surprising to find the New York Times declaring, shortly after the Bay of Pigs fiasco, that Kennedy isn’t nearly as interested in building a space programme as he should be. (“So far, apparently, no one has been able to persuade President Kennedy of the tremendous political, psychological, and prestige importance, entirely apart from the scientific and military results, of an impressive space achievement.”) And it is worthwhile to be reminded that, less than a month after his big announcement, Kennedy was trying to persuade Khrushchev to collaborate on the Apollo project, and that he approached the Soviets with the idea a second time, just days before his assassination in Dallas.

For Kennedy, Apollo was a strategic project, “a wonderful moral substitute for war” (to slightly misapply Ray Bradbury’s phrase), and all to do with manned missions. NASA administrator James Webb, on the other hand, was a true believer. He could see no end to the good big organised government projects could achieve by way of education and science and civil development. In his modesty and dedication, Webb resembled no-one so much as the first tranche of bureaucrat-scientists in the Soviet Union. He never featured on a single magazine cover, and during his entire tenure he attended only one piloted launch from Cape Kennedy. (“I had a job to do in Washington,” he explained.)

The two men worked well enough together, their priorities dovetailing neatly in the role NASA took in promoting the Civil Rights Act and the Voting Rights Act and the government’s equal opportunities program. (NASA’s Saturn V designer, the former Nazi rocket scientist Wernher Von Braun, became an unlikely and very active campaigner, the New York Times naming him “one of the most outspoken spokesmen for racial moderation in the South.”) But progress was achingly slow.

At its height, the Apollo programme employed around two per cent of the US workforce and swallowed four per cent of its GDP. It was never going to be agile enough, or quotidian enough, to achieve much in the area of effecting political change. There were genuine attempts to recruit and train a black pilot for the astronaut programme. But comedian Dick Gregory had the measure of this effort: “A lot of people was happy that they had the first Negro astronaut. Well, I’ll be honest with you, not myself. I was kind of hoping we’d get a Negro airline pilot first.”

The big social change the Apollo program did usher in was television. (Did you know that failing to broadcast the colour transmissions from Apollo 11 proved so embarrassing to the apartheid government in South Africa that they afterwards created a national television service?)

But the moon has always been a darling of the film business. Never mind Georges Méliès’s A Trip to the Moon. How about Fritz Lang ordering a real rocket launch for the premiere of Frau im Mond? This was the film that followed Metropolis, and Lang roped in no less a physicist than Hermann Oberth to build the rocket for him. When his 1.8-metre tall liquid-propellant rocket came to nought, Oberth set about building one eleven metres tall powered by liquid oxygen. They were going to launch it from the roof of the cinema. Luckily they ran out of money.

The Verein für Raumschiffahrt was founded by men who had acted as scientific consultants on Frau im Mond. Von Braun became one of their number, before he was whisked away by the Nazis to build rockets for the war effort. Without von Braun, the VfR grew nuttier by the year. Oberth, who worked for a time in the US after the war, went the same way, his whole conversation swallowed by UFOs and extraterrestrials and glimpses of Atlantis. When he went back to Germany, no-one was very sorry to see him go.

What is it about dreaming of new worlds that encourages the loner in us, the mooncalf, the cave-dweller, wedded to asceticism, always shying from the light?

After the first Moon landing, the philosopher (and sometime Nazi supporter) Martin Heidegger said in interview, “I at any rate was frightened when I saw pictures coming from the moon to the earth… The uprooting of man has already taken place. The only thing we have left is purely technological relationships. This is no longer the earth on which man lives.”

Heidegger’s worries need a little unpacking, and for that we turn to Morton’s cool, melancholy The Moon: A History for the Future. Where Stone and Andres collate and interpret, Morton contemplates and introspects. Stone and Andres are no stylists. Morton’s flights of informed fancy include a geological formation story for the moon that Lars von Trier’s film Melancholia cannot rival for spectacle and sentiment.

Stone and Andres stand with Walter Cronkite, whose puzzled response to young people’s opposition to Apollo — “How can anybody turn off from a world like this?” — stands as an epitaph for Apollo’s orphans everywhere. Morton, by contrast, does understand why it’s proved so easy for us to switch off from the Moon. At any rate he has some good ideas.

Gertrude Stein, never a fan of Oakland, once wrote of the place, “There is no there there.” If Morton’s right, she should have tried the Moon, a place whose details “mostly make no sense.”

“The landscape,” Morton explains, “may have features that move one into another, slopes that become plains, ridges that roll back, but they do not have stories in the way a river’s valley does. It is, after all, just the work of impacts. The Moon’s timescape has no flow; just punctuation.”

The Moon is Heidegger’s nightmare realised. It can never be a world of experience. It can only be a physical environment to be coped with technologically. It’s dumb, without a story of its own to tell, so much “in need of something but incapable of anything”, in Morton’s telling phrase, that you can’t even really say that it’s dead.

So why did we go there, when we already knew that it was, in the words of US columnist Milton Mayer, a “pulverised rubble… like Dresden in May or Hiroshima in August”?

Apollo was the US’s biggest, brashest entry in its heart-stoppingly exciting – and terrifying – political and technological competition with the Soviet Union. This is the matter of Stone and Andres’s Chasing the Moon, as full a history as one could wish for, clear-headed about the era and respectful of the extraordinary efforts and qualities of the people involved.

But while Morton is no less moved by Apollo’s human adventure, we turn to his book for a cooler and more distant view. Through Morton’s eyes we begin to see, not only what the moon actually looks like (meaningless, flat, gentle, a South Downs gone horribly wrong) but why it conjures so much disbelief in those who haven’t been there.

A year after the first landing the novelist Norman Mailer joked: “In another couple of years there will be people arguing in bars about whether anyone even went to the Moon.” He was right. Claims that the moon landings were fake arose the moment the Saturn Vs stopped flying in 1972, and no wonder. In a deep and tragic sense, Apollo was fake: it didn’t deliver the world it had promised.

And let’s be clear here: the world it promised would have been wonderful. Never mind the technology: that was never the core point. What really mattered was that at the height of the Vietnam war, we seemed at last to have found that wonderful moral substitute for war. “All of the universe doesn’t care if we exist or not,” Ray Bradbury wrote, “but we care if we exist… This is the proper war to fight.”

Why has space exploration not united the world around itself? It’s easy to blame ourselves and our lack of vision. “It’s unfortunate,” Lyndon Johnson once remarked to the astronaut Wally Schirra, “but the way the American people are, now that they have developed all of this capability, instead of taking advantage of it, they’ll probably just piss it all away…” This is the mordant lesson of Stone and Andres’s otherwise uplifting Chasing the Moon.

Oliver Morton’s The Moon suggests a darker possibility: that the fault lies with the Moon itself, and, by implication, with everything that lies beyond our little home.

Morton’s Moon is a place defined by absences, gaps, and silence. He makes a poetry of it for a while; he toys with thoughts of future settlement; he explores the commercial possibilities. In the end, though, what can this uneventful satellite of ours ever possibly be, but what it is: “just dry rocks jumbled”?

Asking for it

Reading The Metric Society: On the Quantification of the Social by Steffen Mau (Polity Press) for the Times Literary Supplement, 30 April 2019 

Imagine Steffen Mau, a macrosociologist (he plays with numbers) at Humboldt University of Berlin, writing a book about information technology’s invasion of the social space. The very tools he uses are constantly interrupting him. His bibliographic software wants him to assign a star rating to every PDF he downloads. A paper-sharing site exhorts him repeatedly to improve his citation score (rather than his knowledge). In a manner that would be funny, were his underlying point not so serious, Mau records how his tools keep getting in the way of his job.

Why does Mau use these tools at all? Is he too good for a typewriter? Of course he is: the whole history of civilisation is the story of us getting as much information as possible out of our heads and onto other media. It’s why, nigh-on 5000 years ago, the Sumerians dreamt up the abacus. Thinking is expensive. How much easier to stop thinking, and rely on data records instead!

The Metric Society is not a story of errors made, or of wrong paths taken. This is a story, superbly reduced to the chill essentials of an executive summary, of how human society is getting exactly what it’s always been asking for. The last couple of years have seen more than 100 US cities pledge to use evidence and data to improve their decision-making. In the UK, “What Works Centres”, first conceived in the 1990s, are now responsible for billions in funding. The acronyms grow more bellicose, the more obscure they become. The Alliance for Useful Evidence (with funding from ESRC, Big Lottery and Nesta) champions the use of evidence in social policy and practice.

Mau describes the emergence of a society trapped in “data-driven perpetual stock-taking”, in which the new Juggernaut of auditability lays waste to creativity, production, and even simple efficiency. “The magic attraction of numbers and comparisons is simply irresistible,” Mau writes.

It’s understandable. Our first great system of digital abstraction, money, enabled a more efficient and less locally bound exchange of goods and services, and introduced a certain level of rational competition into the world of work.

But look where money has led us! Capital is not the point here. Neither is capitalism. The point is our relationship with information. Amazon’s algorithms are sucking all the localism out of the retail system, to the point where whole high streets have vanished — and entire communities with them. Amazon is in part powered by the fatuous metricisation of social variety through systems of scores, rankings, likes, stars and grades, which are (not coincidentally) the methods by which social media structures — from clownish Twitter to China’s Orwellian Social Credit System — turn qualitative differences into quantitative inequalities.

Mau leaves us thoroughly in the lurch. He’s a diagnostician, not a snake-oil salesman, and his bedside manner is distinctly chilly. Dazzled by data, which have relieved us of the need to dream and imagine, we fight for space on the foothills of known territory. The peaks our imaginations might have trod — as a society, and as a species — tower above us, ignored.

“The English expedition of 1919 is to blame for this whole misery”

Four books to celebrate the centenary of  Eddington’s 1919 eclipse observations. For The Spectator, 11 May 2019.

Einstein’s War: How relativity triumphed amid the vicious nationalism of World War I
Matthew Stanley
Dutton

Gravity’s Century: From Einstein’s eclipse to images of black holes
Ron Cowen
Harvard University Press

No Shadow of a Doubt
Daniel Kennefick
Princeton University Press

Einstein’s Wife: The real story of Mileva Einstein-Maric
Allen Esterson and David C Cassidy; contribution by Ruth Lewin Sime.
MIT Press

On 6 November 1919, at a joint meeting of the Royal Astronomical Society and the Royal Society, held at London’s Burlington House, the stars went all askew in the heavens.

That, anyway, was the rhetorical flourish with which the New York Times hailed the announcement of the results of a pair of astronomical expeditions conducted in 1919, after the Armistice but before the official end of the Great War. One expedition, led by Arthur Stanley Eddington, director of the Cambridge Observatory, had repaired to the plantation island of Principe off the coast of West Africa; the other, led by Andrew Crommelin, who worked at the Royal Greenwich Observatory, headed to a racecourse in Brazil. Together, in the few minutes afforded by the 29 May solar eclipse, the teams used telescopes to photograph shifts in the apparent location of stars as the edge of the sun approached them.

The possibility that a heavy body like the sun might cause some distortion in the appearance of the star field was not particularly outlandish. Newton, who had assigned “corpuscles” of light some tiny mass, supposed that such a massive body might draw light in like a lens, though he imagined the effect was too slight to be observable.

The degree of distortion the Eddington expeditions hoped to observe was something else again. 1.75 arc-seconds is roughly the angle subtended by a coin, a couple of miles away: a fine observation, but not impossible at the time. Only the theory of the German-born physicist Albert Einstein — respected well enough at home but little known to the Anglophone world — would explain such a (relatively) large distortion, and Eddington’s confirmation of his hypothesis brought the “famous German physician” (as the New York Times would have it) instant celebrity.
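
A rough back-of-the-envelope check of that comparison (assuming a coin about 2.5 centimetres across, a figure of my own rather than anything in these books):

$$\theta = 1.75'' \times \frac{\pi}{180 \times 3600} \approx 8.5 \times 10^{-6}\ \text{rad}, \qquad d \approx \frac{0.025\ \text{m}}{8.5 \times 10^{-6}} \approx 2.9\ \text{km},$$

which is a little under two miles, consistent with the “couple of miles” above.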

“The English expedition of 1919 is ultimately to blame for this whole misery, by which the general masses seized possession of me,” Einstein once remarked; but he was not so very sorry for the attention. Forget the usual image of Einstein the loveable old eccentric. Picture instead a forty-year-old who, when he steps into a room, literally causes women to faint. People wanted his opinions even about stupid things. And for years, if anyone said anything wise, within a few months their words were being attributed to Einstein.

“Why is it that no one understands me and everyone likes me?” Einstein wondered. His appeal lay in his supposed incomprehensibility. Charlie Chaplin understood: “They cheer me because they all understand me,” he remarked, accompanying the theoretical physicist to a film premiere, “and they cheer you because no one understands you.”

Several books serve to mark the centenary of the 1919 eclipse observations. Though their aims diverge, they all to some degree capture the likeness of Einstein the man, messy personal life and all, while rendering his physics a little bit more comprehensible to the rest of us. Each successfully negotiates the single besetting difficulty facing books of this sort, namely the way science lends itself to bad history.

Science uses its past as an object lesson, clearing all the human messiness away to leave the ideas standing. History, on the other hand, factors in as much human messiness as possible to show how the business of science is as contingent and dramatic as any other human activity.

In human matters, some ambiguity over causes and effects is welcome. There are two sides to every story, and so on and so forth: any less nuanced approach seems suspiciously moralistic. One need only look at the way various commentators have interpreted Einstein’s relationship with his first wife.

Einstein was, by the end of their failing marriage, notoriously horrible to Mileva Einstein-Maric; this in spite of their great personal and intellectual closeness as first-year physics students at the Swiss Federal Polytechnic. Einstein once reassured Elsa Löwenthal, his cousin and second-wife-to-be, that “I treat my wife as an employee I can not fire.” (Why Elsa, reading that, didn’t run a mile, is not recorded.)

Albert was a bad husband. His wife was a mathematician. Therefore Albert stole his theory of special relativity from Mileva. This shibboleth, bandied about since the 1970s, is a sort of evil twin of Whig history, distorted by teleology, anachronism and present-mindedness. It does no one any favours. The three separately authored parts of Einstein’s Wife: The real story of Mileva Einstein-Maric unpick the myth of Mileva’s influence over Albert, while increasing, rather than diminishing, our interest in and admiration of the woman herself. It’s a hard job to do well, without preciousness or special pleading, especially in today’s resentment-ridden and over-sensitive political climate, and the book is an impressive, compassionate accomplishment.

Matthew Stanley’s Einstein’s War, on the other hand, tips ever so slightly in the other direction, towards the simplistic and the didactic. His intentions, however, are benign — he is here to praise Einstein and Eddington and their fellows, not bury them — and his slightly on-the-nose style is ultimately mandated by the sheer scale of what he is trying to do, for he succeeds in wrapping the global, national and scientific politics of an era up in a compelling story of one man’s wild theory, lucidly sketched, and its experimental confirmation in the unlikeliest and most exotic circumstances.

The world science studies is truly a blooming, buzzing confusion. It is not in the least bit causal, in the ordinary human sense. Far from there being a paucity of good stories in science, there are a limitless number of perfectly valid, perfectly accurate, perfectly true stories, all describing the same phenomenon from different points of view.

Understanding the stories abroad in the physical sciences at the fin de siècle, seeing which ones Einstein adopted, why he adopted them, and why, in some cases, he swapped them for others, certainly doesn’t make his theorising easy. But it does give us a gut sense of why he was so baffled by the public’s response to his work. The moment we are able to put him in the context of co-workers, peers and friends, we see that Einstein was perfecting classical physics, not overthrowing it, and that his supposedly peculiar theory of relativity — as the man said himself — “harmonizes with every possible outlook of philosophy and does not interfere with being an idealist or materialist, pragmatist or whatever else one likes.”

In science, we need simplification. We welcome a didactic account. Choices must be made, and held to. Gravity’s Century by the science writer Ron Cowen is the most condensed of the books mentioned here; it frequently runs right up to the limit of how far complex ideas can be compressed without slipping into unavoidable falsehood. I reckon I spotted a couple of questionable interpretations. But these were so minor as to be hardly more than matters of taste, when set against Cowen’s overall achievement. This is as good a short introduction to Einstein’s thought as one could wish for. It even contrives to discuss confirmatory experiments and observations whose final results were only announced as I was writing this piece.

No Shadow of a Doubt is more ponderous, but for good reason: the author Daniel Kennefick, an astrophysicist and historian of science, is out to defend the astronomer Eddington against criticisms more serious, more detailed, and framed more conscientiously, than any thrown at that cad Einstein.

Eddington was an English pacifist and internationalist who made no bones about wanting his eclipse observations to champion the theories of a German-born physicist, even as jingoism reached its crescendo on both sides of the Great War. Given the sheer bloody difficulty of the observations themselves, and considering the political inflection given them by the man orchestrating the work, are Eddington’s results to be trusted?

Kennefick is adamant that they are, modern naysayers to the contrary, and in conclusion to his always insightful biography, he says something interesting about the way historians, and especially historians of science, tend to underestimate the past. “Scientists regard continuous improvement in measurement as a hallmark of science that is unremarkable except where it is absent,” he observes. “If it is absent, it tells us nothing except that someone involved has behaved in a way that is unscientific or incompetent, or both.” But, Kennefick observes, such improvement is only possible with practice — and eclipses come round too infrequently for practice to make much difference. Contemporary attempts to recreate Eddington’s observations face the exact same challenges Eddington did, and “it seems, as one might expect, that the teams who took and handled the data knew best after all.”

It was Einstein’s peculiar fate that his reputation for intellectual and personal weirdness has concealed the architectural elegance of his work. Higher-order explanations of general relativity have become clichés of science fiction. The way massive bodies bend spacetime like a rubber sheet is an image that saturates elementary science classes, to the point of tedium.

Einstein hated those rubber-sheet metaphors for a different reason. “Since the mathematicians pounced on the relativity theory,” he complained, “I no longer understand it myself.” We play about with thoughts of bouncy sheets. Einstein had to understand their behaviours mathematically in four dimensions (three of space and one of time), crunching equations so radically non-linear, their results would change the value of the numbers originally put into them in feedback loops that drove the man out of his mind. “Never in my life have I tormented myself anything like this,” he moaned.

For the rest of us, however, a little prophylactic exposure to Einstein’s actual work pays huge dividends. It sweeps some of the weirdness away and reveals Einstein’s real achievement: theories that set all the forces above the atomic scale dancing with an elegance Isaac Newton, founding father of classical physics, would have half-recognised, and wholly admired.

Choose-your-own adventure

Reading The Importance of Small Decisions by Michael O’Brien, R. Alexander Bentley and William Brock for New Scientist, 13 April 2019

What if you could map all kinds of human decision-making and use it to chart society’s evolution?

This is what academics Michael O’Brien, Alexander Bentley and William Brock try to do in The Importance of Small Decisions. It is an attempt to expand on a 2014 paper, “Mapping collective behavior in the big-data era”, that they wrote in Behavioral and Brain Sciences. While contriving to be somehow both too short and rambling, it bites off more than it can chew, nearly chokes to death on the ins and outs of group selection, and coughs up its best ideas in the last 40 pages.

Draw a graph. The horizontal axis maps decisions according to how socially influenced they are. The vertical axis tells you how clear the costs and pay-offs are for each decision. Rational choices sit in the north-western quadrant of the map. To the north-east, bearded capuchins teach each other how to break into palm nuts in a charming example of social learning. Twitter storms generated by fake news swirl about the south-east.

The more choices you face, the greater the cognitive load. The authors cite economist Eric Beinhocker, who in The Origin of Wealth calculated that human choices had multiplied a hundred million-fold in the past 10,000 years. Small and insignificant decisions now consume us.

Worse, costs and pay-offs are increasingly hidden in an ocean of informational white noise, so that it is easier to follow a trend than find an expert. “Why worry about the underlying causes of global warming when we can see what tens of millions of our closest friends think?” ask the authors, building to a fine, satirical climax.

In an effort to communicate widely, the authors have, I think, left out a few too many details from their original paper. And a mid-period novel by Philip K. Dick would paint a more visceral picture of a world created by too much information. Still, there is much fun to be had reading the garrulous banter of these three extremely smart academics.

Come on, Baggy, get with the beat!

Reading The Evolving Animal Orchestra: In search of what makes us musical by Henkjan Honing for New Scientist, 6 April 2019

“The perception, if not the enjoyment, of musical cadences and of rhythm,” wrote Darwin in his 1871 book The Descent of Man, “is probably common to all animals.”

Henkjan Honing has tested this eminently reasonable idea, and in his book, The Evolving Animal Orchestra, he reports back. He details his disappointment, frustration and downright failure with such wit, humility and a love of the chase that any young person reading it will surely want to run away to become a cognitive scientist.

No culture has yet been found that doesn’t have music, and all music shares certain universal characteristics: melodies composed of seven or fewer discrete pitches; a regular beat; a limited sequence of rhythmic patterns. All this would suggest a biological basis for musicality.

A bird flies with regular beats of its wings. Animals walk with a particular rhythm. So you might expect beat perception to be present in everything that doesn’t want to falter when moving. But it isn’t. Honing describes experiments that demonstrate conclusively that we are the only primates with a sense of rhythm, possibly deriving from advanced beat perception.

Only strongly social animals, he writes, from songbirds and parrots to elephants and humans, have beat perception. What if musicality was acquired by all prosocial species through a process of convergent evolution? Like some other cognitive scientists, Honing now wonders whether language might derive from music, in a similar way to how reading uses much older neural structures that recognise contrast and sharp corners.

Honing must now test this exciting hypothesis. And if The Evolving Animal Orchestra is how he responds to disappointment, I can’t wait to see what he makes of success.