‘God knows what the Chymists mean by it’

Reading Antimony, Gold, and Jupiter’s Wolf: How the Elements Were Named, by
Peter Wothers, for The Spectator, 14 December 2019

Here’s how the element antimony got its name. Once upon a time (according to the 17th-century apothecary Pierre Pomet), a German monk (moine in French) noticed its purgative effects in animals. Fancying himself as a physician, he fed it to “his own Fraternity… but his Experiment succeeded so ill that every one who took of it died. This therefore was the reason of this Mineral being call’d Antimony, as being destructive of the Monks.”

If this sounds far-fetched, the Cambridge chemist Peter Wothers has other stories for you to choose from, each more outlandish than the last. Keep up: we have 93 more elements to get through, and they’re just the ones that occur naturally on Earth. They each have a history, a reputation and in some cases a folklore. To investigate their names is to evoke histories that are only intermittently scientific. A lot of this enchanting, eccentric book is about mining and piss.

The mining:

There was no reliable lighting or ventilation; the mines could collapse at any point and crush the miners; they could be poisoned by invisible vapours or blown up by the ignition of pockets of flammable gas. Add to this the stifling heat and the fact that some of the minerals themselves were poisonous and corrosive, and it really must have seemed to the miners that they were venturing into hell.

Above ground, there were other difficulties. How to spot the new stuff? What to make of it? How to distinguish it from all the other stuff? It was a job that drove men spare. In a 1657 Physical Dictionary the entry for Sulphur Philosophorum states simply: ‘God knows what the Chymists mean by it.’

Today we manufacture elements, albeit briefly, in the lab. It’s a tidy process, with a tidy nomenclature. Copernicium, einsteinium, berkelium: neologisms as orderly and unevocative as car marques.

The more familiar elements have names that evoke their history. Cobalt, found in a mineral that used to burn and poison miners, is named for the imps that, according to the 16th-century German Georgius Agricola, ‘idle about in the shafts and tunnels and really do nothing, although they pretend to be busy in all kinds of labour’. Nickel is kupfernickel, ‘the devil’s copper’, an ore that looked like valuable copper ore but, once hauled above the ground, appeared to have no value whatsoever.

In this account, technology leads and science follows. If you want to understand what oxygen is, for example, you first have to be able to make it. And Cornelius Drebbel, the maverick Dutch inventor, did make it, in 1620, 150 years before Joseph Priestley got in on the act. Drebbel had no idea what this enchanted stuff was, but he knew it sweetened the air in his submarine, which he demonstrated on the Thames before King James I. Again, if you want a good scientific understanding of alkalis, say, then you need soap, and lye so caustic that when a drunk toppled into a pit of the stuff ‘nothing of him was found but his Linnen Shirt, and the hardest Bones, as I had the Relation from a Credible Person, Professor of that Trade’. (This is Otto Tachenius, writing in 1677. There is a lot of this sort of thing. Overwhelming as its detail can be, Antimony, Gold, and Jupiter’s Wolf is wickedly entertaining.)

Wothers does not care to hold the reader’s hand. From page 1 he’s getting his hands dirty with minerals and earths, metals and the aforementioned urine (without which the alchemists, wanting chloride, sodium, potassium and ammonia, would have been at a complete loss) and we have to wait till page 83 for a discussion of how the modern conception of elements was arrived at. The periodic table doesn’t arrive till page 201 (and then it’s Mendeleev’s first table, published in 1869). Henri Becquerel discovers radioactivity barely four pages before the end of the book. It’s a surprising strategy, and a successful one. Readers fall under the spell of the possibilities of matter well before they’re asked to wrangle with any of the more highfalutin chemical concepts.

In 1782, Louis-Bernard Guyton de Morveau published his Memoir upon Chemical Denominations, the Necessity of Improving the System, and the Rules for Attaining a Perfect Language. Countless idiosyncrasies survived his reforms. But chemistry did begin to acquire an orderliness that made Mendeleev’s towering work a century later, arranging the elements by atomic weight, a deal easier.

This story has an end. Chemistry as a discipline is now complete. All the major problems have been solved. There are no more great discoveries to be made. Every chemical reaction we do is another example of one we’ve already done. These days, chemists are technologists: they study spectrographs, and argue with astronomers about the composition of the atmospheres around planets orbiting distant stars; they tinker in biophysics labs, and have things to say about protein synthesis. The heroic era of chemical discovery — in which we may fondly recall Gottfried Leibniz extracting phosphorus from 13,140 litres of soldiers’ urine — is past. Only some evocative words remain; and Wothers unpacks them with infectious enthusiasm, and something which in certain lights looks very like love.

Russian enlightenment

Attending Russia’s top non-fiction awards for the TLS, 11 December 2019

Founded in 2008, the Enlightener awards are modest by Western standards. The Russian prize is awarded to writers of non-fiction, and each winner receives 700,000 rubles – just over £8,500. This year’s ceremony took place last month at Moscow’s School of Modern Drama, and its winners included Pyotr Talantov for his book exploring the distinction between modern medicine and its magical antecedents, and Elena Osokina for a work about the state stores that sold food and goods at inflated prices in exchange for foreign currency, gold, silver and diamonds. But the organizers’ efforts also extend to domestic and foreign lecture programmes, festivals and competitions. And at this year’s ceremony a crew from TV Rain (or Dozhd, an independent channel) was present, as journalists and critics mingled with researchers in medicine and physics, who had come to show support for the Zimin Foundation, which is behind the prizes.

The Zimin Foundation is one of those young–old organizations whose complex origin story reflects the Russian state’s relationship with its intelligentsia. It sprang up to replace the celebrated and influential Dynasty Foundation, whose work was stymied by legal controversy in 2015. Dynasty had been paying stipends to young biologists, physicists and mathematicians: sums just enough that jobbing scientists could afford Moscow rents. The scale of the effort grabbed headlines. Its plan for 2015 – the year it fell foul of the Russian government – was going to cost it 435 million rubles: around £5.5 million.

The Foundation’s money came from Dmitry Zimin’s sale, in 2001, of his controlling stake in VimpelCom, Russia’s second-largest telecoms company. Raised on non-fiction and popular science, Zimin decided to use the money to support young researchers. (“It would be misleading to claim that I’m driven by some noble desire to educate humankind”, he remarked in a 2013 interview. “It’s just that I find it exciting.”)

As a child, Zimin had sought escape in the Utopian promises of science. And no wonder: when he was two, his father was killed in a prison camp near Novosibirsk. A paternal uncle was shot three years later, in 1938. He remembers his mother arguing for days with neighbours in their communal apartment about who was going to wash the floors, or where to store luggage. It was so crowded that when his mother remarried, Dmitry barely noticed. In 1947, Eric Ashby, the Australian Scientific Attaché to the USSR, claimed “it can be said without fear of contradiction that nowhere else in the world, not even in America, is there such a widespread interest in science among the common people”. “Science is kept before the people through newspapers, books, lectures, films, exhibitions in parks and museums, and through frequent public festivals in honour of scientists and their discoveries. There is even an annual ‘olympiad’ of physics for Moscow schoolchildren.” Dmitry Zimin was firmly of this generation.

Then there were books, the “Scientific Imaginative Literature” whose authors had a section all of their own at the Praesidium of the Union of Soviet Writers. Romances about radio. Thrillers about industrial espionage. Stirring adventure stories about hydrographic survey missions to the Arctic. The best of these science writers won lasting reputations in the West. In 1921 Alexander Oparin had the bold new idea that life resulted from non-living processes; The Origin of Life came out in English translation in New York in 1938. Alexander Luria’s classic neuropsychological case study The Mind of a Mnemonist described the strange world of a client of his, Solomon Shereshevsky, a man with a memory so prodigious it ruined his life. An English translation first appeared in 1968 and is still in print.

By 2013 Zimin, at the age of eighty, was established as one of the world’s foremost philanthropists, a Carnegie Trust medalist like Rockefeller and the Gateses, George Soros and Michael Bloomberg. But that is a problem in a country where the leaders fear successful businesspeople. In May 2015, just two months after Russia’s minister of education and science, Dmitry Livanov, presented Zimin with a state award for services to science, the Dynasty Foundation was declared a “foreign agent”. “So-called foreign funds work in schools, networks move about schools in Russia for many years under the cover of supporting talented youth”, complained Vladimir Putin, in a speech in June 2015. “Actually they are just sucking them up like a vacuum cleaner.” Never mind that Dynasty’s whole point was to encourage homegrown talent to return. (According to the Association of Russian-Speaking Scientists, around 100,000 Russian-speaking researchers work outside the country.)

Dynasty was required to put a label on its publications and other materials to the effect that it received foreign funding. To lie, in other words. “Certainly, I will not spend my own money acting under the trademark of some unknown foreign state”, Zimin told the news agency Interfax on May 26. “I will stop funding Dynasty.” But instead of stopping his funding altogether, Zimin founded a new foundation, which took over Dynasty’s programmes, including the Enlighteners. Constituted to operate internationally, it is a different sort of beast. It does not limit itself to Russia. And on the Monday following this year’s Enlightener awards it announced a plan to establish new university laboratories around the world. The foundation already has scientific projects up and running in New York, Tel Aviv and Cyprus, and cultural projects at Tartu University in Estonia and in London, where it supports Polity Press’s Russian translation programme.

In Russia, meanwhile, history continues to repeat itself. In July 2019 the Science and Education Ministry sent a list of what it later called “recommendations” to the institutions it controls. The ministry was to be notified in detail of any planned meetings with foreigners, and given the names of those attending. At least two Russian researchers must be present at any meeting with foreigners. Contact with foreigners outside work hours is only allowed with a supervisor’s permission. Details of any after-hours contact must be summarized, along with copies of the participants’ passports. This doesn’t just echo the Soviet limits on international communication. It copies them, point by point.

In Soviet times, of course, many scientists and engineers lived in golden cages, enjoying unprecedented social status. But with the Soviet collapse in 1991 came a readjustment in political values that handed the industrial sector to speculators, while leaving experts and technicians without tenure, without prospects; above all, without salaries.

The wheel will keep turning, of course. In 2018 Putin promised that science and innovation were now his top priorities. And things are improving: research and development now receives 1 per cent of the country’s GDP. But Russia has a long way to go to recover its scientific standing, and science does poorly in a politically isolated country. The Enlighteners – Russia’s only major award for non-fiction – are as much an attempt to create a civic space for science as they are a celebration of a genre that has powered Russian dreaming for over a hundred years.

Breakfast with Ryoji Ikeda

Meeting the artist Ryoji Ikeda for the Financial Times, 29 November 2019

At breakfast in a Paris café, the artist and composer Ryoji Ikeda looks ageless in a soft black cap and impenetrably dark glasses, dressed all in black so as to resemble the avatar from an indie video game.

His work too is severe, the spectrum reduced to grayscale, light to pixels, sound to spikes. Yet Ikeda is no minimalist: he is interested in the complexity that explodes the moment you reduce things to their underlying mathematics.

An artist in light, video, sound and haptics (his works often tremble beneath your feet), Ikeda is out to make you dizzy, to overload your senses, to convey, in the most visceral manner (through beats, high volumes, bright lights and image-blizzards) the blooming, buzzing confusion of the world. “I like playing around with the thresholds of perception,” he says. “If it’s too safe, it’s boring. But you have to know what you’re doing. You can hurt people.”

Ikeda’s stringent approach to his work began in the deafening underground clubs of Kyoto. There, in the mid-1990s, he made throbbing sonic experiences with Dumb Type, a coalition of technologically adept experimental artists. And he can still be this immediate when he wants to be: visitors to the main pavilion at this year’s Venice Biennale found themselves squeezed through “Spectra III” (first assembled in 2008), a white corridor so evenly and brightly lit your eyes rejected what they saw, leaving you groping your way out as if in total darkness.

These days, though, he is better known for installations that go straight for the cerebral and mathematical. His ongoing “data-verse” project consists of three massively complex computer animations. The first part, “data-verse 1”, is based on static data from CERN, Nasa, the Human Genome Project and other open sources. “data-verse” contains animations, tables, graphs, matrices, 3D models, Lidar projections, maps. But what is being depicted here: something very small, or very big? There’s no way to tell. The data have peeled away from the things they represent and are dancing their own pixelated dance. Numbers have become rivers. At last the viewer’s mind surrenders to the flow and rhythm of this frenetic 12-minute piece.

It would be polite to say that “data-verse” is beautiful — but it isn’t. Rather, it is sublime, evoking a world stripped back to its mathematical bones. “If it’s beautiful, you can handle it; the sublime, you cannot,” Ikeda says. “If you stand in some great whited-out landscape in Lapland, the Sahara or the Alps, you feel something like fear. You’re trying to draw information from the world, but it’s something that your brain cannot handle.”

Similarly, the symmetrical, self-similar “data-verse” is an artwork that your mind struggles to navigate, tugging at every locked door in an attempt to regain purchase on the world.

“You try to understand, but you give up — and then it’s nice. Because now you are experiencing this piece the same way you listen to music,” Ikeda says. “It’s simply a manipulation of numbers and relationships, like a musical composition. It’s very different from the sort of visual art where you’re looking through the surface of the painting or the sculpture to see what it represents.”

When we meet, Ikeda is on his way to Tokyo Midtown, and the unveiling of “data-verse 2” (this one based on dynamic data “like the weather, or stock exchanges”). The occasion is Beyond Watchmaking, an exhibition arranged by his patron, the eccentric family-run Swiss watchmaker Audemars Piguet. The third part of “data-verse” is due to be unveiled next year.

It is a vastly ambitious project but Ikeda has always tended towards the expansive. He pulls out of his suitcase an enormously heavy encyclopedia of sonic visualisations. “I wanted you to see this,” he says with a touching pride, leafing through page after page of meticulously documented oscilloscoped forms. Encyclopedia Cyclo.id was compiled with his friend Carsten Nicolai, the German multimedia artist, in 1999. Each figure here represents a particular sound. The more complex figures resemble watch faces. “It’s for designers, really,” Ikeda shrugs, shutting the book, “and architects.”

And the point of this? That lawful, timeless mathematics underpins the world and all our activities within it.

Ikeda spends 10 months out of every 12 travelling: “I really work in the airport or the kitchen. I don’t like the studio.” Months spent working out problems on paper and in his head are interspersed with intense, collaborative “cooking sessions” with a coterie of exceptional coders — creative sessions in which all previous assumptions are there to be challenged.

However, “data-verse” is likely to be Ikeda’s last intensely technological artwork. At the moment he is inclining more towards music and has been arranging some late compositions by John Cage in a purely acoustic project. As comfortable as he is around microphones, amps and computers, Ikeda isn’t particularly attached to machines.

“For a long time, I was put in the media-art category,” he says, “and I was so uncomfortable, because so much of that work is toylike, no depth to it at all. I’m absolutely not like this.”

Ikeda’s art, built not from things but from quantities and patterns, has afforded him much freedom. But he is acutely aware that others have more freedom still: “Mathematicians,” he sighs, “they don’t care about a thing. They don’t even care about time. It’s very interesting.”

668 televisions (some of them broken)

Visiting the Nam June Paik exhibition at Tate Modern for New Scientist, 27 November 2019

A short drive out of Washington DC, in an anonymous industrial unit, there is an enormous storage space crammed to the brim with broken television sets, and rolling stack shelving piled with typewriters, sewing machines and crudely carved coyotes.

This is the archive of the estate of Nam June Paik, the man who predicted the internet, the Web, YouTube, MOOCs, and most other icons of the current information age; an artist who spent much of his time engineering, dismantling, reusing, swapping out components, replacing old technology with better technology, delivering what he could of his vision with the components available to him. Cathode ray tube televisions. Neon. Copper. FORTRAN punch cards. And a video synthesizer, designed with the Tokyo artist-engineer Shuya Abe in 1969. The signature psychedelic video effects of Top of the Pops and MTV began life here.

Paik was born in Seoul in 1932, during the Japanese occupation of Korea, and educated in Germany, where he met the composers Karlheinz Stockhausen and John Cage. A fascinating retrospective show currently at London’s Tate Modern celebrates his involvement with that loose confederacy of artist-anarchists known as Fluxus. (Yoko Ono was a patron. David Bowie and Laurie Anderson were hangers-on.)

Beneath Paik’s celebrated, celebrity-stuffed concerts, openings and “happenings” lies what amounts, in the absence of Paik’s controlling intelligence (he died in 2006), to a pile of junk: 668 televisions, some of them broken; a black box the size of a double refrigerator, containing the hardware to drive one of Paik’s massive “matrices”, Megatron/Matrix, an eight-channel, 215-screen video wall, now in pieces, stored in innumerable tea chests, a nightmare to catalogue, never mind reconstruct.

The trick for Saisha Grayson, the Smithsonian American Art Museum’s curator of time-based media, and Lynn Putney, its associate registrar, is to distinguish the raw material of Paik’s work from the work itself. Then curators like Tate Modern’s Sook Kyung Lee must interpret that work for a new generation, using new technology. Because let’s face it: in the end, more or less everything Paik used to make his art will end up in the bin. Consumer electronics aren’t like a painter’s pigments, which can be analysed and copied, or like a sculptor’s marble, which can, at a pinch, be repaired.

“Through Paik’s estate we are getting advice and guidance about what the artist really intended to achieve,” Lee explains, “and then we are simulating those things with new technology.”

Paik’s video walls, the works by which he’s best remembered, are monstrously heavy and absurdly delicate. But the Tate has been able to recreate Paik’s Sistine Chapel for this show. Video projectors fill a room with a blizzard of cultural and pop-cultural imagery from around the world — a visual melting pot reflective of Paik’s vision of a technological utopia, in which “telecommunication will become our springboard for new and surprising human endeavors.” The projectors are new but the feel of this recreated piece is not so very different to the 1994 original.

To stand here, bombarded by Bowie and Nixon and Mongolian throat singers and all the other flitting, flickering icons of Paik’s madcap future, is to remember all our hopes for the information age: “Video-telephones, fax machines, interactive two-way television… and many other variations of this kind of technology are going to turn the television set into an «expanded-media» telephone system with thousands of novel uses,” Paik enthused in 1974, “not only to serve our daily needs, but to enrich the quality of life itself.”

Worth losing sleep over

Watching Human Nature, directed by Adam Bolt, for New Scientist, 27 November 2019.

Mature and intelligent, Human Nature shows us how gene editing works, explores its implications and – in a field awash with alarmist rhetoric and cheap dystopianism – explains which concerns are worth losing sleep over.

This gripping documentary covers a lot of ground, but also works as a primer on CRISPR, the spectacular technology that enables us to cut and paste genetic information with something like the ease with which we manipulate text on a computer. Human Nature introduces us to key start-ups and projects that promise to predict, correct and maybe enhance the genetic destinies of individuals. It explores the fears this inspires, and asks whether they are reasonable. Its conclusions are cautious, well-argued and largely optimistic.

Writers Regina Sobel and Adam Bolt (who also directs) manage to tell this story through interviews. Key players in the field, put at their ease during hours of film-making, speak cogently to camera. There is no narration.

Ned Piyadarakorn’s graphics are ravishing and yet absurdly simple to grasp. They need to be, because this is an account hardly less complex than those in the best popular science books. As the film progressed, I began to suspect that the film-makers assume we aren’t idiots. This is so rare an experience that it took a while to sink in.

There are certain problems the film can’t get round, though. There are too many people in white coats moving specks from one Petri dish to another. It couldn’t be otherwise, given the technology involves coats, specks, Petri dishes and little else by way of props the general viewer can understand. That this is a source of cool amusement rather than irritation is largely due to the charisma of the film’s cast of researchers, ethicists, entrepreneurs, diagnosticians, their clients and people with conditions that could be helped by the technique, such as schoolboy David Sanchez, who has sickle-cell anaemia. We learn that researchers are running clinical trials using CRISPR to test a therapy for his condition.

Foundational researchers like Jennifer Doudna and Jill Banfield, Emmanuelle Charpentier and Fyodor Urnov provide star quality. Provocateurs like Stephen Hsu, a cheerful promoter of designer babies, and the longevity guru George Church are given room to explain why they aren’t nearly as crazy as some people assume.

Then the bioethicist Alta Charo makes the obvious but frequently ignored point that the Brave New World nightmare CRISPR is said to usher in is a very old and well-worn future indeed. Sterilisations, genocide and mass enslavement have been around a lot longer than CRISPR, she says, and if the new tech is politically abused, we will only have ourselves to blame.

There is, of course, the possibility that CRISPR will let loose some irresistibly bad ideas. Consider the mutation in a gene called ADRB1, which allows its carriers to get by on just 4 hours’ sleep a night. I would leap at the chance of a therapy that freed up my nights – but I wonder what would happen if everyone else followed suit. Would we all live richer, more fulfilled lives? Or would I need a letter from my doctor when I applied for a 16-hour factory shift?

The point, as Human Nature makes all too clear, is that the questions we should be asking about gene editing are only superficially about the technology. At heart, they are questions about ourselves and our values.

Visit a hydrogen utopia

On Tuesday 3 December at 7pm I’ll be chairing a discussion at London’s Delfina Foundation about energy utopias, and the potential of hydrogen as a locally produced sustainable energy source. Speakers include the artist Nick Laessing, Rokiah Yaman (Project Manager, LEAP closed-loop technologies) and Dr Chiara Ambrosio (History and Philosophy of Science, UCL). There may also be food, assuming Nick’s hydrogen stove behaves itself. More details here.

Now we use guns

Talking to Daniel Abraham and Ty Franck (better known as the sci-fi writer James S. A. Corey) for New Scientist, 20 November 2019

Daniel Abraham and Ty Franck began collaborating on their epic, violent, yet uncommonly humane space opera The Expanse in 2011 with the book Leviathan Wakes. The series of novels pits the all-too-human crew of an ice-hauler from Ceres against the studied realpolitik of a far-from-peaceful solar system. The ninth and final book is due out next year. Meanwhile, the TV series enters its fourth season, available on Amazon Prime from 13 December.

The Expanse began as a game, became a series of novels and ended up on television. Was it intended as a multimedia project?

Ty Franck Initially it was just a video game that didn’t work, then it evolved into a tabletop role-playing game.

Daniel Abraham And then books, and then a TV show. I think intention is a very bold word to use for any of this. It implies a certain level of cunning that I don’t think we actually have.

What inspired its complex plot?

TF I’m a big fan of pre-classical history. I pull a lot of weird Babylonian and Persian and Assyrian history into the mix. It’s funny how often people accuse you of critiquing current events. They’re like, ‘You are commenting on this elected politician!’ And I’m like, ‘No, that character is Nebuchadnezzar’.

How have the humans changed in your future? Or is their lack of change the point?

DA If you really want a post-human future, change humans so that they don’t use wealth to measure status. But then they wouldn’t be human any more. We are mean-spirited little monkeys, capable of moments of great grace and kindness, and that story is much more plausible to me and much more beautiful than any post-human tale.

TF I find that the books that I remember the longest, and the books that I’ve been most entertained by, are the ones where the characters are the most human, not the least human.

You’ve mentioned Alfred Bester’s 1956 novel The Stars My Destination as an influence…

TF Exactly, and there you have an anti-hero called Gully Foyle. Gully is everything that we fear to be true about ourselves. He’s venal, and weak, and cowardly, and stupid, and mean. Watching him survive and become something more is the reason we’re still talking about that book today.

You began The Expanse nine years ago. What would you have done differently knowing what we know now about the solar system?

DA We would have made Ceres less rocky. We imagined a mostly mineral dwarf planet, and then it turned out there’s a bunch of ice on it. But this sort of thing is inevitable. You start off as accurate as possible, and a few years later you sound like Jules Verne. That the effort to get things right is doomed doesn’t take away from its essential dignity.

Other things have happened, too. Deepfake technology was still very speculative when we started writing this, and now it’s ubiquitous. One of our plot points in Book Three looks pretty straightforward now.

I don’t see many robots…

DA We’re in real danger of miseducating people about the nature of artificial intelligence. Sci-fi tells two stories about AI: we made it and it wants to kill us, or we made it and we want it to love us. But AI is neither of those things.

TF What people mean is: where are the computers that talk and act like people? Robots are everywhere in The Expanse. But when you build a machine to do a job, you build it in a form that most efficiently does that job, and make it smart enough to do that job.

Is your future dystopian?

DA When Season One of the TV version came out in the US, we were considered very dystopian. Then the 2016 election brought Donald Trump to power, and suddenly we were this uplifting and hopeful show. Of course we’re neither. The argument the show makes is that humans are humans. We bumble through the future the way we bumbled through the past. What changes is technique: what we learn to do, and what we learn to make.

TF We don’t murder each other in a jealous rage with pointy sticks any more. Now we use guns. But the jealous rage and the urge to murder haven’t gone away.

DA What we’ve managed to do is expand what it means to be a tribe. From a small group of people who are actually physically together…

TF …and mostly genetically related …

DA …we’ve expanded to nation states and belief systems and…

TF …fans of a particular TV show.

DA The great success of humanity so far isn’t in abolishing tribalism, because we didn’t. It’s in broadening the size of the tribe over and over. Of course, there’s still work to be done there.

Tyrants and geometers

Reading Proof!: How the World Became Geometrical by Amir Alexander (Scientific American) for the Telegraph, 7 November 2019

The fall from grace of Nicolas Fouquet, Louis XIV’s superintendent of finances, was spectacular and swift. In 1661 he held a fete to welcome the king to his gardens at Vaux-le-Vicomte. The affair was meant to flatter, but its sumptuousness only served to convince the absolutist monarch that Fouquet was angling for power. “On 17 August, at six in the evening, Fouquet was the King of France,” Voltaire observed; “at two in the morning he was nobody.”

Soon afterwards, Fouquet’s gardens were grubbed up in an act, not of vandalism, but of expropriation: “The king’s men carefully packed the objects into crates and hauled them away to a marshy town where Louis was intent on building his own dream palace,” the Israeli-born US historian Amir Alexander tells us. “It was called Versailles.”

Proof! explains how French formal gardens reflected, maintained and even disseminated the political ideologies of French monarchs, from “the Affable” Charles VIII in the 15th century to poor doomed Louis XVI, destined for the guillotine in 1793. Alexander claims these gardens were the concrete and eloquent expression of the idea that “geometry was everywhere and structured everything — from physical nature to human society, the state, and the world.”

If you think geometrical figures are abstract artefacts of the human mind, think again. Their regularities turn up in the natural world time and again, leading classical thinkers to hope that “underlying the boisterous chaos and variety that we see around us there may yet be a rational order, which humans can comprehend and even imitate.”

It is hard for us now to read celebrations of nature into the rigid designs of 16th century Fontainebleau or the Tuileries, but we have no problem reading them as expressions of political power. Geometers are a tyrant’s natural darlings. Euclid spent many a happy year in Ptolemaic Egypt. King Hiero II of Syracuse looked out for Archimedes. Geometers were ideologically useful figures, since the truths they uncovered were static and hierarchical. In the Republic, Plato extols the virtues of geometry and advocates for rigid class politics in practically the same breath.

It is not entirely clear, however, how effective these patterns actually were as political symbols. Even as Thomas Hobbes was modishly emulating the logical structure of Euclid’s (geometrical) Elements in the composition of his (political) Leviathan (demonstrating, from first principles, the need for monarchy), the Duc de Saint-Simon, a courtier and diarist, was having a thoroughly miserable time of it in the gardens of Louis XIV’s Versailles: “the violence everywhere done to nature repels and wearies us despite ourselves,” he wrote in his diary.

So not everyone was convinced that Versailles, and gardens of that ilk, revealed the inner secrets of nature.

Of the strictures of classical architecture and design, Alexander comments that today, “these prescriptions seem entirely arbitrary”. I’m not sure that’s right. Classical art and architecture are beautiful, not merely for their antiquity, but for the provoking way they toy with the mechanics of visual perception. The golden mean isn’t “arbitrary”.

It was fetishized, though: Alexander’s dead right about that. For centuries, Versailles was the ideal to which Europe’s grand urban projects aspired, and colonial new-builds could and did outdo Versailles, at least in scale. Of the work of Lutyens and Baker in their plans for the creation of New Delhi, Alexander writes: “The rigid triangles, hexagons, and octagons created a fixed, unalterable and permanent order that could not be tampered with.”

He’s setting colonialist Europe up for a fall: that much is obvious. Even as New Delhi and Saigon’s Boulevard Norodom and all the rest were being erected, back in Europe mathematicians Janos Bolyai, Carl Friedrich Gauss and Bernhard Riemann were uncovering new kinds of geometry to describe any curved surface, and higher dimensions of any order. Suddenly the rigid, hierarchical order of the Euclidean universe was just one system among many, and Versailles and its forerunners went from being diagrams of cosmic order to being grand days out with the kids.

Well, Alexander needs an ending, and this is as good a place as any to conclude his entertaining, enlightening, and admirably well-focused introduction to a field of study that, quite frankly, is more rabbit-hole than grass.

I was in Washington the other day, sweating my way up to the Lincoln Memorial. From the top I measured the distance, past the needle of the Washington Monument, to Capitol Hill. Major Pierre Charles L’Enfant built all this: it’s a quintessential product of the Versailles tradition. Alexander calls it “nothing less than the Constitutional power structure of the United States set in stone, pavement, trees, and shrubs.”

For nigh-on 250 years tourists have been slogging from one end of the National Mall to the other, re-enacting the passion of the poor Duc de Saint-Simon in Versailles, who complained that “you are introduced to the freshness of the shade only by a vast torrid zone, at the end of which there is nothing for you but to mount or descend.”

Not any more, though. Skipping down the steps, I boarded a bright red electric Uber scooter and sailed electrically east toward Capitol Hill. The whole dignity-dissolving charade was made possible (and cheap) by map-making algorithms performing geometrical calculations that Euclid himself would have recognised. Because the ancient geometer’s influence on our streets and buildings hasn’t really vanished. It’s been virtualised. Algorithmized. Turned into a utility.

Now geometry’s back where it started: just one more invisible natural good.

Fatally punctured by a sword-swallower’s blade

Visiting Flop: 13 stories of failure at The Octagon, University College London, for New Scientist, 6 November 2019

Quitting your job? Then remember to clear out your locker. One former employee of University College London left a bottle of home-made plum brandy in a drawer. The macerated plum was eventually discovered, mulled over (sorry), misidentified as a testicle (species unknown), and added to the university’s collection. Now that same collection fuels Flop, in UCL’s tiny Octagon gallery.

It’s not so much an exhibition as a series of provocations. (A notice by the last case asks you to share your own accounts of failure on a postcard “so we can all start learning from each other’s mistakes.”) After all, what is a failure? Do failures exist outside of the realm of human judgement? (“Can animals have accidents?” is a favourite undergraduate philosophy question. Humans can: one of the more gruesome exhibits here is a human heart, fatally punctured by a sword-swallower’s blade.)

How we define failure depends on our changing needs and circumstances. There was a time, not very long ago, when the plethora of human languages seemed indicative of some deep, Biblical failure to establish amity across our species. Concerted efforts were made to establish a single, synthetic language through which we might all be understood. There’s a fascinating page here from an essay by John Wilkins, whose Royal Society language project attempted to establish an analytical language that would allow people to communicate despite not sharing the same tongue. It foundered because the Royal Society couldn’t agree on how many essential concepts existed in the world.

Now that we live among artificially intelligent agents, the best of whom are more than capable of translating even live speech in real time, we find failure in our reduction of linguistic diversity. We bemoan the loss of languages (3000 of them have perished since 1910), and mourn the cultural deficit left by their demise.

Can objects fail? Only in the sense that they fail to perform an expected action. Silly Putty, a perennially popular toy, was the result of a failed attempt to produce a synthetic rubber substitute during World War II. People can “fail” in much the same way. Percy Wyndham Lewis was kicked out of the Slade School of Fine Art for arguing with his lecturers, and went on to become the foremost avant-garde artist and writer of his generation.

If these examples of failure feel a bit tenuous, well, that’s really the point Flop wants to make: what’s interesting is how we deal with failures, not how we define them.

“Perhaps contrasting failure with success is the real problem,” the introductory material explains. “If every activity has to end in either one or the other, it denies the nuanced and messy complexities of life.”

Pig-philosophy

Reading Science and the Good: The Tragic Quest for the Foundations of Morality
by James Davison Hunter and Paul Nedelisky (Yale University Press) for the Telegraph, 28 October 2019

Objective truth is elusive and often surprisingly useless. For ages, civilisation managed well without it. Then came the sixteenth and seventeenth centuries, with the Wars of Religion and the Thirty Years War: atrocious conflicts that robbed Europe of up to a third of its population.

Something had to change. So began a half-a-millennium-long search for a common moral compass: something to keep us from wringing each other’s necks. The 18th-century French philosopher Condorcet, writing in 1794, expressed the evergreen hope that empiricists, applying themselves to the study of morality, would be able “to make almost as sure progress in these sciences as they had in the natural sciences.”

Today, are we any nearer to understanding objectively how to tell right from wrong?

No. So say James Davison Hunter, a sociologist who in 1991 slipped the term “culture wars” into American political debate, and Paul Nedelisky, a recent philosophy PhD, both from the University of Virginia. For sure, “a modest descriptive science” has grown up to explore our foibles, strengths and flaws, as individuals and in groups. There is, however, no way science can tell us what ought to be done.

Science and the Good is a closely argued, always accessible riposte to those who think scientific study can explain, improve, or even supersede morality. It tells a rollicking good story, too, as it explains what led us to our current state of embarrassed moral nihilism.

“What,” the essayist Michel de Montaigne asked, writing in the late 16th century, “am I to make of a virtue that I saw in credit yesterday, that will be discredited tomorrow, and becomes a crime on the other side of the river?”

Montaigne’s times desperately needed a moral framework that could withstand the almost daily schisms and revisions of European religious life following the Protestant Reformation. Nor was Europe any longer a land to itself. Trade with other continents was bringing Europeans into contact with people who, while eminently businesslike, held to quite unfamiliar beliefs. The question was (and is), how do we live together at peace with our deepest moral differences?

The authors have no simple answer. The reason scientists keep trying to formulate one is the same reason the farmer tried teaching his sheep to fly in the Monty Python sketch: “Because of the enormous commercial possibilities should he succeed.” Imagine conjuring up a moral system that was common, singular and testable: world peace would follow in an instant!

But for every Jeremy Bentham, measuring moral utility against an index of human happiness to inform a “felicific calculus”, there’s a Thomas Carlyle, pointing out the crashing stupidity of the enterprise. (Carlyle called Bentham’s 18th-century utilitarianism “pig-philosophy”, since happiness is the sort of vague, unspecific measure you could just as well apply to animals as to people.)

Hunter and Nedelisky play Carlyle to the current generation of scientific moralists. They range widely in their criticism, and are sympathetic to a fault, but to show what they’re up to, let’s have some fun and pick a scapegoat.

In Moral Tribes (2014), Harvard psychologist Joshua Greene sings Bentham’s praises: “utilitarianism becomes uniquely attractive,” he asserts, “once our moral thinking has been objectively improved by a scientific understanding of morality…”

At worst, this is a statement that eats its own tail. At best, it’s Greene reducing the definition of morality to fit his own specialism, replacing moral goodness with the merely useful. This isn’t nothing, and is at least something which science can discover. But it is not moral.

And if Greene decided tomorrow that we’d all be better off without, say, legs, practical reason, far from faulting him, could only show us how to achieve his goal in the most efficient manner possible. The entire history of the 20th century should serve as a reminder that this kind of thinking — applying rational machinery to a predetermined good — is a joke that palls extremely quickly. Nor are vague liberal gestures towards “social consensus” comforting, or even welcome. As the authors point out, “social consensus gave us apartheid in South Africa, ethnic cleansing in the Balkans, and genocide in Armenia, Darfur, Burma, Rwanda, Cambodia, Somalia, and the Congo.”

Scientists are on safer ground when they attempt to explain how our moral sense may have evolved, arguing that morals aren’t imposed from above or derived from well-reasoned principles, but are values derived from reactions and judgements that improve the odds of group survival. There’s evidence to back this up and much of it is charming. Rats play together endlessly; if the bigger rat wrestles the smaller rat into submission more than three times out of five, the smaller rat trots off in a huff. Hunter and Nedelisky remind us that capuchin monkeys will “down tools” if experimenters offer them a smaller reward than one already offered to other capuchin monkeys.

What does this really tell us, though, beyond the fact that somewhere, out there, is a lawful corner of necessary reality which we may as well call universal justice, and which complex creatures evolve to navigate?

Perhaps the best scientific contribution to moral understanding comes from studies of the brain itself. Mapping the mechanisms by which we reach moral conclusions is useful for clinicians. But it doesn’t bring us any closer to learning what it is we ought to do.

Sociologists since Edward Westermarck in 1906 have shown how a common (evolved?) human morality might be expressed in diverse practices. But over this is the shadow cast by moral skepticism: the uneasy suspicion that morality may be no more than an emotive vocabulary without content, a series of justificatory fabrications. “Four legs good,” as Snowball had it, “two legs bad.”

But even if it were shown that no-one in the history of the world ever committed a truly selfless act, the fact remains that our mythic life is built, again and again, precisely around an act of self-sacrifice. Pharaonic Egypt had Osiris. Europe and its holdings, Christ. Even Hollywood has Harry Potter. Moral goodness is something we recognise in stories, and something we strive for in life (and if we don’t, we feel bad about ourselves). Philosophers and anthropologists and social scientists have lots of interesting things to say about why this should be so. The life sciences crew would like to say something, also.

But as this generous and thoughtful critique demonstrates, and to quite devastating effect, they just don’t have the words.