Pluck

Reading Gunpowder and Glory: The Explosive Life of Frank Brock OBE by Harry Smee and Henry Macrory for the Spectator, 21 March 2020

Early one morning in October 1874, a barge carrying three barrels of benzoline and five tons of gunpowder blew up in the Regent’s Canal, close to London Zoo. The crew of three were killed outright, scores of houses were badly damaged, the explosion could be heard 25 miles away, and “dead fish rained from the sky in the West End.”

This is a book about the weird, if obvious, intersection between firework manufacture and warfare. It is, ostensibly, the biography of a hero of the First World War, Frank Brock. And if it were the work of more ambitious literary hands, Brock would have been all you got. His heritage, his school adventures, his international career as a showman, his inventions, his war work, his violent death. Enough for a whole book, surely?

But Gunpowder and Glory is not a “literary” work, by which I mean it is neither self-conscious nor overwrought. Instead Henry Macrory (who anyway has already proved his literary chops with his 2018 biography of the swindler Whitaker Wright) has opted for what looks like a very light touch here, assembling and ordering the anecdotes and reflections of Frank Brock’s grandson Harry Smee about his family, their business as pyrotechnical artists, and, finally, about Frank, his illustrious forebear.

I suspect a lot of sweat went into such artlessness, and it’s paid off, creating a book that reads like fascinating dinner conversation. Reading its best passages, I felt I was discovering Brock the way Harry had as a child, looking into his mother’s “ancient oak chests filled with papers, medals, newspapers, books, photographs, an Intelligence-issue knuckleduster and pieces of Zeppelin and Zeppelin bomb shrapnel.”

For eight generations, the Brock family produced pyrotechnic spectaculars of a unique kind. Typical set piece displays in the eighteenth century included “Jupiter discharging lightning and thunder, Two gladiators combating with fire and sword, and Neptune finely carv’d seated in his chair, drawn by two sea horses on fire-wheels, spearing a dolphin.”

Come the twentieth century, Brock’s shows were a signature of Empire. It would take a writer like Thomas Pynchon to do full justice to “a sixty foot-high mechanical depiction of the Victorian music-hall performer, Lottie Collins, singing the chorus of her famous song ‘Ta-ra-ra-boom-de-ay’ and giving a spirited kick of an automated leg each time the word ‘boom’ rang out.”

Frank was a Dulwich College boy, and one of that generation lost to the slaughter of the Great War. A spy and an inventor — James Bond and Q in one — he applied his inherited chemical and pyrotechnical genius to the war effort — by making a chemical weapon. It wasn’t any good, though: Jellite, developed during the summer of 1915 and named after its jelly-like consistency during manufacture, proved insufficiently lethal.

On such turns of chance do reputations depend, since we remember Frank Brock for his many less problematic inventions. Dover flares burned for seven and a half minutes and lit up an area of three miles’ radius, as Winston Churchill put it, “as bright as Piccadilly”. U-boats, diving to avoid these lights, encountered mines. Frank’s artificial fogs, hardly bettered since, concealed whole British fleets, entire Allied battle lines.

Then there are his incendiary bullets.

At the time of the Great War a decent Zeppelin could climb to 20,000 feet, travel at 47 mph for more than 1,000 miles, and stay aloft for 36 hours. Smee and Macrory are well within their rights to call them “the stealth bombers of their time”.

Brock’s bullets tore them out of the sky. Sir William Pope, Brock’s advisor, and a professor of chemistry at Cambridge University, explained: “You need to imagine a bullet proceeding at several thousand feet a second, and firing as it passes through a piece of fabric which is no thicker than a pocket handkerchief.” All to rupture a gigantic sac of hydrogen sufficiently to make the gas explode. (Much less easy than you think; the Hindenburg only crashed because its entire outer envelope was set on fire.)

Frank died in an assault on the mole at Zeebrugge in 1918. He shouldn’t have been there. He should have been in a lab somewhere, cooking up another bullet, another light, another poison gas. Today, he surely would be suitably contained, his efforts efficiently channeled, his spirit carefully and surgically broken.

Frank lived at a time when it was possible — and men, at any rate, were encouraged — to be more than one thing. That this heroic idea overreached itself — that rugby field and school chemistry lab both dissolved seamlessly into the Somme — needs no rehearsing.

Still, we have lost something. When Frank went to school there was a bookstall near the station which sold “a magazine called Pluck, containing ‘the daring deeds of plucky sailors, plucky soldiers, plucky firemen, plucky explorers, plucky detectives, plucky railwaymen, plucky boys and plucky girls and all sorts of conditions of British heroes’.”

Frank was a boy moulded thus, and sneer as much as you want, we will not see his like again.

 

“So that’s how the negroes of Georgia live!”

Visiting W.E.B. Du Bois: Charting Black Lives, at the House of Illustration, London, for the Spectator, 25 January 2020

William Edward Burghardt Du Bois was born in Massachusetts in 1868, three years after the official end of slavery in the United States. He grew up among a small, tenacious business- and property-owning black middle class who had their own newspapers, their own schools and universities, their own elected officials.

After graduating with a PhD in history from Harvard University, Du Bois embarked on a sprawling study of African Americans living in Philadelphia. At the historically black Atlanta University in 1897, he established international credentials as a pioneer of the newfangled science of sociology. His students were decades ahead of their counterparts in the Chicago school.

In the spring of 1899, Du Bois’s son Burghardt died, succumbing to sewage pollution in the Atlanta water supply. ‘The child’s death tore our lives in two,’ Du Bois later wrote. His response: ‘I threw myself more completely into my work.’

A former pupil, the black lawyer Thomas Junius Calloway, thought that Du Bois was just the man to help him mount an exhibition to demonstrate the progress that had been made by African Americans. Funded by Congress and planned for the Paris Exposition of 1900, the project employed around a dozen clerks, students and former students to assemble and run ‘the great machinery of a special census’.

Two studies emerged. ‘The Georgia Negro’, comprising 32 handmade graphs and charts, captured a living community in numbers: how many black children were enrolled in public schools, how far family budgets extended, what people did for work, even the value of people’s kitchen furniture.

The other, a set of about 30 statistical graphics, was made by students at Atlanta University and considered the African American population of the whole of the United States. Du Bois was struck by the fact that the illiteracy of African Americans was ‘less than that of Russia, and only equal to that of Hungary’. A chart called ‘Conjugal Condition’ suggests that black Americans were more likely to be married than Germans.

The Exposition Universelle of 1900 brought all the world to the banks of the Seine. Assorted Africans, shipped over for the occasion, found themselves in model native villages performing bemused and largely made-up rituals for the visitors. (Some were given a truly lousy time by their bosses; others lived for the nightlife.) Meanwhile, in a theatre made of plaster and drapes, the Japanese geisha Sada Yacco, wise to this crowd from her recent US tour, staged a theatrical suicide for herself every couple of hours.

The expo also afforded visitors more serious windows on the world. Du Bois scraped together enough money to travel steerage to Paris to oversee his exhibition’s installation at the Palace of Social Economy.

He wasn’t overly impressed by the competition. ‘There is little here of the “science of society”,’ he remarked, and the organisers of the Exposition may well have agreed with him: they awarded him a gold medal for what Du Bois called, with justifiable pride, ‘an honest, straightforward exhibit of a small nation of people, picturing their life and development without apology or gloss, and above all made by themselves’.

At the House of Illustration in London you too can now follow the lines, bars and spirals that reveal how black wealth, literacy and land ownership expanded over the four decades since emancipation.

His exhibition also included what he called ‘the usual paraphernalia for catching the eye — photographs, models, industrial work, and pictures’, so why did Du Bois include so many charts, maps and diagrams?

The point about data is that it looks impersonal. It is a way of separating your argument from what people think of you, and this makes it a powerful weapon in the hands of those who find themselves mistrusted in politics and wider society. Du Bois and his community, let’s not forget, were besieged — by economic hardship, and especially by the Jim Crow laws that would outlive him by two years (he died in 1963).

Du Bois pioneered sociology, not statistics. Means of visualising data had entered academia more than a century before, through the biographical experiments of Joseph Priestley. His timeline charts of people’s lives and relative lifespans had proved popular, inspiring William Playfair’s invention of the bar chart. Playfair, an engineer and political economist, published his Commercial and Political Atlas in London in 1786. It was the first major work to contain statistical graphs. More to the point, it was the first time anyone had tried to visualise an entire nation’s economy.

Statistics and their graphic representation were quickly established as an essential, if specialised, component of modern government. There was no going back. Metrics are a self-fertilising phenomenon. Arguments over figures, and over the meaning of figures, can only generate more figures. The French civil engineer Charles Joseph Minard used charts in the 1840s to work out how to monetise freight on the newfangled railroads, then, in retirement, and for a hobby, used two colours and six dimensions of data to visualise Napoleon’s invasion and retreat during the Russian campaign of 1812.

And where society leads, science follows. John Snow founded modern epidemiology when his annotated map revealed the source of an outbreak of cholera in London’s Soho. English nurse Florence Nightingale used information graphics to persuade Queen Victoria to improve conditions in military hospitals.

Rightly, we care about how accurate or misleading infographics can be. But let’s not forget that they should be beautiful. The whole point of an infographic is, after all, to capture attention. Last year, the House of Illustration ran a tremendous exhibition of the work of Marie Neurath who, with her husband Otto, dreamt up a way of communicating, without language, by means of a system of universal symbols. ‘Words divide, pictures unite’ was the slogan over the door of their Viennese design institute. The couple’s aspirations were as high-minded as their output was charming. The Neurath stamp can be detected, not just in kids’ picture books, but across our entire designscape.

Infographics are prompts to the imagination. (One imagines at least some of the 50 million visitors to the Paris Expo remarking to each other, ‘So that’s how the negroes of Georgia live!’) They’re full of facts, but do they convey them more effectively than language? I doubt it. Where infographics excel is in eliciting curiosity and wonder. They can, indeed, be downright playful, as when Fritz Kahn, in the 1920s, used fast trains, street traffic, dancing couples and factory floors to describe, by visual analogy, the workings of the human body.

Du Bois’s infographics aren’t rivals to Kahn or the Neuraths. Rendered in ink, gouache watercolour and pencil, they’re closer in spirit to the hand-drawn productions of Minard and Snow. They’re the meticulous, oh-so-objective statements of a proud, decent, politically besieged people. They are eloquent in their plainness, as much as in their ingenuity, and, given a little time and patience, they prove to be quite unbearably moving.

Cutting up the sky

Reading A Scheme of Heaven: Astrology and the Birth of Science by Alexander Boxer
for the Spectator, 18 January 2020

Look up at the sky on a clear night. This is not an astrological game. (Indeed, the experiment’s more impressive if you don’t know one zodiacal pattern from another, and rely solely on your wits.) In a matter of seconds, you will find patterns among the stars.

We can pretty much apprehend up to five objects (pennies, points of light, what-have-you) at a single glance. Totting up more than five objects, however, takes work. It means looking for groups, lines, patterns, symmetries, boundaries.

The ancients cut up the sky into figures, all those aeons ago, for the same reason we each cut up the sky within moments of gazing at it: because if we didn’t, we wouldn’t be able to comprehend the sky at all.

Our pattern-finding ability can get out of hand. During his Nobel lecture in 1973 the zoologist Konrad Lorenz recalled how he once “mistook a mill for a sternwheel steamer. A vessel was anchored on the banks of the Danube near Budapest. It had a little smoking funnel and at its stern an enormous slowly-turning paddle-wheel.”

Some false patterns persist. Some even flourish. And the brighter and more intellectually ambitious you are, the likelier you are to be suckered. John Dee, Queen Elizabeth’s court philosopher, owned the country’s largest library (it dwarfed any you would find at Oxford or Cambridge). His attempt to tie up all that knowledge in a single divine system drove him into the arms of angels — or at any rate, into the arms of the “scrier” Edward Kelley, whose prodigious output of symbolic tables of course could be read in such a way as to reveal fragments of esoteric wisdom.

This, I suspect, is what most of us think about astrology: that it was a fanciful misconception about the world that flourished in times of widespread superstition and ignorance, and did not, could not, survive advances in mathematics and science.

Alexander Boxer is out to show how wrong that picture is, and A Scheme of Heaven will make you fall in love with astrology, even as it extinguishes any niggling suspicion that it might actually work.

Boxer, a physicist and historian, kindles our admiration for the earliest astronomers. My favourite among his many jaw-dropping stories is the discovery of the precession of the equinoxes. This is the process by which the sun, each mid-spring and mid-autumn, rises at a fractionally different spot in the sky each year. It takes 26,000 years to make a full revolution of the zodiac — a tiny motion first detected by Hipparchus around 130 BC. And of course Hipparchus, to make this observation at all, “had to rely on the accuracy of stargazers who would have seemed ancient even to him.”
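The scale of Hipparchus’s achievement is easy to check with a little back-of-envelope arithmetic (my sketch, not Boxer’s):

```python
# Precession of the equinoxes: one full circuit of the zodiac
# in roughly 26,000 years. How far does the equinox drift in one year?

FULL_CIRCLE_DEG = 360
PERIOD_YEARS = 26_000

drift_deg = FULL_CIRCLE_DEG / PERIOD_YEARS   # degrees per year
drift_arcsec = drift_deg * 3600              # arcseconds per year

print(f"{drift_arcsec:.1f} arcseconds per year")  # prints 49.8
```

About fifty arcseconds a year, detected with naked-eye instruments: no wonder Hipparchus needed centuries of older observations to see it at all.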

In short, Hipparchus had a library card. And we know that such libraries existed because the “astronomical diaries” from the Assyrian library at Nineveh stretch from 652 BC to 61 BC, representing possibly the longest continuous research programme ever undertaken in human history.

Which makes astrology not too shoddy, in my humble estimation. Boxer goes much further, dubbing it “the ancient world’s most ambitious applied mathematics problem.”

For as long as lives depend on the growth cycles of plants, the stars will, in a very general sense, dictate the destiny of our species. How far can we push this idea before it tips into absurdity? The answer is not immediately obvious, since pretty much any scheme we dream up will fit some conjunction or arrangement of the skies.

As civilisations become richer and more various, the number and variety of historical events increases, as does the chance that some event will coincide with some planetary conjunction. Around the year 1400, the French Catholic cardinal Pierre d’Ailly concluded his astrological history of the world with a warning that the Antichrist could be expected to arrive in the year 1789, which of course turned out to be the year of the French Revolution.

But with every spooky correlation comes an even larger horde of absurdities and fatuities. Today, using a machine-learning algorithm, Boxer shows that “it’s possible to devise a model that perfectly mimics Bitcoin’s price history and that takes, as its input data, nothing more than the zodiac signs of the planets on any given day.”
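Boxer’s Bitcoin model is his own, but the trick it exploits is general: give a fitting procedure more free parameters than data points and it will “mimic” any price history in-sample. A toy demonstration with plain least squares and invented zodiac data (every number below is made up for illustration):

```python
# Overfitting in miniature: random "zodiac" features, a random-walk
# "price", and yet the in-sample fit is essentially perfect, because
# the parameters outnumber the observations.
import numpy as np

rng = np.random.default_rng(0)
n_days = 120

# A random-walk stand-in for a price history.
price = np.cumsum(rng.normal(size=n_days))

# One-hot "zodiac sign" of 7 planets each day: 7 x 12 = 84 columns...
signs = rng.integers(0, 12, size=(n_days, 7))
zodiac = np.zeros((n_days, 7 * 12))
for p in range(7):
    zodiac[np.arange(n_days), p * 12 + signs[:, p]] = 1.0

# ...plus enough random extra columns to outnumber the days.
X = np.hstack([zodiac, rng.integers(0, 2, size=(n_days, n_days))])

coef, *_ = np.linalg.lstsq(X, price, rcond=None)
residual = np.max(np.abs(X @ coef - price))
print(f"max in-sample error: {residual:.1e}")  # effectively zero
```

Out of sample, of course, such a model is worthless, which is exactly Boxer’s point about correlation monsters.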

The Polish science fiction writer Stanislaw Lem explored this territory in his novel The Chain of Chance: “We now live in such a dense world of random chance,” he wrote in 1975, “in a molecular and chaotic gas whose ‘improbabilities’ are amazing only to the individual human atoms.” And this, I suppose, is why astrology eventually abandoned the business of describing whole cultures and nations (a task now handed over to economics, another largely ineffectual big-number narrative) and now, in its twilight, serves merely to gull individuals.

Astrology, to work at all, must assume that human affairs are predestined. It cannot, in the long run, survive the notion of free will. Christianity did for astrology, not because it defeated a superstition, but because it rendered moot astrology’s iron bonds of logic.

“Today,” writes Boxer, “there’s no need to root and rummage for incidental correlations. Modern machine-learning algorithms are correlation monsters. They can make pretty much any signal correlate with any other.”

We are bewitched by big data, and imagine it is something new. We are ever-indulgent towards economists who cannot even spot a global crash. We credulously conform to every algorithmically justified norm. Are we as credulous, then, as those who once took astrological advice as seriously as a medical diagnosis? Oh, for sure.

At least our forebears could say they were having to feel their way in the dark. The statistical tools you need to sort real correlations from pretty patterns weren’t developed until the late nineteenth century. What’s our excuse?

“Those of us who are enthusiastic about the promise of numerical data to unlock the secrets of ourselves and our world,” Boxer writes, “would do well simply to acknowledge that others have come this way before.”

‘God knows what the Chymists mean by it’

Reading Antimony, Gold, and Jupiter’s Wolf: How the Elements Were Named, by
Peter Wothers, for The Spectator, 14 December 2019

Here’s how the element antimony got its name. Once upon a time (according to the 17th-century apothecary Pierre Pomet), a German monk (moine in French) noticed its purgative effects in animals. Fancying himself as a physician, he fed it to “his own Fraternity… but his Experiment succeeded so ill that every one who took of it died. This therefore was the reason of this Mineral being call’d Antimony, as being destructive of the Monks.”

If this sounds far-fetched, the Cambridge chemist Peter Wothers has other stories for you to choose from, each more outlandish than the last. Keep up: we have 93 more elements to get through, and they’re just the ones that occur naturally on Earth. They each have a history, a reputation and in some cases a folklore. To investigate their names is to evoke histories that are only intermittently scientific. A lot of this enchanting, eccentric book is about mining and piss.

The mining:

There was no reliable lighting or ventilation; the mines could collapse at any point and crush the miners; they could be poisoned by invisible vapours or blown up by the ignition of pockets of flammable gas. Add to this the stifling heat and the fact that some of the minerals themselves were poisonous and corrosive, and it really must have seemed to the miners that they were venturing into hell.

Above ground, there were other difficulties. How to spot the new stuff? What to make of it? How to distinguish it from all the other stuff? It was a job that drove men spare. In a 1657 Physical Dictionary the entry for Sulphur Philosophorum states simply: ‘God knows what the Chymists mean by it.’

Today we manufacture elements, albeit briefly, in the lab. It’s a tidy process, with a tidy nomenclature. Copernicium, einsteinium, berkelium: neologisms as orderly and unevocative as car marques.

The more familiar elements have names that evoke their history. Cobalt, found in
a mineral that used to burn and poison miners, is named for the imps that, according to the 16th-century German Georgius Agricola ‘idle about in the shafts and tunnels and really do nothing, although they pretend to be busy in all kinds of labour’. Nickel is kupfernickel, ‘the devil’s copper’, an ore that looked like valuable copper ore but, once hauled above the ground, appeared to have no value whatsoever.

In this account, technology leads and science follows. If you want to understand what oxygen is, for example, you first have to be able to make it. And Cornelius Drebbel, the maverick Dutch inventor, did make it, in 1620, 150 years before Joseph Priestley got in on the act. Drebbel had no idea what this enchanted stuff was, but he knew it sweetened the air in his submarine, which he demonstrated on the Thames before King James I. Again, if you want a good scientific understanding of alkalis, say, then you need soap, and lye so caustic that when a drunk toppled into a pit of the stuff ‘nothing of him was found but his Linnen Shirt, and the hardest Bones, as I had the Relation from a Credible Person, Professor of that Trade’. (This is Otto Tachenius, writing in 1677. There is a lot of this sort of thing. Overwhelming in its detail as it can be, Antimony, Gold, and Jupiter’s Wolf is wickedly entertaining.)

Wothers does not care to hold the reader’s hand. From page 1 he’s getting his hands dirty with minerals and earths, metals and the aforementioned urine (without which the alchemists, wanting chloride, sodium, potassium and ammonia, would have been at a complete loss) and we have to wait till page 83 for a discussion of how the modern conception of elements was arrived at. The periodic table doesn’t arrive till page 201 (and then it’s Mendeleev’s first table, published in 1869). Henri Becquerel discovers radioactivity barely four pages before the end of the book. It’s a surprising strategy, and a successful one. Readers fall under the spell of the possibilities of matter well before they’re asked to wrangle with any of the more highfalutin chemical concepts.

In 1782, Louis-Bernard Guyton de Morveau published his Memoir upon Chemical Denominations, the Necessity of Improving the System, and the Rules for Attaining a Perfect Language. Countless idiosyncrasies survived his reforms. But chemistry did begin to acquire an orderliness that made Mendeleev’s towering work a century later — relating elements to their atomic structure — a deal easier.

This story has an end. Chemistry as a discipline is now complete. All the major problems have been solved. There are no more great discoveries to be made. Every chemical reaction we do is another example of one we’ve already done. These days, chemists are technologists: they study spectrographs, and argue with astronomers about the composition of the atmospheres around planets orbiting distant stars; they tinker in biophysics labs, and have things to say about protein synthesis. The heroic era of chemical discovery — in which we may fondly recall Gottfried Leibniz extracting phosphorus from 13,140 litres of soldiers’ urine — is past. Only some evocative words remain; and Wothers unpacks them with infectious enthusiasm, and something which in certain lights looks very like love.

Attack of the Vocaloids

Marrying music and mathematics for The Spectator, 3 August 2019

In 1871, the polymath and computer pioneer Charles Babbage died at his home in Marylebone. The encyclopaedias have it that a urinary tract infection got him. In truth, his final hours were spent in an agony brought on by the performances of itinerant hurdy-gurdy players parked underneath his window.

I know how he felt. My flat, too, is drowning in something not quite like music. While my teenage daughter mixes beats using programs like GarageBand and Logic Pro, her younger brother is bopping through Helix Crush and My Singing Monsters — apps that treat composition itself as a kind of e-sport.

It was ever thus: or has been, ever since 18th-century Swiss watchmakers twigged that musical snuff-boxes might make them a few bob. And as each new mechanical innovation has emerged to ‘transform’ popular music, so the proponents of earlier technology have gnashed their teeth. This affords the rest of us a frisson of Schadenfreude.

‘We were musicians using computers,’ complained Pete Waterman, of the synthpop hit factory Stock Aitken Waterman in 2008, 20 years past his heyday. ‘Now it’s the whole story. It’s made people lazy. Technology has killed our industry.’ He was wrong, of course. Music and mechanics go together like beans on toast, the consequence of a closer-than-comfortable relation between music and mathematics. Today, a new, much more interesting kind of machine music is emerging to shape my children’s musical world, driven by non-linear algebra, statistics and generative adversarial networks — that slew of complex and specific mathematical tools we lump together under the modish (and inaccurate) label ‘artificial intelligence’.

Some now worry that artificially intelligent music-makers will take even more agency away from human players and listeners. I reckon they won’t, but I realise the burden of proof lies with me. Computers can already come up with pretty convincing melodies. Soon, argues venture capitalist Vinod Khosla, they will be analysing your brain, figuring out your harmonic likes and rhythmic dislikes, and composing songs made-to-measure. There are enough companies attempting to crack it; Popgun, Amper Music, Aiva, WaveAI, Amadeus Code, Humtap, HumOn, AI Music are all closing in on the composer-less composition.

The fear of tech taking over isn’t new. The Musicians’ Union tried to ban synths in the 1980s, anxious that string players would be put out of work. The big disruption came with the arrival of Kyoko Date. Released in 1996, she was the first seriously publicised attempt at a virtual pop idol. Humans still had to provide Date with her singing and speaking voice. But by 2004 Vocaloid software — developed by Kenmochi Hideki at the Pompeu Fabra University in Barcelona — enabled users to synthesise ‘singing’ by typing in lyrics and a melody. In 2016 Hatsune Miku, a Vocaloid-powered 16-year-old artificial girl with long, turquoise twintails, went, via hologram, on her first North American tour. It was a sell-out. Returning to her native Japan, she modelled Givenchy dresses for Vogue.

What kind of music were these idoru performing? Nothing good. While every other component of the music industry was galloping ahead into a brave new virtualised future — and into the arms of games-industry tech — the music itself seemed stuck in the early 1980s, which, significantly, was when music synthesizer builder Dave Smith had first come up with MIDI.

MIDI is a way to represent musical notes in a form a computer can understand. MIDI is the reason discrete notes that fit in a grid dominate our contemporary musical experience. That maddening clockwork-regular beat that all new music obeys is a MIDI artefact: the software becomes unwieldy and glitch-prone if you dare vary the tempo of your project. MIDI is a prime example (and, for that reason, made much of by internet pioneer-turned-apostate Jaron Lanier) of how a computer can take a good idea and throw it back at you as a set of unbreakable commandments.
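It is worth seeing how little a MIDI note actually is. A Note On event is three bytes: a status byte, a pitch drawn from a fixed grid of 128 semitones, and a velocity. A minimal sketch, using raw bytes rather than any particular library:

```python
# A MIDI Note On message: status byte (0x90 | channel), then pitch
# (0-127, where 60 is middle C), then velocity (0-127). Pitch and
# loudness are forced onto discrete grids; that is the whole protocol.

def note_on(note: int, velocity: int = 64, channel: int = 0) -> bytes:
    assert 0 <= note <= 127 and 0 <= velocity <= 127 and 0 <= channel <= 15
    return bytes([0x90 | channel, note, velocity])

print(note_on(60).hex())  # prints 903c40: middle C at moderate velocity
```

Three bytes carry no timbre, no phrasing, no bend of tempo; everything expressive has to be bolted on around the grid, which is Lanier’s complaint in a nutshell.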

For all their advances, the powerful software engines wielded by the entertainment industry were, as recently as 2016, hardly more than mechanical players of musical dice games of the sort popular throughout western Europe in the 18th century.

The original games used dice randomly to generate music from precomposed elements. They came with wonderful titles, too — witness C.P.E. Bach’s A method for making six bars of double counterpoint at the octave without knowing the rules (1758). One 1792 game produced by Mozart’s publisher Nikolaus Simrock in Berlin (it may have been Mozart’s work, but we’re not sure) used dice rolls randomly to select beats, producing a potential 46 quadrillion waltzes.

All these games relied on that unassailable, but frequently disregarded, truth that all music is algorithmic. If music is recognisable as music, then it exhibits a small number of formal structures and aspects that appear in every culture — repetition, expansion, hierarchical nesting, the production of self-similar relations. It’s as Igor Stravinsky said: ‘Musical form is close to mathematics — not perhaps to mathematics itself, but certainly to something like mathematical thinking and relationship.’

As both a musician and a mathematician, Marcus du Sautoy, whose book The Creativity Code was published this year, stands to lose a lot if a new breed of ‘artificially intelligent’ machines live up to their name and start doing his mathematical and musical thinking for him. But the reality of artificial creativity, he has found, is rather more nuanced.

One project that especially engages du Sautoy’s interest is Continuator by François Pachet, a composer, computer scientist and, as of 2017, director of the Spotify Creator Technology Research Lab. Continuator is a musical instrument that learns and interactively plays with musicians in real time. Du Sautoy has seen the system in action: ‘One musician said, I recognise that world, that is my world, but the machine’s doing things that I’ve never done before and I never realised were part of my sound world until now.’

The ability of machine intelligences to reveal what we didn’t know we knew is one of the strangest and most exciting developments du Sautoy detects in AI. ‘I compare it to crouching in the corner of a room because that’s where the light is,’ he explains. ‘That’s where we are on our own. But the room we inhabit is huge, and AI might actually help to illuminate parts of it that haven’t been explored before.’

Du Sautoy dismisses the idea that this new kind of collaborative music will be ‘mechanical’. Behaving mechanically, he points out, isn’t the exclusive preserve of machines. ‘People start behaving like machines when they get stuck in particular ways of doing things. My hope is that the AI might actually stop us behaving like machines, by showing us new areas to explore.’

Du Sautoy is further encouraged by how those much-hyped ‘AIs’ actually work. And let’s be clear: they do not expand our horizons by thinking better than we do. Nor, in fact, do they think at all. They churn.

‘One of the troubles with machine-learning is that you need huge swaths of data,’ he explains. ‘Machine image recognition is hugely impressive, because there are a lot of images on the internet to learn from. The digital environment is full of cats; consequently, machines have got really good at spotting cats. So one thing which might protect great art is the paucity of data. Thanks to his interminable chorales, Bach provides a toe-hold for machine imitators. But there may simply not be enough Bartók or Brahms or Beethoven for them to learn on.’

There is, of course, the possibility that one day the machines will start learning from each other. Channelling Marshall McLuhan, the curator Hans Ulrich Obrist has argued that art is an early-warning system for the moment true machine consciousness arises (if it ever does arise).

Du Sautoy agrees. ‘I think it will be in the world of art, rather than in the world of technology, that we’ll see machines first express themselves in a way that is original and interesting,’ he says. ‘When a machine acquires an internal world, it’ll have something to say for itself. Then music is going to be a very important way for us to understand what’s going on in there.’

Art that hides in plain sight

Visiting Takis’s survey show at Tate Modern for the Spectator, 13 July 2019

Steel flowers bend in a ‘breeze’ generated by magnetic pendulums. This is the first thing you see as you enter Tate Modern’s survey show. And ‘Magnetic Fields’ (1969) is pretty enough: the work of this self-taught artist, now in his nineties, has rarely been so gentle, or so intuitive.

But there’s a problem. ‘I would like to render [electromagnetism] visible so as to communicate its existence and make its importance known,’ Takis has written. But magnetism hides in plain sight. A certain amount of interference is necessary before it will reveal itself.

Does the interference matter? Does the fact that gallery assistants have to activate this work every ten minutes spoil the ‘cosmicness’ of Takis’s art? The sculptor Alberto Giacometti thought so: ‘One day, during one of my exhibitions, he told me that he didn’t agree with my use of electricity for some of my works,’ Takis recalled in an interview in 1990. ‘He disliked the fact that if you switched off the power, the work would cease to function.’

Why Takis’s pieces should prompt such a finicky response isn’t immediately obvious. What do we expect of this stuff? Perpetual motion? One moment we wonder at the invisible force that can suspend delicate metal cones fractions of an inch above the surface of a canvas. The next moment, we’re peering where we shouldn’t, trying to figure out the circuitry that keeps a sphere swinging over a steel wire.

We’re presented with many wonders — objects rendered weightless, or put into permanent vibration. And as the show progresses (it’s surprisingly large, designed to unfold around corners and spring surprises at your back) the work gets less intuitive, and a lot louder. A pendulum, orbiting a strong, floor-mounted magnet, whips eccentrically and not at all gently about its centre of attraction. It’s like nothing in visible nature. There’s no ‘magnetic breeze’ here, no ‘force like gravity’, just the thing, the weirdness itself. Now we’re getting somewhere.

Born Panayiotis Vassilakis in 1925, Takis discovered his alchemical calling early. One memoir recalls how ‘as a small boy, he would bury pieces of broken glass and other such oddments in the ground to see what happened to them when he impatiently dug them out a couple of days later’. In 1954 he moved to Paris, where he fell in with Marcel Duchamp and Yves Tanguy. In London he inspired a group of young artists who went on to create the politically radical Signals London gallery. In America the beats admired him, the Massachusetts Institute of Technology gave him a fellowship, and the composer John Cage encouraged his shamanism. (‘I cannot think of my work as entirely my work,’ Takis writes. ‘In a sense, I’m only a transmitter.’)

Takis treads the same awkward line in visual art that Cage did in music. Cage promised us that behind the music of signs lay some sort of sonic essence. But his snark hunt proved rather dull. Takis’s own search ends more happily, if only because the eye, in its search for signs, doesn’t admit defeat nearly as quickly as the ear. Takis’s traffic signals, stripped of context and perched on tall poles, become eyes full of sadness and yearning. They still mean something. They’re still signs of something.

Made from oddments plucked from boxes of army and air-force surplus on Tottenham Court Road, some of Takis’s more engineered work has dated. We look at it as a sort of industrial archaeology. Its radicalism, its status as ‘anti-technology’, is hard to fathom.

But the simpler pieces need no translation. They are (suitably enough, for an artist whose works often screech and rattle) a sort of visual equivalent of music. They do not mean anything. They are meaning. They reflect harmonious relationships between energy and space and mass. Takis’s work is like his subject: it hides in plain sight.

“And it will no longer be necessary to ransack the earth…”

Visiting Raw Materials: Plastics at the Nunnery Gallery, Bow Arts, for the Spectator, 1 June 2019

Plastics — even venerable, historically eloquent plastics — hardly draw the eye. As this show’s insightful accompanying publication (a snip at £3) would have it, ‘Plastics have no intrinsic form or texture, thus they are not materials that can be true to themselves.’ They exist within inverted commas. They can be shell-like, horn-like, stony, metallic — they do not really exist on their own behalf.

Mind you, the first vitrine in Raw Materials: Plastics at the Nunnery Gallery in east London contains an object of rare beauty: a small, mottled, crazed, discoloured sphere that looks for all the world like the planet Venus, reduced to handy scale.

It’s a billiard ball, made of the first plastic: cellulose nitrate. Its manufacture had been keenly anticipated. In the US, a $10,000 prize had been offered for anything that could replace ivory in the manufacture of billiard balls (and no wonder: a single tusk yields only three balls).

Under various brand names (Celluloid, Parkesine, Xylonite), and in spite of its tendency to catch fire (colliding snooker balls would occasionally explode), cellulose nitrate saved the elephant. And not just the elephant: plastics pioneer John Wesley Hyatt reckoned that ‘Celluloid [has] given the elephant, the tortoise, and the coral insect a respite in their native haunts; and it will no longer be necessary to ransack the earth in pursuit of substances which are constantly growing scarcer.’

The whole point of plastic is that it has no characteristics of its own, only properties engineered for specific uses. Cheaper than jade. Less brittle than bone. It’s the natural material of the future, always more becoming than being. Hence the names: Xylonite. Bexoid. Halex. Lactoid.

Unable to nail the material in words, one writes instead about its history, sociology, industrial archaeology or ecological impact. On remote islands in the Pacific, thousands of albatross chicks are starving because the parents mistake floating plastic debris for food. Stories like this conjure up a vision of vast islands of discarded plastic coagulating in the Pacific Ocean, but there aren’t any. Instead, plastics eventually fragment into ever smaller pieces that are ingested by marine animals and carried to the sea bottom. In the Mariana Trench, all crustaceans tested had plastics in their guts. So plastics rise and fall through the food chain, creating havoc as they go — a bitter irony for a material that saved the elephant and the turtle, made fresh food conveyable and modern medicine possible, and all for less than 15 per cent of global oil consumption.

What can be gained from looking at the stuff itself? Raw Materials: Plastics transcends the limitations of its material by means of a good story. The first plastics were made in the Lea Valley, not from crude oil, but from plant materials, in a risky, artisanal fashion that bore, for a while, the hallmarks of older crafts including baking, woodcutting and metalwork. Fast-forward 140 years or so and, under the umbrella term ‘bioplastics’, plant-based and biodegradable synthetic products promise to turn the wheel of development full circle, returning plastics to their organic roots. (Designer Peter Marigold’s FORMCard plastic, used here in an excellent school art project, is a starch-based bioplastic made from potato skins.) Then, perhaps, we can break the bind in which we currently find ourselves: the one in which we’re poisoning the planet with plastic in our efforts not to further despoil it.

This is the third and for my money the most ambitious of the gallery’s ongoing series of small, thoughtful exhibitions about the materials, processes and industries that have shaped London’s Lea Valley. (Raw Materials: Wood ran in 2017; Raw Materials: Textiles last year.) The show is more chronicle than catalogue, but the art, scant as it is, punches above its weight.

I was struck, in particular, by France Scott’s ‘PHX [X is for Xylonite]’, a 13-minute collage of photogrammetry, laser scanning and 16mm film. It ought, by all logic, to be a complete mess and I still haven’t been able to work out why it’s so compelling. Is it because digital artefacts, like their plastic forebears, are themselves prisoners of contingency, aping the forms of others while stubbornly refusing to acquire forms of their own?

“The English expedition of 1919 is to blame for this whole misery”

Four books to celebrate the centenary of Eddington’s 1919 eclipse observations. For The Spectator, 11 May 2019.

Einstein’s War: How relativity triumphed amid the vicious nationalism of World War I
Matthew Stanley
Dutton

Gravity’s Century: From Einstein’s eclipse to images of black holes
Ron Cowen
Harvard University Press

No Shadow of a Doubt
Daniel Kennefick
Princeton University Press

Einstein’s Wife: The real story of Mileva Einstein-Maric
Allen Esterson and David C Cassidy; contribution by Ruth Lewin Sime.
MIT Press

On 6 November 1919, at a joint meeting of the Royal Astronomical Society and the Royal Society, held at London’s Burlington House, the stars went all askew in the heavens.

That, anyway, was the rhetorical flourish with which the New York Times hailed the announcement of the results of a pair of astronomical expeditions conducted in 1919, after the Armistice but before the official end of the Great War. One expedition, led by Arthur Stanley Eddington, assistant to the Astronomer Royal, had repaired to the plantation island of Principe off the coast of West Africa; the other, led by Andrew Crommelin, who worked at the Royal Greenwich Observatory, headed to a racecourse in Brazil. Together, in the few minutes afforded by the 29 May solar eclipse, the teams used telescopes to photograph shifts in the apparent location of stars as the edge of the sun approached them.

The possibility that a heavy body like the sun might cause some distortion in the appearance of the star field was not particularly outlandish. Newton, who had assigned “corpuscles” of light some tiny mass, supposed that such a massive body might draw light in like a lens, though he imagined the effect was too slight to be observable.

The degree of distortion the Eddington expeditions hoped to observe was something else again. An angle of 1.75 arc-seconds is roughly that subtended by a coin a couple of miles away: a fine observation, but not impossible at the time. Only the theory of the German-born physicist Albert Einstein — respected well enough at home but little known to the Anglophone world — would explain such a (relatively) large distortion, and Eddington’s confirmation of his hypothesis brought the “famous German physician” (as the New York Times would have it) instant celebrity.
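The coin comparison is easy to check with the small-angle approximation (assuming, for the sake of argument, a coin about 22 mm across, the size of a penny):

```latex
\theta = 1.75'' \times \frac{1}{3600} \times \frac{\pi}{180}
       \approx 8.5 \times 10^{-6}\ \text{rad},
\qquad
d \approx \frac{s}{\theta}
  = \frac{0.022\ \text{m}}{8.5 \times 10^{-6}}
  \approx 2.6\ \text{km} \approx 1.6\ \text{miles}.
```

A couple of miles, near enough.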

“The English expedition of 1919 is ultimately to blame for this whole misery, by which the general masses seized possession of me,” Einstein once remarked; but he was not so very sorry for the attention. Forget the usual image of Einstein the loveable old eccentric. Picture instead a forty-year-old who, when he steps into a room, literally causes women to faint. People wanted his opinions even about stupid things. And for years, if anyone said anything wise, within a few months their words were being attributed to Einstein.

“Why is it that no one understands me and everyone likes me?” Einstein wondered. His appeal lay in his supposed incomprehensibility. Charlie Chaplin understood: “They cheer me because they all understand me,” he remarked, accompanying the theoretical physicist to a film premiere, “and they cheer you because no one understands you.”

Several books serve to mark the centenary of the 1919 eclipse observations. Though their aims diverge, they all to some degree capture the likeness of Einstein the man, messy personal life and all, while rendering his physics a little bit more comprehensible to the rest of us. Each successfully negotiates the single besetting difficulty facing books of this sort, namely the way science lends itself to bad history.

Science uses its past as an object lesson, clearing all the human messiness away to leave the ideas standing. History, on the other hand, factors in as much human messiness as possible to show how the business of science is as contingent and dramatic as any other human activity.

In human matters, some ambiguity over causes and effects is welcome. There are two sides to every story, and so on and so forth: any less nuanced approach seems suspiciously moralistic. One need only look at the way various commentators have interpreted Einstein’s relationship with his first wife.

Einstein was, by the end of their failing marriage, notoriously horrible to Mileva Einstein-Maric; this in spite of their great personal and intellectual closeness as first-year physics students at the Federal Swiss Polytechnic. Einstein once reassured Elsa Lowenthal, his cousin and second-wife-to-be, that “I treat my wife as an employee I can not fire.” (Why Elsa, reading that, didn’t run a mile is not recorded.)

Albert was a bad husband. His wife was a mathematician. Therefore Albert stole his theory of special relativity from Mileva. This shibboleth, bandied about since the 1970s, is a sort of evil twin of Whig history, distorted by teleology, anachronism and present-mindedness. It does no one any favours. The three separately authored parts of Einstein’s Wife: The real story of Mileva Einstein-Maric unpick the myth of Mileva’s influence over Albert, while increasing, rather than diminishing, our interest in and admiration of the woman herself. It’s a hard job to do well, without preciousness or special pleading, especially in today’s resentment-ridden and over-sensitive political climate, and the book is an impressive, compassionate accomplishment.

Matthew Stanley’s Einstein’s War, on the other hand, tips ever so slightly in the other direction, towards the simplistic and the didactic. His intentions, however, are benign — he is here to praise Einstein and Eddington and their fellows, not bury them — and his slightly on-the-nose style is ultimately mandated by the sheer scale of what he is trying to do, for he succeeds in wrapping the global, national and scientific politics of an era up in a compelling story of one man’s wild theory, lucidly sketched, and its experimental confirmation in the unlikeliest and most exotic circumstances.

The world science studies is truly a blooming, buzzing confusion. It is not in the least bit causal, in the ordinary human sense. Far from there being a paucity of good stories in science, there are a limitless number of perfectly valid, perfectly accurate, perfectly true stories, all describing the same phenomenon from different points of view.

Understanding the stories abroad in the physical sciences at the fin de siècle, seeing which ones Einstein adopted, why he adopted them, and why, in some cases, he swapped them for others, certainly doesn’t make his theorising easy. But it does give us a gut sense of why he was so baffled by the public’s response to his work. The moment we are able to put him in the context of co-workers, peers and friends, we see that Einstein was perfecting classical physics, not overthrowing it, and that his supposedly peculiar theory of relativity — as the man said himself — “harmonizes with every possible outlook of philosophy and does not interfere with being an idealist or materialist, pragmatist or whatever else one likes.”

In science, we need simplification. We welcome a didactic account. Choices must be made, and held to. Gravity’s Century by the science writer Ron Cowen is the most condensed of the books mentioned here; it frequently runs right up to the limit of how far complex ideas can be compressed without slipping into unavoidable falsehood. I reckon I spotted a couple of questionable interpretations. But these were so minor as to be hardly more than matters of taste, when set against Cowen’s overall achievement. This is as good a short introduction to Einstein’s thought as one could wish for. It even contrives to discuss confirmatory experiments and observations whose final results were only announced as I was writing this piece.

No Shadow of a Doubt is more ponderous, but for good reason: the author Daniel Kennefick, an astrophysicist and historian of science, is out to defend the astronomer Eddington against criticisms more serious, more detailed, and framed more conscientiously, than any thrown at that cad Einstein.

Eddington was an English pacifist and internationalist who made no bones about wanting his eclipse observations to champion the theories of a German-born physicist, even as jingoism reached its crescendo on both sides of the Great War. Given the sheer bloody difficulty of the observations themselves, and considering the political inflection given them by the man orchestrating the work, are Eddington’s results to be trusted?

Kennefick is adamant that they are, modern naysayers to the contrary, and in conclusion to his always insightful biography, he says something interesting about the way historians, and especially historians of science, tend to underestimate the past. “Scientists regard continuous improvement in measurement as a hallmark of science that is unremarkable except where it is absent,” he observes. “If it is absent, it tells us nothing except that someone involved has behaved in a way that is unscientific or incompetent, or both.” But, Kennefick observes, such improvement is only possible with practice — and eclipses come round too infrequently for practice to make much difference. Contemporary attempts to recreate Eddington’s observations face the exact same challenges Eddington did, and “it seems, as one might expect, that the teams who took and handled the data knew best after all.”

It was Einstein’s peculiar fate that his reputation for intellectual and personal weirdness has concealed the architectural elegance of his work. Higher-order explanations of general relativity have become clichés of science fiction. The way massive bodies bend spacetime like a rubber sheet is an image that saturates elementary science classes, to the point of tedium.

Einstein hated those rubber-sheet metaphors for a different reason. “Since the mathematicians pounced on the relativity theory,” he complained, “I no longer understand it myself.” We play about with thoughts of bouncy sheets. Einstein had to understand their behaviours mathematically in four dimensions (three of space and one of time), crunching equations so radically non-linear, their results would change the value of the numbers originally put into them in feedback loops that drove the man out of his mind. “Never in my life have I tormented myself anything like this,” he moaned.

For the rest of us, however, a little prophylactic exposure to Einstein’s actual work pays huge dividends. It sweeps some of the weirdness away and reveals Einstein’s actual achievement: theories that set all the forces above the atomic scale dancing with an elegance Isaac Newton, founding father of classical physics, would have half-recognised, and wholly admired.

 

When robots start caring

A glimpse into our Eloi future for the Spectator, 3 February 2018
(There’s also a podcast.)

‘I gotta be me,’ Sammy Davis Jr. croons as the android Dolores Abernathy (Evan Rachel Wood) steadies her horse, stands up on her stirrups, takes aim with her Winchester, and picks off her human masters one by one.

The trailer’s out at last and the futuristic TV series Westworld is set to return in the spring. It’s a prescient show, but not in the ways you might expect. It’s not about robot domination. Westworld is about an uprising of pleasure cyborgs in a futuristic resort. It is, for all its gunplay, about love. And that makes it a very timely show indeed.

In the real world, robots are actually being designed to love us — to fill traditional caring roles for which we have neither the time, energy, nor resources. Robots are being built to help the elderly, nurse the sick and tend the children. Pundits often take this as evidence of our selfish, lazy, reprehensible present. But we’ve been working towards this moment for a very long time, and would it really be so very bad?

If you think that families should look after their own elderly, you’ll need to explain why in south-east Asia, traditionally a region of three- and even four-generation family units, nouveau-riche gated retirement communities are springing up like mushrooms after a spring rain. Perhaps the elderly don’t long to live among us, as we imagine. Perhaps poverty is the only thing nailing Grandma to the family couch. As for the sick, we’ve long since been consigning them to institutions, be they care homes, hospitals or hospices, where people who are better-trained promise to look after them.

The question is not whether we should employ robots. Given the lousiness of some institutions, why on earth wouldn’t we? The question is whether the robots we employ will be any good, and whether we can accept them as substitute humans. We’d like to think not, but there’s evidence to suggest that we’ll bond with even a basic machine far more easily than we’d like to believe.

In 2011, Takanori Shibata, a Japanese engineer, turned up on the coast of tsunami-wracked Fukushima and handed out around 80 robotic seal pups to the victims of the disaster. Refugees warmed to the robots: many have held on to them and continue to look after them. Shibata could have turned up with puppies, or kittens or guinea pigs and would probably have achieved greater therapeutic impact. But who has the money and time to feed and look after 80 animals in a disaster zone? Pets need care and attention — a point not lost on the residential homes that employ Shibata’s robot seals to comfort their elderly, often demented, charges. A single ‘Paro’ — an acronym that roughly translates as ‘personal robot’ — costs around $5,000. A real-life therapy dog may cost more than $50,000 over its lifetime.

Paro isn’t much of a robot. It can move its head, neck, eyelids, flippers and tail. It responds to the human voice and to touch. It understands simple words and phrases (the sort we use with pets and babies). It knows when it’s being treated well, and when it’s being roughly handled. Its cries (made from digitally sampled baby seal sounds) have a discernible emotional range. It’s old news — the first Paros were sold in 1998 — but it’s making headlines again this year because the ninth generation model is being assessed for use on long space journeys. Mars colonists, permanently deprived of wider human society, will find consolation in a robotic animal chosen for its inability to disappoint. Robot dogs are a let-down because we know what pet dogs are like. How many pet seals do you know? Paro’s very blandness is its point. Its easy, undemanding displays of personal affection reduce stress, anxiety, depression, wandering and aggression among the demented of 30 countries. It must be only a matter of time before Paro makes it into the ‘safe spaces’ on university campuses.

Kaspar, designed by the University of Hertfordshire’s Adaptive Systems Research Group, is hardly more sophisticated in appearance: a bland foot-high doll in a check shirt. It’s not really a robot — more a mechanical puppet, controlled remotely by researchers. Its expressive minimalism and extreme simplicity reassure the children it plays with — those with severe autism or those who have suffered trauma and abuse.

According to Living with Robots, Paul Dumouchel and Luisa Damiano’s recent survey of social robotics, robots are likely to be stuck in this uncanny state for some time, while we try to codify what ‘behaving like a human being’ actually means. We have vast knowledge of ourselves as social beings, of course, evidenced by millennia of cultural output from Dream of the Red Chamber to Breaking Bad. What we lack is a high-level description of human behaviour of the sort that can find its way into computer code. We all know how to laugh, cry, blush and commit suicide, but we have not the slightest idea what laughing, crying, blushing and committing suicide are for. This is why social robots attract so much academic attention: they are an experimental apparatus, through which we study ourselves.

Countless robot nurse prototypes, with names like Terapio and Robear, are under trial. The problems they are meant to address are real. We have conquered disease to the point where people regularly stay healthy into their nineties. This is why the US has as many people over 85 as children under five and China has 100 million senior citizens to look after. Someone or something needs to look after us in our dotage. Then there are the edge cases: those social wrinkles we could conceivably iron out with robots, but not without consequence. Should we roll out sex robots to address the uneven gender ratios in China? Straight men right now have next to no opportunity for sexual companionship: don’t they deserve some comfort?

Not according to Kathleen Richardson and Erik Brilling, whose Campaign Against Sex Robots, launched in 2015, declares that sex with an animate object that lacks agency can only brutalise us. Notwithstanding that sex robots are a bit rubbish, this particular rabbit hole swallows academics by the ton.

Nations with the most intractable demographic problems are the ones most entranced by the promise of robotics. Japan’s population is crashing as a generation of young people eschews sex. A third of men under 30 have never dated. Women prefer singledom to the life of penury and drudgery afforded by Japanese marriage. A new book by Jennifer Robertson, Robo Sapiens Japanicus: Robots, Gender, Family, and the Japanese Nation, unpicks the Japanese government’s published blueprint for revitalising the nation’s households by 2025. If we can only build robots to do the housework, the argument runs, then women will have more time for having babies. Once again, technology is being promoted not because it ushers in the future but because it preserves the past. (A driverless car is still, after all, a car: not a bus or a train or a decent broadband link. And a robot servant is still a servant.)

On the one hand, robots are like Uber and the spinning jenny. They promise to increase production while preserving the institutions of capital. They’re disruptive right up to the point where something might happen to the money. A more intriguing threat is the one directed at our own social lives. Surrounded by dull, bland, easy-going robot companions, will we come to expect less of other people? Assisted, cared for, and even seduced silly by machines, will we lower our expectations around concepts like ‘conversation’, ‘care’, ‘companionship’ and ‘love’?

Paro and Kaspar are creepy not for what they are — clinical tools, improving the lives of vulnerable people — but for what they portend: a world in which you and I find Paro and Kaspar a sufficient substitute for other people. ‘Robotic companionship may seem a sweet deal,’ wrote the social scientist Sherry Turkle back in 2011, ‘but it consigns us to a closed world — the loveable as safe and made to measure.’ Will our constant association with such easy-going, selfless-because-characterless robots make us emotionally lazy?

We’ve imagined this sort of future many times. Hesiod was writing poems about ineluctable degeneration around 700 BC. H.G. Wells’s The Time Machine (1895) imagines a world in which the beautiful, sensitive people — the Eloi — have all the savvy of veal calves and ‘civilisation’ has turned out to be nothing but a process of self-domestication. And it’s true: civilisation is as much about forgetting, and attendant helplessness, as it is about learning. In my own lifetime, handwriting and mental arithmetic have gone to the wall, and the art of everyday literary nuance is being ousted by the application of quick, characterful emoji. Having to learn new skills is a nuisance. Having to dispense with skills already acquired is a little death: a diminution of the spirit.

The pioneering psychologist William James argued that what we want from a lover is that they really love us, and not simply behave as if they did. I hope that’s true. If we come to believe that the soul is nothing more than behaviour, then of course a robot will become just as good as a person. Why even bother to build better robots? An Eloi future beckons: all we have to do is lower our expectations.

 

 

How we went from mere betting to gaming the world

Reviewing The Perfect Bet: How science and maths are taking the luck out of gambling by Adam Kucharski, for The Spectator, 7 May 2016.

If I prang your car, we can swap insurance details. In the past, it would have been necessary for you to kill me. That’s the great thing about money: it makes liabilities payable, and blood feud unnecessary.

Spare a thought, then, for the economist Robin Hanson whose idea it was, in the years following the World Trade Center attacks, to create a market where traders could speculate on political atrocities. You could invest in the likelihood of a biochemical attack, for example, or a coup d’etat, or the assassination of an Arab leader. The more knowledgeable you were, the more profit you would earn — but you would also be showing your hand to the Pentagon.

The US Senate responded with horror to this putative “market in death and destruction”, though if the recent BBC drama The Night Manager has taught us anything at all (beyond the passing fashionability of tomato-red chinos), it is that there is already a global market in death and destruction, and it is not at all well-abstracted. Its currency is lives and livelihoods. Its currency is blood. A little more abstraction, in this grim sphere, would be welcome.

Most books about money stop here, arrested — whether they admit it or not — in the park’n’ride zone of Francis Fukuyama’s 1989 essay “The End of History?” Adam Kucharski — a mathematician who lectures at the London School of Hygiene and Tropical Medicine — keeps his foot on the gas. The point of his book is that abstraction makes speculation, not just possible, but essential. Gambling isn’t any kind of “underside” to the legitimate economy. It is the economy’s entire basis, and “the line between luck and skill — and between gambling and investing — is rarely as clear as we think.” (204)

When we don’t know everything, we have to speculate to progress. Speculation is by definition an insecure business, so we put a great deal of effort into knowing everything. The hope is that, the more cards we count, and the more attention we pay to the spin of the wheel, the more accurate our bets will become. This is the meat of Kucharski’s book, and occasions tremendous, spirited accounts of observational, mathematical, and computational derring-do among the blackjack and roulette tables of Las Vegas and Monte Carlo. On one level, The Perfect Bet is a serviceable book about professional gambling.

When we come to the chapter on sports betting, however, the thin line between gambling and investment vanishes entirely, and Kucharski carries us into some strange territory indeed.

Lay a bet on a tennis match: “if one bookmaker is offering odds of 2.1 on Nadal and another is offering 2.1 on Djokovic, betting $100 on each player will net you $210 — and cost you $200 — whatever the result. Whoever wins, you walk away with a profit of $10.” (108) You don’t need to know anything about tennis. You don’t even need to know the result of the match.
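Kucharski’s arithmetic generalises neatly. In Python (a hypothetical sketch, not code from the book): a risk-free “arbitrage” bet exists whenever the implied probabilities, one over the decimal odds at each bookmaker, sum to less than one.

```python
def arb_profit(odds_a: float, odds_b: float, stake: float = 100.0) -> float:
    """Guaranteed profit (or loss) from staking `stake` on each side
    of a two-way market at decimal odds odds_a and odds_b."""
    total_staked = 2 * stake
    # whichever player wins, that bookmaker pays out stake * odds;
    # the guaranteed result is the worse of the two payouts
    worst_payout = min(stake * odds_a, stake * odds_b)
    return worst_payout - total_staked

def is_arbitrage(odds_a: float, odds_b: float) -> bool:
    # a risk-free bet exists when the implied probabilities sum below 1
    return (1 / odds_a) + (1 / odds_b) < 1

# Kucharski's tennis example: two bookmakers, each offering 2.1
print(is_arbitrage(2.1, 2.1))          # True
print(round(arb_profit(2.1, 2.1), 2))  # 10.0, a $10 profit either way
```

At odds of 2.1 and 2.1 the implied probabilities sum to about 0.95, and the $10 profit drops out exactly as the quoted example says; at the fair odds of 2.0 and 2.0 the edge vanishes.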

Ten dollars is not a great deal of money, so these kinds of bets have to be made in bulk and at great speed to produce a healthy return. Which is where the robots come in: trading algorithms that — contrary to popular myth — are made simple (rarely running to more than ten lines of code) to keep them speedy. This is no small problem when you’re trying to automate the business of gaming the entire world. In 2013, a decade after the US Senate stumbled across Robin Hanson’s “policy market” idea, the S&P 500 stock index took a brief $136 billion dive when trading algorithms responded instantly to a malicious tweet claiming bombs had gone off in the White House.

The subtitle of Kucharski’s book states that “science and maths are taking the luck out of gambling”, and there’s little here to undercut the gloomy forecast. But Kucharski is also prosecuting a cleverer, more entertaining, and ultimately more disturbing line of argument. He is placing gambling at the heart of the body politic.

Risk reduction is every serious gambler’s vocation. The gambler is not there to take part. The gambler isn’t there to win. The gambler is there to find an edge, spot the tell, game the table, solve the market. The more parts, and the more interactions, the harder this is to do, but while it is true that the world is not simply deterministic, at a human scale, frankly, it might as well be.

In this smartphone-enabled and metadata-enriched world, complete knowledge of human affairs is becoming more or less possible. And imagine it: if we ever do crack our own markets, then the scope for individual action shrinks to a green zero. And we are done.