“These confounded dials…”

Reading The Seven Measures of the World by Piero Martin and Four Ways of Thinking by David Sumpter, for New Scientist, 23 October 2023

Blame the sundial. A dinner guest in a play by the Roman writer Plautus, his stomach rumbling, complains that

“The town’s so full of these confounded dials
The greatest part of the inhabitants,
Shrunk up with hunger, crawl along the streets”

We’ve been slaves to number ever since. Not that we need complain, according to two recent books. Piero Martin’s spirited and fascinating The Seven Measures of the World traces our ever-more precise grasp of physical reality, while Four Ways of Thinking, by the Uppsala-based mathematician David Sumpter, shows number illuminating human complexities.

Martin’s stories about common units of measure (candelas and moles rub shoulders here with amperes and kelvins) tip their hats to the past. The Plautus quotation is Martin’s, as is the assertion (very welcome to this amateur pianist) that the unplayable tempo Beethoven set for his “Hammerklavier” sonata (138 beats per minute!) was caused by a broken metronome.

Martin’s greater purpose is to trace, in the way we measure our metres and minutes, kilogrammes and candelas, the outline of “a true Copernican revolution”.

In the past, units of measure were defined with reference to material prototypes. In November 2018 it was decided to define the international units in terms of fundamental constants themselves. The metre is now defined indirectly, via the speed of light and the second as kept by atomic clocks, while the kilogramme is defined as a function of two physical constants: the speed of light, c, and Planck’s constant, h. The dizzying “hows” of this revolution beg not a few “whys”, but Martin is here to explain why such eye-watering accuracy is vital to the running of our world.
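The logic of the 2018 redefinition is compact enough to sketch (my outline, not Martin’s own presentation): fix the numerical values of a handful of constants for ever, and the units then follow from the constants, rather than the other way round.

```latex
% A sketch of the post-2018 definitions (the numerical values are exact by decree):
\Delta\nu_{\mathrm{Cs}} = 9\,192\,631\,770~\mathrm{Hz}, \qquad
c = 299\,792\,458~\mathrm{m\,s^{-1}}, \qquad
h = 6.626\,070\,15\times10^{-34}~\mathrm{kg\,m^{2}\,s^{-1}}

% The units are then whatever makes those numbers come out right:
1~\mathrm{s}  = \frac{9\,192\,631\,770}{\Delta\nu_{\mathrm{Cs}}}, \qquad
1~\mathrm{m}  = \frac{c}{299\,792\,458}\times 1~\mathrm{s}, \qquad
1~\mathrm{kg} = \frac{h}{6.626\,070\,15\times10^{-34}}\times\frac{1~\mathrm{s}}{1~\mathrm{m}^{2}}
```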

Sumpter’s Four Ways of Thinking is more speculative, organising reality around the four classes of phenomena defined by mathematician Stephen Wolfram’s little-read 1,192-page opus from 2002, A New Kind of Science. Sumpter is quick to reassure us that his homage to the eccentric and polymathic Wolfram is not so much “a new kind of science” as “a new way to convince your friends to go jogging with you” or perhaps “a new way of controlling chocolate cake addiction.”

The point is, all phenomena are, mathematically speaking, either stable, periodic, chaotic or complex. Learn the differences between these classes of phenomena, and you are halfway to better understanding your own life.
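Wolfram’s four classes come from watching how simple rule-based systems evolve, and they are easy to see for yourself. Here is a minimal sketch in Python (my illustration, not Sumpter’s or Wolfram’s), using elementary cellular automata; rules 0, 4, 30 and 110 are often cited as examples of the stable, periodic, chaotic and complex classes respectively.

```python
import numpy as np

def step(cells, rule):
    """One step of an elementary cellular automaton with periodic boundaries."""
    left, right = np.roll(cells, 1), np.roll(cells, -1)
    neighbourhood = 4 * left + 2 * cells + right        # a value 0..7 for each cell
    lookup = np.array([(rule >> i) & 1 for i in range(8)], dtype=np.uint8)
    return lookup[neighbourhood]

def run(rule, width=64, steps=32, seed=1):
    rng = np.random.default_rng(seed)
    cells = rng.integers(0, 2, width, dtype=np.uint8)   # random starting row
    rows = [cells]
    for _ in range(steps):
        cells = step(cells, rule)
        rows.append(cells)
    return rows

# 0: settles to nothing; 4: frozen repeating structures; 30: chaos; 110: complex,
# long-lived structures that interact (Wolfram's four classes, roughly).
for rule in (0, 4, 30, 110):
    print(f"rule {rule}")
    for row in run(rule)[:8]:
        print("".join("#" if c else "." for c in row))
```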

Much of Four Ways is assembled semi-novelistically around a summer school in complex systems that Sumpter attended at the Santa Fe Institute in 1997. His half-remembered, half-invented mathematical conversations with fellow attendees won me over, though I have a strong aversion to exposition through dialogue.

I incline to think Sumpter’s biographical sketches are stronger. The strengths and weaknesses of statistical thinking are explored through the life of Ronald Fisher, the unlovely genius who almost single-handedly created the foundations for statistical science in the first half of the 20th century.

That the world does not stand still to be measured, and is often best considered a dynamical system, is an insight given to Alfred Lotka, the chemist who in the first half of the 20th century came tantalisingly close to formulating systems biology.

Chaotic phenomena are caught in a sort of negative image through the work of NASA software engineer Margaret Hamilton, whose determination never to make a mistake — indeed, to make mistakes in her code impossible — landed the crew of Apollo 11 on the Moon.

Soviet mathematician Andrej Kolmogorov personifies complex thinking, as he abandons the axiom-based approach to mathematics and starts to think in terms of information and computer code.

Can mathematics really elucidate life? Do we really need mathematical thinking to realise that “each of us follows our individual rules of interaction and out of that emerges the complexity of our society”? Maybe not. But the journey was gripping.

 

 

Ideas are like boomerangs

Reading In a Flight of Starlings: The Wonder of Complex Systems by Giorgio Parisi for The Telegraph, 1 July 2023

“Researchers,” writes Giorgio Parisi, recipient of the 2021 Nobel Prize in Physics, “often pass by great discoveries without being able to grasp them.” A friend’s grandfather identified and then ignored a mould that killed bacteria, and so missed out on the discovery of penicillin. This story was told to Parisi in an attempt to comfort him for the morning in 1970 he’d spent with another hot-shot physicist, Gerard ‘t Hooft, dancing around what in hindsight was a perfectly obvious application of some particle accelerator findings. Having teetered on the edges of quantum chromodynamics, they walked on by; decades would pass before either man got another stab at the Nobel. “Ideas are often like boomerangs,” Parisi explains, and you can hear the sigh in his voice; “they start out moving in one direction but end up going in another.”

In a Flight of Starlings is the latest addition to an evergreen genre: the scientific confessional. Read this, and you will get at least a frisson of what a top-flight career in physics might feel like.

There’s much here that is charming and comfortable: an eminent man sharing tales of a bygone era. Parisi began his first year of undergraduate physics in November 1966 at Sapienza University in Rome, when computer analysis involved lugging about (and sometimes dropping) metre-long drawers of punched cards.

The book’s title refers to Parisi’s efforts to compute the murmurations of starlings. Recently he’s been trying to work out how many solid spheres of different sizes will fit into a box. There’s a goofiness to these pet projects that belies their significance. The techniques developed to follow thousands of starlings through three dimensions of space and one of time bear a close resemblance to those used to solve statistical physics problems. And fitting marbles in a box? That’s a classic problem in information theory.

The implications of Parisi’s work emerge slowly. The reader, who might, in all honesty, be touched now and again by boredom, sits up straighter once the threads begin to braid.

Physics for the longest time could not handle complexity. Galileo’s model of the physical world did not include friction, not because friction was any sort of mystery, but because the mathematics of his day couldn’t handle it.

Armed with better mathematics and computational tools, physics can now study phenomena that Galileo could never have imagined would be part of physics. For instance, friction. For instance, the melting of ice, and the boiling of water: phenomena that, from the point of view of physics, are very strange indeed. Coming up with models that explain the phase transitions of more complex and disordered materials, such as glass and pitch, is something Parisi has been working on, on and off, since the middle of the 1990s.

Efforts to model more and more of the world are nothing new, but once rare successes now tumble in upon the field at a dizzying rate; almost as though physics has undergone its own phase transition. This, Parisi says, is because once two systems in different fields of physics can be described by the same mathematical structure, “a rapid advancement of knowledge takes place in which the two fields cross-fertilize.”

This has clearly happened in Parisi’s own specialism. The mathematics of disorder apply whether you’re describing why some particles try to spin in opposite directions, or why certain people sell shares that others are buying, or what happens when some dinner guests want to sit as far away from other guests as possible.
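The simplest toy I know for what “the mathematics of disorder” means is frustration, and it fits in a few lines of Python (my own illustration, not Parisi’s): three dinner guests, or three spins, each of which wants to differ from the other two. No arrangement can please every pair at once, and several arrangements tie for least bad.

```python
import itertools

# Three spins, every pair coupled so that it "wants" opposite signs.
# Ising energy: E = -sum over pairs of J_ij * s_i * s_j, with J_ij = -1 throughout.
J = {(0, 1): -1, (1, 2): -1, (0, 2): -1}

def energy(spins):
    return -sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())

configs = list(itertools.product((-1, 1), repeat=3))
best = min(energy(s) for s in configs)
for spins in configs:
    marker = " <- least bad" if energy(spins) == best else ""
    print(spins, energy(spins), marker)

# A fully satisfied triangle would score -3; the best achievable here is -1,
# and six different arrangements tie for it. That pile-up of equally imperfect
# solutions is the seed of the spin-glass problems Parisi works on.
```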

Phase transitions eloquently connect the visible and quantum worlds. Not that such connections are particularly hard to make. Once you know the physics, quantum phenomena are easy to spot. Ever wondered at a rainbow?

“Much becomes obvious in hindsight,” Parisi writes. “Yet it is striking how in both physics and mathematics there is a lack of proportion between the effort needed to understand something for the first time and the simplicity and naturalness of the solution once all the required stages have been completed.”

The striking “murmurations” of airborne starlings are created when each bird in the flock pays attention to the movements of its nearest neighbour. Obvious, no?

But as Parisi in his charming way makes clear, whenever something in this world seems obvious to us, it is likely because we are perched, knowingly or not, on the shoulders of giants.

A pile of dough

Reading Is Maths Real? by Eugenia Cheng, 17 May 2023

Let’s start with an obvious trick question: why does 1 plus 1 equal 2? Well, it often doesn’t. Add one pile of dough to one pile of dough and you get, well, one pile of dough.

This looks like a twisty and trivial point, but it isn’t. Mathematics describes the logical operations of logical worlds, but you can dream up any number of those, and you’re going to need many more than one of them to even come close to modelling the real world.

“Deep down,” writes mathematician Eugenia Cheng, “maths isn’t about clear answers, but about increasingly nuanced worlds in which we can explore different things being true.”

Cheng wants the reader to ask again all those “stupid” questions they asked about mathematics as kids, and so discover what it feels like to be a real mathematician. Sure enough, mathematicians turn out to be human beings, haunted by doubts, saddled with faulty memories, blessed with unsuspected resources of intuition, guided by imagination. Mathematics is a human pursuit, depicted here from the inside.

We begin in the one-dimensional world of real numbers, and learn in what kinds of worlds numbers can be added together in any order (“commutativity”) and operations grouped together any-old-how (“associativity”). Imaginary numbers (multiples of i, the square root of minus one) add a second dimension to our mathematical world, and sure enough there are now patterns we can see that we couldn’t see before, “when we were all squashed into one dimension”.

Keep adding dimensions. (The more we add to our mathematical universe, however, the less we can rely on our visual imagination, and the more we come to rely on algebra.) Complex numbers (which have a real part and an imaginary part) give us the field of complex analysis, on which modern physics depends.
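The structures name-checked above are easy to poke at directly; a tiny sketch (mine, not Cheng’s), using Python’s built-in complex numbers:

```python
# Commutativity and associativity hold for ordinary arithmetic...
a, b, c = 2, 7, 5
print(a + b == b + a)                # order doesn't matter
print((a + b) + c == a + (b + c))    # grouping doesn't matter

# ...and a complex number carries the "second dimension" mentioned above.
z = 3 + 4j
print(z.real, z.imag)                # real part 3, imaginary part 4
print((1j) ** 2)                     # i squared is -1: prints (-1+0j)
```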

And we don’t stop there. Cheng’s object is not to teach us maths, but to show us what we don’t know; we eventually arrive at a terrific description of mathematical braids in higher dimensions that at the very least we might find interesting, even if we don’t understand it. This is the generous impulse driving this book, and it’s splendidly realised.

Alas, Is Maths Real?, not content with being a book about what it is like to be a mathematician, also wants to be a book about what it is like to be Eugenia Cheng, and success, in this respect, leads to embarrassment.

We’ll start with the trivia and work up.

There’s Cheng’s inner policeman, reminding her, as she discusses the role of pictures in mathematics, “to acknowledge that this is thus arguably ableist and excludes those who can’t see.”

There are narcissistic exclamations that defy parody, as when Cheng explains that “the only thing I want everyone to care about is reducing human suffering, violence, hunger, prejudice, exclusion and heartbreak.” (Good to know.)

There are the Soviet-style political analogies for everything. Imaginary and complex numbers took a while to be accepted as numbers because, well, you know people: “some people lag behind, perhaps accepting women and black people but not gay people, or maybe accepting gay, lesbian and bisexual people but not transgender people.”

A generous reader may simply write these irritations off, but then Cheng’s desire to smash patriarchal power structures with the righteous hammer of ethnomathematics (which looks for “other types of mathematics” overlooked, undervalued or suppressed by the colonialist mainstream) tips her into some depressingly hackneyed nonsense. “Contemporary culture,” she tells us, “is still baffled by how ancient cultures were able to do things like build Stonehenge or construct the pyramids.”

Really? The last time I looked, the answers were (a) barges and (b) organised labour.

Cheng tells us she is often asked how she comes up with explanations and diagrams that bring clarity “to various sensitive, delicate, nuanced and convoluted social arguments.” Her training in the discipline of abstract mathematics, she explains, “makes those things come to me very smoothly.”

How smoothly? Well, quite early in the book, “intolerance of intolerance” becomes “tolerance” through a simple mathematical operation — a pratfall in ethics that makes you wonder what kind of world Cheng lives in. Cheng’s abstract mathematics may well be able to solve her real-world problems — but I suspect most other people’s worlds feel a deal less tractable.

“Von Neumann proves what he wants”

Reading Ananyo Bhattacharya’s The Man from the Future for The Telegraph, 7 November 2021

Neumann János Lajos, born in Budapest in 1903 to a wealthy Jewish family, negotiated some of the most lethal traps set by the twentieth century, and did so with breathtaking grace. Not even a painful divorce could dent his reputation for charm, reliability and kindness.

A mathematician with a vice-like memory, he survived the rise of Nazism, and saved others from it. He left Europe and joined Princeton’s Institute for Advanced Study when he was just 29. He worked on ballistics in the Second World War, and on atom and hydrogen bombs in the Cold War. Disturbed yet undaunted by the prospect of nuclear armageddon, he still found time to develop game theory, to rubbish economics, and to establish artificial intelligence as a legitimate discipline.

He died plain ‘Johnny von Neumann’, in 1957, at the Walter Reed Army Medical Center in Washington, surrounded by heavy security in case, in his final delirium, he spilled any state secrets.

Following John von Neumann’s life is rather like playing chess against a computer: he has all the best moves already figured out. ‘A time traveller,’ Ananyo Bhattacharya calls him, ‘quietly seeding ideas that he knew would be needed to shape the Earth’s future.’ Mathematician Rózsa Péter’s assessment of von Neumann’s powers is even more unsettling: ‘Other mathematicians prove what they can,’ she declared; ‘von Neumann proves what he wants.’

Von Neumann had the knack (if we can use so casual a word) of reducing a dizzying variety of seemingly intractable technical dilemmas to problems in logic. In Göttingen he learned from David Hilbert how to think systematically about mathematics, using step-by-step, mechanical procedures. Later he used that insight to play midwife to the computer. In between he rendered the new-fangled quantum theory halfway comprehensible (by explaining how Heisenberg’s and Schrödinger’s wildly different quantum models said the same thing); then, at Los Alamos, he helped perfect the atom bomb and co-invented the unimaginably more powerful H-bomb.

He isn’t even dull! The worst you can point to is some mild OCD: Johnny fiddles a bit too long with the light switches. Otherwise — what? He enjoys a drink. He enjoys fast cars. He’s jolly. You can imagine having a drink with him. He’d certainly make you feel comfortable. Here’s Edward Teller in 1966: ‘Von Neumann would carry on a conversation with my three-year-old son, and the two of them would talk as equals, and I sometimes wondered if he used the same principle when he talked to the rest of us.’

In embarking on his biography of von Neumann, then, Bhattacharya sets himself a considerable challenge: writing about a man who, through crisis after crisis, through stormy intellectual disagreements and amid political controversy, contrived always, for his own sake and others’, to avoid unnecessary drama.

What’s a biographer to do, when part of his subject’s genius is his ability to blend in with his friends, and lead a good life? How to dramatise a man without flaws, who skates through life without any of the personal turmoil that makes for gripping storytelling?

If some lives resist the storyteller’s art, Ananyo Bhattacharya does a cracking job of hiding the fact. He sensibly, and very ably, moves the biographical goal-posts, making this not so much the story of a flesh-and-blood man, more the story of how an intellect evolves, moving as intellects often do (though rarely so spectacularly) from theoretical concerns to applications to philosophy. ‘As he moved from pure mathematics to physics to economics to engineering,’ observed former colleague Freeman Dyson, ‘[Von Neumann] became steadily less deep and steadily more important.’

Von Neumann did not really trust humanity to live up, morally, to its technical capacities. ‘What we are creating now,’ he told his wife, after a sleepless night contemplating an H bomb design, ‘is a monster whose influence is going to change history, provided there is any history left.’ He was a quintessentially European pessimist, forged by years that saw the world he had grown up in being utterly destroyed. It is no fanciful ‘man from the future’, and no mere cynic, who writes, ‘We will be able to go into space way beyond the moon if only people could keep pace with what they create.’

Bhattacharya’s agile, intelligent, intellectually enraptured account of John von Neumann’s life reveals, after all, not “a man from the future”, not a one-dimensional cold-war warrior and for sure not Dr Strangelove (though Peter Sellers nicked his accent). Bhattacharya argues convincingly that von Neumann was a man in whose extraordinarily fertile head the pre-war world found an all-too-temporary lifeboat.

The Art of Conjecturing

Reading Katy Börner’s Atlas of Forecasts: Modeling and mapping desirable futures for New Scientist, 18 August 2021

My leafy, fairly affluent corner of south London has a traffic congestion problem, and to solve it, there’s a plan to close certain roads. You can imagine the furore: the trunk of every kerbside tree sports a protest sign. How can shutting off roads improve traffic flows?

The German mathematician Dietrich Braess answered this one back in 1968, with a graph that kept track of travel times and densities for each road link, and distinguished between flows that are optimal for all cars, and flows optimised for each individual car.

On a Paradox of Traffic Planning is a fine example of how a mathematical model predicts and resolves a real-world problem.
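The mechanism is easy to see in the textbook toy network usually used to illustrate Braess’s result (my numbers, a standard illustration rather than Braess’s own example): two congestible roads, a free shortcut, and every driver worse off once the shortcut opens.

```python
# A toy Braess network: 4000 drivers travel from Start to End.
#   Route A: Start->X takes t/100 minutes (t = cars on that link), then X->End takes 45.
#   Route B: Start->Y takes 45 minutes, then Y->End takes t/100.
N = 4000

# Without a shortcut the equilibrium splits traffic evenly, 2000 per route.
per_route = N // 2
time_without = per_route / 100 + 45
print(f"without shortcut: {time_without:.0f} minutes per driver")   # 65

# Open a zero-minute shortcut X->Y. Start->X->Y->End now costs t/100 + 0 + t/100,
# and at equilibrium everyone takes it: no single driver gains by switching back,
# because 4000/100 = 40 minutes beats either 45-minute fixed leg.
time_with = N / 100 + 0 + N / 100
print(f"with shortcut:    {time_with:.0f} minutes per driver")      # 80
```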

This and over 1,300 other models, maps and forecasts feature in the references to Katy Börner’s latest atlas, which is the third to be derived from Indiana University’s traveling exhibit Places & Spaces: Mapping Science.

Atlas of Science: Visualizing What We Know (2010) revealed the power of maps in science; Atlas of Knowledge: Anyone Can Map (2015) focused on visualisation. In her third and final foray, Börner is out to show how models, maps and forecasts inform decision-making in education, science, technology, and policymaking. It’s a well-structured, heavyweight argument, supported by descriptions of over 300 model applications.

Some entries, like Bernard H. Porter’s Map of Physics of 1939, earn their place purely for their beauty and the insights they offer. Mostly, though, Börner chooses models that were applied in practice and made a positive difference.

Her historical range is impressive. We begin at equations (did you know Newton’s law of universal gravitation has been applied to human migration patterns and international trade?) and move through the centuries, tipping a wink to Jacob Bernoulli’s “The Art of Conjecturing” of 1713 (which introduced probability theory) and James Clerk Maxwell’s 1868 paper “On Governors” (an early gesture at cybernetics) until we arrive at our current era of massive computation and ever-more complex model building.

It’s here that interesting questions start to surface. To forecast the behaviour of complex systems, especially those which contain a human component, many current researchers reach for something called “agent-based modeling” (ABM) in which discrete autonomous agents interact with each other and with their common (digitally modelled) environment.

Heady stuff, no doubt. But, says Börner, “ABMs in general have very few analytical tools by which they can be studied, and often no backward sensitivity analysis can be performed because of the large number of parameters and dynamical rules involved.”

In other words, an ABM offers the researcher an exquisitely detailed forecast, but no clear way of knowing why the model has drawn the conclusions it has — a risky state of affairs, given that all its data is ultimately provided by eccentric, foible-ridden human beings.
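For readers new to the term, the idea itself is simple, whatever its analytical difficulties. Here is a minimal sketch of an agent-based model (my own toy, not one of Börner’s examples): individual agents following local rules, with the aggregate picture, in this case an epidemic curve, emerging from their interactions.

```python
import random
random.seed(0)

# 500 agents mingle at random; infection passes between individuals;
# the epidemic curve emerges from the encounters rather than from any equation.
class Agent:
    def __init__(self):
        self.state = "S"          # S = susceptible, I = infected, R = recovered
        self.days_infected = 0

agents = [Agent() for _ in range(500)]
agents[0].state = "I"             # a single initial case

for day in range(40):
    infectious = [a for a in agents if a.state == "I"]
    for a in infectious:
        for other in random.sample(agents, 5):          # a handful of daily contacts
            if other.state == "S" and random.random() < 0.1:
                other.state = "I"
        a.days_infected += 1
        if a.days_infected > 7:
            a.state = "R"
    tally = {s: sum(a.state == s for a in agents) for s in "SIR"}
    print(day, tally)
```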

Börner’s sumptuous, detailed book tackles issues of error and bias head-on, but she left me tugging at a still bigger problem, represented by those irate protest signs smothering my neighbourhood.

If, over 50 years since the maths was published, reasonably wealthy, mostly well-educated people in comfortable surroundings have remained ignorant of how traffic flows work, what are the chances that the rest of us, industrious and preoccupied as we are, will ever really understand, or trust, all the many other models which increasingly dictate our civic life?

Börner argues that modelling data can counteract misinformation, tribalism, authoritarianism, demonization, and magical thinking.

I can’t for the life of me see how. Albert Einstein said, “Everything should be made as simple as possible, but no simpler.” What happens when a model reaches such complexity that only an expert can really understand it, or when even the expert can’t be entirely sure why the forecast is saying what it’s saying?

We have enough difficulty understanding climate forecasts, let alone explaining them. To apply these technologies to the civic realm begs a host of problems that are nothing to do with the technology, and everything to do with whether anyone will be listening.

Sod provenance

Is the digital revolution that Pixar began with Toy Story stifling art – or saving it? An article for the Telegraph, 24 July 2021

In 2011 the Westfield shopping mall in Stratford, East London, acquired a new public artwork: a digital waterfall by the Shoreditch-based Jason Bruges Studio. The liquid-crystal facets of the 12-metre-high sculpture form a subtle semi-random flickering display, as though water were pouring down its sides. Depending on the shopper’s mood, this either slakes their visual appetite, or leaves them gasping for a glimpse of real rocks, real water, real life.

Over its ten-year life, Bruges’s piece has gone from being a comment about natural processes (so soothing, so various, so predictable!) to being a comment about digital images, a nagging reminder that underneath the apparent smoothness of our media lurks the jagged line and the stair-stepped edge, the grid, the square: the pixel, in other words.

We suspect that the digital world is grainier than the real, coarser, more constricted, and stubbornly rectilinear. But this is a prejudice, and one that’s neatly punctured by a new book by electrical engineer and Pixar co-founder Alvy Ray Smith, “A Biography of the Pixel”. This eccentric work traces the intellectual genealogy of Toy Story (Pixar’s first feature-length computer animation, released in 1995) over bump-maps and around occlusions, along traced rays and through endless samples, computations and transformations, back to the mathematics of the eighteenth century.

Smith’s Whig history is a little hard to take — as though, say, Joseph Fourier’s efforts in 1822 to visualise how heat passed through solids were merely a way-station on the road to Buzz Lightyear’s calamitous launch from the banister rail — but it’s a superb shorthand in which to explain the science.

We can use Fourier’s mathematics to record an image as a series of waves. (Visual patterns, patterns of light and shade and movement, “can be represented by the voltage patterns in a machine,” Smith explains.) And we can recreate these waves, and the image they represent, with perfect fidelity, so long as we have a record of the points at the crests and troughs of each wave.

The locations of these high- and low-points, recorded as numerical coordinates, are pixels. (The little dots you see if you stare far too closely at your computer screen are not pixels; strictly speaking, they’re “display elements”.)
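Behind Smith’s crests and troughs sits the Nyquist-Shannon sampling theorem: sample a band-limited wave at more than twice its highest frequency, and the whole wave can be rebuilt from those numbers alone. A sketch of the idea (my illustration, not Smith’s):

```python
import numpy as np

# Sample a 3 Hz wave at 10 Hz (comfortably above twice its frequency), then
# rebuild the continuous signal from nothing but the stored sample values,
# using Whittaker-Shannon (sinc) interpolation.
f, fs = 3.0, 10.0
n = np.arange(200)                              # 20 seconds of samples
samples = np.sin(2 * np.pi * f * n / fs)        # the stored numbers ("pixels", roughly)

def reconstruct(t):
    """Value of the continuous signal at times t, rebuilt from the samples alone."""
    return np.sum(samples * np.sinc(fs * t[:, None] - n), axis=1)

t = np.linspace(8.0, 12.0, 5)                   # probe times well inside the record
print(np.round(reconstruct(t), 3))
print(np.round(np.sin(2 * np.pi * f * t), 3))   # the original wave, for comparison
# With an infinite record the match would be exact; with a finite one the two
# rows agree closely, the small residue coming from the truncated ends.
```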

Digital media do not cut up the world into little squares. (Only crappy screens do that). They don’t paint by numbers. On the contrary, they faithfully mimic patterns in the real world.

This leads Smith to his wonderfully upside-down-sounding catch-line: “Reality,” he says, “is just a convenient measure of complexity.”

Once pixels are converted to images on a screen, they can be used to create any world, rooted in any geometry, and obeying any physics. And yet these possibilities remain largely unexplored. Almost every computer animation is shot through a fictitious “camera lens”, faithfully recording a Euclidean landscape. Why are digital animations so conservative?

I think this is the wrong question: its assumptions are faulty. I think the ability to ape reality at such high fidelity creates compelling and radical possibilities of its own.

I discussed some of these possibilities with Paul Franklin, co-founder of the SFX company DNEG, who won Oscars for his work on Christopher Nolan’s sci-fi blockbusters Interstellar (2014) and Inception (2010). Franklin says the digital technologies appearing on film sets in the past decade — from lighter cameras and cooler lights to 3-D printed props and LED front-projection screens — are positively disrupting the way films are made. They are making film sets creative spaces once again, and giving the director and camera crew more opportunities for on-the-fly creative decision making. “We used a front-projection screen on the film Interstellar, so the actors could see what visual effects they were supposed to be responding to,” he remembers. “The actors loved being able to see the super-massive black hole they were supposed to be hurtling towards. Then we realised that we could capture an image of the rotating black hole’s disc reflecting in Matthew McConaughey’s helmet: now that’s not the sort of shot you plan.”

Now those projection screens are interactive. Franklin explains: “Say I’m looking down a big corridor. As I move the camera across the screen, instead of it flattening off and giving away the fact that it’s actually just a scenic backing, the corridor moves with the correct perspective, creating the illusion of a huge volume of space beyond the screen itself.”

Effects can be added to a shot in real-time, and in full view of cast and crew. More to the point, what the director sees through their viewfinder is what the audience gets. This encourages the sort of disciplined and creative filmmaking Méliès and Chaplin would recognise, and spells an end to the deplorable industry habit of kicking important creative decisions into the long grass of post-production.

What’s taking shape here isn’t a “good enough for TV” reality. This is a “good enough to reveal truths” reality. (Gargantua, the spinning black hole at Interstellar’s climax, was calculated and rendered so meticulously, it ended up in a paper for the journal Classical and Quantum Gravity.) In some settings, digital facsimile is becoming, literally, a replacement reality.

In 2012 the EU High Representative Baroness Ashton gave a physical facsimile of the burial chamber of Tutankhamun to the people of Egypt. The digital studio responsible for its creation, Factum Foundation, has been working in the Valley of the Kings since 2001, creating ever-more faithful copies of places that were never meant to be visited. They also print paintings (by Velázquez, by Murillo, by Raphael…) that are indistinguishable from the originals.

From the perspective of this burgeoning replacement reality, much that is currently considered radical in the art world appears no more than a frantic shoring-up of old ideas and exhausted values. A couple of days ago Damien Hirst launched The Currency, a physical set of dot paintings, the digitally tokenised images of which can be purchased, traded, and exchanged for the real paintings.

Eventually the purchaser has to choose whether to retain the token, or trade it in for the physical picture. They can’t own both. This, says Hirst, is supposed to challenge the concept of value through money and art. Every participant is confronted with their perception of value, and how it influences their decision.

But hang on: doesn’t money already do this? Isn’t this what money actually is?

It can be no accident that non-fungible tokens (NFTs), which make bits of the internet ownable, have emerged even as the same digital technologies are actually erasing the value of provenance in the real world. There is nothing sillier, or more dated looking, than the Neues Museum’s scan of its iconic bust of Nefertiti, released free to the public after a complex three-year legal battle. It comes complete with a copyright license in the bottom of the bust itself — a copyright claim to the scan of a 3,000-year-old sculpture created 3,000 miles away.

Digital technologies will not destroy art, but they will erode and ultimately extinguish the value of an artwork’s physical provenance. Once facsimiles become indistinguishable from originals, then originals will be considered mere “first editions”.

Of course literature has thrived for many centuries in such an environment; why should the same environment damage art? That would happen only if art had somehow already been reduced to a mere vehicle for financial speculation. As if!

 

How many holes has a straw?

Reading Jordan Ellenberg’s Shape for the Telegraph, 7 July 2021

“One can’t help feeling that, in those opening years of the 1900s, something was in the air,” writes mathematician Jordan Ellenberg.

It’s page 90, and he’s launching into the second act of his dramatic, complex history of geometry (think “History of the World in 100 Shapes”, some of them very screwy indeed).
For page after reassuring page, we’ve been introduced to symmetry, to topology, and to the kinds of notation that make sense of knotty-sounding questions like “how many holes has a straw?”

Now, though, the gloves are off, as Ellenberg records the fin de siècle’s “painful recognition of some unavoidable bubbling randomness at the very bottom of things.”
Normally when sentiments of this sort are trotted out, they’re there to introduce readers to the wild world of quantum mechanics (and, incidentally, we can expect a lot of that sort of thing in the next few years: there’s a centenary looming). Quantum’s got such a grip on our imagination, we tend to forget that it was the johnny-come-lately icing on an already fairly indigestible cake.

A good twenty years before physical reality was shown to be unreliable at small scales, mathematicians were pretzeling our very ideas of space. They had no choice: at the Louisiana Purchase Exposition in 1904, Henri Poincaré, by then the world’s most famous geometer, described how he was trying to keep reality stuck together in light of Maxwell’s famous equations of electromagnetism (Maxwell’s work absolutely refused to play nicely with space). In that talk, he came startlingly close to gazumping Einstein to a theory of relativity.
Also at the same exposition was Sir Ronald Ross, who had discovered that malaria was carried by the bite of the anopheles mosquito. He baffled and disappointed many with his presentation of an entirely mathematical model of disease transmission — the one we use today to predict, well, just about everything, from pandemics to political elections.
It’s hard to imagine two mathematical talks less alike than those of Poincaré and Ross. And yet they had something vital in common: both shook their audiences out of mere three-dimensional thinking.

And thank goodness for it: Ellenberg takes time to explain just how restrictive Euclidean thinking is. For Euclid, the first geometer, living in the 4th century BC, everything was geometry. When he multiplied two numbers, he thought of the result as the area of a rectangle. When he multiplied three numbers, he called the result a “solid”. Euclid’s geometric imagination gave us number theory; but tying mathematical values to physical experience locked him out of more or less everything else. Multiplying four numbers? Now how are you supposed to imagine that in three-dimensional space?

For the longest time, geometry seemed exhausted: a mental gym; sometimes a branch of rhetoric. (There’s a reason Lincoln’s Gettysburg Address characterises the United States as “dedicated to the proposition that all men are created equal”. A proposition is a Euclidean term, meaning a fact that follows logically from self-evident axioms.)

The more dimensions you add, however, the more capable and surprising geometry becomes. And this, thanks to runaway advances in our calculating ability, is why geometry has become our go-to manner of explanation for, well, everything. For games, for example: and extrapolating from games, for the sorts of algorithmic processes we saddle with that profoundly unhelpful label “artificial intelligence” (“artificial alternatives to intelligence” would be better).

All game-playing machines (from the chess player on my phone to DeepMind’s AlphaGo) share the same ghost, the “Markov chain”, formulated by Andrei Markov to map the probabilistic landscape generated by sequences of likely choices. An atheist before the Russian revolution, and treated with predictable shoddiness after it, Markov used his eponymous chain, rhetorically, to strangle religiose notions of free will in their cradle.
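A Markov chain is nothing more exotic than a table of “given where I am now, where am I likely to go next” probabilities, chained together. A toy sketch (my own, not Ellenberg’s), loosely in the spirit of a game grinding towards its end:

```python
import random
random.seed(42)

# Each state lists where the next step may go, and with what probability.
transitions = {
    "opening":   {"opening": 0.2, "midgame": 0.8},
    "midgame":   {"midgame": 0.7, "endgame": 0.3},
    "endgame":   {"endgame": 0.6, "checkmate": 0.4},
    "checkmate": {"checkmate": 1.0},             # absorbing state: the game is over
}

state, path = "opening", ["opening"]
for _ in range(12):
    options = transitions[state]
    state = random.choices(list(options), weights=list(options.values()))[0]
    path.append(state)

print(" -> ".join(path))   # one random walk through the probabilistic landscape
```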

From isosceles triangles to free will is quite a leap, and by now you will surely have gathered that Shape is anything but a straight story. That’s the thing about mathematics: it does not advance; it proliferates. It’s the intellectual equivalent of Stephen Leacock’s Lord Ronald, who “flung himself upon his horse and rode madly off in all directions”.

Containing multitudes as he must, Ellenberg’s eyes grow wider and wider, his prose more and more energetic, as he moves from what geometry means to what geometry does in the modern world.

I mean no complaint (quite the contrary, actually) when I say that, by about two-thirds of the way in, Ellenberg comes to resemble his friend John Horton Conway. Of this game-playing, toy-building celebrity of the maths world, who died from COVID last year, Ellenberg writes, “He wasn’t being wilfully difficult; it was just the way his mind worked, more associative than deductive. You asked him something and he told you what your question reminded him of.”
This is why Ellenberg took the trouble to draw out a mind map at the start of his book. This and the index offer the interested reader (and who could possibly be left indifferent?) a whole new way (“more associative than deductive”) of re-reading the book. And believe me, you will want to. Writing with passion for a nonmathematical audience, Ellenberg is a popular educator at the top of his game.

Nothing happens without a reason

Reading Journey to the Edge of Reason: The Life of Kurt Gödel by Stephen Budiansky for the Spectator, 29 May 2021

The 20th-century Austrian mathematician Kurt Gödel did his level best to live in the world as his philosophical hero Gottfried Wilhelm Leibniz imagined it: a place of pre-established harmony, whose patterns are accessible to reason.

It’s an optimistic world, and a theological one: a universe presided over by a God who does not play dice. It’s most decidedly not a 20th-century world, but “in any case”, as Gödel himself once commented, “there is no reason to trust blindly in the spirit of the time.”

His fellow mathematician Paul Erdős was appalled: “You became a mathematician so that people should study you,” he complained, “not that you should study Leibniz.” But Gödel always did prefer study to self-expression, and this is chiefly why we know so little about him, and why the spectacular deterioration of his final years — a phantasmagoric tale of imagined conspiracies, strange vapours and shadowy intruders, ending in his self-starvation in 1978 — has come to stand for the whole of his life.

“Nothing, Gödel believed, happened without a reason,” says Stephen Budiansky. “It was at once an affirmation of ultrarationalism, and a recipe for utter paranoia.”

You need hindsight to see the paranoia waiting to pounce. But the ultrarationalism — that was always tripping him up. There was something worryingly non-stick about him. He didn’t so much resist the spirit of the time as blunder about totally oblivious of it. He barely noticed the Anschluss, barely escaped Vienna as the Nazis assumed control, and, once ensconced at the Institute for Advanced Study at Princeton, barely credited that tragedy was even possible, or that, say, a friend might die in a concentration camp (it took three letters for his mother to convince him).

Many believed that he’d blundered, in a way typical to him, into marriage with his life-long partner, a foot-care specialist and divorcée called Adele Nimbursky. Perhaps he did. But Budiansky does a spirited job of defending this “uneducated but determined” woman against the sneers of snobs. If anyone kept Gödel rooted to the facts of living, it was Adele. She once stuck a concrete flamingo, painted pink and black, in a flower bed right outside his study window. All evidence suggests he adored it.

Idealistic and dysfunctional, Gödel became, in mathematician Jordan Ellenberg’s phrase, “the romantic’s favourite mathematician”, a reputation cemented by the fact that we knew hardly anything about him. Key personal correspondence was destroyed at his death, while his journals and notebooks — written in Gabelsberger script, a German shorthand that had fallen into disuse by the mid-1920s — resisted all-comers until Cheryl Dawson, wife of the man tasked with sorting through Gödel’s mountain of posthumous papers, learned how to transcribe it all.

Biographer Stephen Budiansky is the first to try to give this pile of new information a human shape, and my guess is it hasn’t been easy.

Budiansky handles the mathematics very well, capturing the air of scientific optimism that held sway over intellectual Vienna and induced Germany’s leading mathematician David Hilbert to declare that “in mathematics there is *nothing* unknowable!”

Solving Hilbert’s four “Problems of Laying Foundations for Mathematics” of 1928 was supposed to secure the foundations of mathematics for good, and Gödel, a 22-year-old former physics student, solved one of them. Unfortunately for Hilbert and his disciples, however, Gödel also proved the insolubility of the other three. So much for the idea that all mathematics could be derived from the propositions of logic: Gödel demonstrated that no consistent logical system could ever capture every mathematical truth.

This discovery didn’t worry Gödel nearly so much as it did his contemporaries. For Gödel, as Budiansky explains, “Mathematical objects and a priori truth was as real to him as anything the senses could directly perceive.” If our reason failed, well, that was no reason to throw away the world: we would always be able to recognise some truths through intuition that could never be established through computation. That, for Gödel, was the whole point of being human.

It’s one thing to be a Platonist in a world dead set against Platonism, or an idealist in a world that’s gone all-in with materialism. It’s quite another to see acts of sabotage in the errors of TV listings magazines, or political conspiracy in the suicide of King Ludwig II of Bavaria. The Elysian calm and concentration afforded Gödel after the Second World War at the Institute for Advanced Study probably did him more harm than good. “Gödel is too alone,” his friend Oskar Morgenstern fretted: “he should be given teaching duties; at least an hour a week.”

In the end, though, neither his friendships nor his marriage nor that ridiculous flamingo could tether to the Earth a man who had always preferred to write for his desk drawer, and Budiansky, for all his tremendous efforts and exhaustive interrogations of Gödel’s times and places, acquaintances and offices, can only leave us, at the end, with an immeasurably enriched version of Gödel the wise child. It’s an undeniably distracting and reductive picture. But — and this is the trouble — it’s not wrong.

What else you got?

Reading Benjamin Labatut’s When We Cease to Understand the World for the Spectator, 14 November 2020

One day someone is going to have to write the definitive study of Wikipedia’s influence on letters. What, after all, are we supposed to make of all these wikinovels? I mean novels that leap from subject to subject, anecdote to anecdote, so that the reader feels as though they are toppling like Alice down a particularly erudite Wikipedia rabbit-hole.

The trouble with writing such a book, in an age of ready internet access, and particularly Wikipedia, is that, however effortless your erudition, no one is any longer going to be particularly impressed by it.

We can all be our own Don DeLillo now; our own W G Sebald. The model for this kind of literary escapade might not even be literary at all; does anyone here remember James Burke’s Connections, a 1978 BBC TV series which took an interdisciplinary approach to the history of science and invention, and demonstrated how various discoveries, scientific achievements, and historical world events were built from one another successively in an interconnected way?

And did anyone notice how I ripped the last 35 words from the show’s Wikipedia entry?

All right, I’m sneering, and I should make clear from the off that When We Cease… is a chilling, gripping, intelligent, deeply humane book. It’s about the limits of human knowledge, and the not-so-very-pleasant premises on which physical reality seems to be built. The author, a Chilean born in Rotterdam in 1980, writes in Spanish. Adrian Nathan West — himself a cracking essayist — fashioned this spiky, pitch-perfect English translation. The book consists, in the main, of four broadly biographical essays. The chemist Fritz Haber finds an industrial means of fixing nitrogen, enabling the revolution in food supply that sustains our world, while also pioneering modern chemical warfare. Karl Schwarzschild imagines the terrible uber-darkness at the heart of a black hole, dies in a toxic first world war and ushers in a thermonuclear second. Alexander Grothendieck is the first of a line of post-war mathematician-paranoiacs convinced they’ve uncovered a universal principle too terrible to discuss in public (and after Oppenheimer, really, who can blame them?). In the longest essay-cum-story, Erwin Schrödinger and Werner Heisenberg slug it out for dominance in a field — quantum physics — increasingly consumed by uncertainty and (as Labatut would have it) dread.

The problem here — if problem it is — is that no connection, in this book of artfully arranged connections, is more than a keypress away from the internet-savvy reader. Wikipedia, twenty years old next year, really has changed our approach to knowledge. There’s nothing aristocratic about erudition now. It is neither a sign of privilege, nor (and this is more disconcerting) is it necessarily a sign of industry. Erudition has become a register, like irony, like sarcasm, like melancholy. It’s become, not the fruit of reading, but a way of perceiving the world.

Literary attempts to harness this great power are sometimes laughable. But this has always been the case for literary innovation. Look at the gothic novel. Fifty-odd years before the peerless masterpiece that is Mary Shelley’s Frankenstein we got Horace Walpole’s The Castle of Otranto, which is jolly silly.

Now, a couple of hundred years after Frankenstein was published, “When We Cease to Understand the World” dutifully repeats the rumours (almost certainly put about by the local tourist industry) that the alchemist Johann Conrad Dippel, born outside Darmstadt in the original Burg Frankenstein in 1673, wielded an uncanny literary influence over our Mary. This is one of several dozen anecdotes which Labatut marshals to drive home the message that There Are Things In This World That We Are Not Supposed to Know. It’s artfully done, and chilling in its conviction. Modish, too, in the way it interlaces fact and fiction.

It’s also laughable, and for a couple of reasons. First, it seems a bit cheap of Labatut to treat all science and mathematics as one thing. If you want to build a book around the idea of humanity’s hubris, you can’t just point your finger at “boffins”.

The other problem is Labatut’s mixing of fact and fiction. He’s not out to cozen us. But here and there this reviewer was disconcerted enough to check his facts — and where else but on Wikipedia? I’m not saying Labatut used Wikipedia. (His bibliography lists a handful of third-tier sources including, I was amused to see, W G Sebald.) Nor am I saying that using Wikipedia is a bad thing.

I think, though, that we’re going to have to abandon our reflexive admiration for erudition. It’s always been desperately easy to fake. (John Fowles.) And today, thanks in large part to Wikipedia, it’s not beyond the wit of most of us to actually *acquire*.

All right, Benjamin, you’re erudite. We get it. What else you got?

Know when you’re being played

Calling Bullshit by Jevin D West and Carl T Bergstrom, and Science Fictions by Stuart Ritchie, reviewed for The Telegraph, 8 August 2020

Last week I received a press release headlined “1 in 4 Brits say ‘No’ to Covid vaccine”. This was such staggeringly bad news, I decided it couldn’t possibly be true. And sure enough, it wasn’t.

Armed with the techniques taught me by biologist Carl Bergstrom and data scientist Jevin West, I “called bullshit” on this unwelcome news, which after all bore all the hallmarks of clickbait.

For a start, the question on which the poll was based was badly phrased. On closer reading it turns out that 25 per cent would decline if the government “made a Covid-19 vaccine available tomorrow”. Frankly, if it was offered *tomorrow* I’d be a refusenik myself. All things being equal, I prefer my medicines tested first.

But what of the real meat of the claim — that daunting figure of “25 per cent”? It turns out that a sample of 2,000 was selected from a pool of 17,000 drawn from the self-selecting community of subscribers to a lottery website. But hush my cynicism: I am assured that the sample of 2,000 was “within +/-2% of ONS quotas for Age, Gender, Region, SEG, and 2019 vote, using machine learning”. In other words, some effort has been made to make the sample of 2,000 representative of the UK population (but only on five criteria, which is not very impressive. And that whole “+/-2%” business means that up to 40 of the sample weren’t representative of anything).

For this, “machine learning” had to be employed (and, later, “a proprietary machine learning system”)? Well, of course not.  Mention of the miracle that is artificial intelligence is almost always a bit of prestidigitation to veil the poor quality of the original data. And anyway, no amount of “machine learning” can massage away the fact that the sample was too thin to serve the sweeping conclusions drawn from it (“Only 1 in 5 Conservative voters (19.77%) would say No” — it says, to two decimal places, yet!) and is anyway drawn from a non-random population.

Exhausted yet? Then you may well find Calling Bullshit essential reading. Even if you feel you can trudge through verbal bullshit easily enough, this book will give you the tools to swim through numerical snake-oil. And this is important, because numbers easily slip  past the defences we put up against mere words. Bergstrom and West teach a course at the University of Washington from which this book is largely drawn, and hammer this point home in their first lecture: “Words are human constructs,” they say; “Numbers seem to come directly from nature.”

Shake off your naive belief in the truth or naturalness of the numbers quoted in news stories and advertisements, and you’re halfway towards knowing when you’re being played.

Say you diligently applied the lessons in Calling Bullshit, and really came to grips with percentages, causality, selection bias and all the rest. You may well discover that you’re now ignoring everything — every bit of health advice, every over-wrought NASA announcement about life on Mars, every economic forecast, every exit poll. Internet pioneer Jaron Lanier reached this point last year when he came up with Ten Arguments for Deleting Your Social Media Accounts Right Now. More recently the best-selling Swiss pundit Rolf Dobelli has ordered us to Stop Reading the News. Both deplore our current economy of attention, which values online engagement over the provision of actual information (as when, for instance, a  review like this one gets headlined “These Two Books About Bad Data Will Break Your Heart”; instead of being told what the piece is about, you’re being sold on the promise of an emotional experience).

Bergstrom and West believe that public education can save us from this torrent of micro-manipulative blither. Their book is a handsome contribution to that effort. We’ve lost Lanier and Dobelli, but maybe the leak can be stopped up. This, essentially, is what the authors are about; they’re shoring up the Enlightenment ideal of a civic society governed by reason.

Underpinning this ideal is science, and the conviction that the world is assembled on a bedrock of fundamentally unassailable truths.

Philosophical nit-picking apart, science undeniably works. But in Science Fictions Stuart Ritchie, a psychologist based at King’s College, shows just how contingent and gimcrack and even shoddy the whole business can get. He has come to praise science, not to bury it; nevertheless, his analyses of science’s current ethical ills — fraud, hype, negligence and so on — are devastating.

The sheer number of problems besetting the scientific endeavour becomes somewhat more manageable once we work out which ills are institutional, which have to do with how scientists communicate, and which are existential problems that are never going away whatever we do.

Our evolved need to express meaning through stories is an existential problem. Without stories, we can do no thinking worth the name, and this means that we are always going to prioritise positive findings over negative ones, and find novelties more charming than rehearsed truths.

Such quirks of the human intellect can be and have been corrected by healthy institutions at least some of the time over the last 400-odd years. But our unruly mental habits run wildly out of control once they are harnessed to a media machine driven by attention.  And the blame for this is not always easily apportioned: “The scenario where an innocent researcher is minding their own business when the media suddenly seizes on one of their findings and blows it out of proportion is not at all the norm,” writes Ritchie.

It’s easy enough to mount a defence of science against the tin-foil-hat brigade, but Ritchie is attempting something much more discomforting: he’s defending science against scientists. Fraudulent and negligent individuals fall under the spotlight occasionally, but institutional flaws are Ritchie’s chief target.

Reading Science Fictions, we see field after field fail to replicate results, correct mistakes, identify the best lines of research, or even begin to recognise talent. In Ritchie’s proffered bag of solutions are desperately needed reforms to the way scientific work is published and cited, and some more controversial ideas about how international mega-collaborations may enable science to catch up on itself and check its own findings effectively (or indeed at all, in the dismal case of economic science).

At best, these books together offer a path back to a civic life based on truth and reason. At worst, they point towards one that’s at least a little bit defended against its own bullshit. Time will tell whether such efforts can genuinely turn the ship around, or are simply here to entertain us with a spot of deckchair juggling. But there’s honest toil here, and a lot of smart thinking with it. Reading both, I was given a fleeting, dizzying reminder of what it once felt like to be a free agent in a factual world.