What about the unknown knowns?

Reading Nate Silver’s On the Edge and The Art of Uncertainty by David Spiegelhalter for the Spectator

The Italian actuary Bruno de Finetti, writing in 1931, was explicit: “Probability,” he wrote, “does not exist.”

Probability, it’s true, is simply the measure of an observer’s uncertainty, and in The Art of Uncertainty British statistician David Spiegelhalter explains how his extraordinary and much-derided science has evolved to the point where it is even able to say useful things about why things have turned out the way they have, based purely on present evidence. Spiegelhalter was a member of the Statistical Expert Group of the 2018 UK Infected Blood Inquiry, and you know his book’s a winner the moment he tells you that between 650 and 3,320 people nationwide died from tainted transfusions. By this late point, along with the pity and the horror, you have a pretty good sense of the labour and ingenuity that went into those peculiarly specific, peculiarly wide-spread numbers.

At the heart of Spiegelhalter’s maze, of course, squats Donald Rumsfeld, once pilloried for his convoluted syntax at a 2002 Department of Defense news briefing, and now immortalised for what came out of it: the best ever description of what it’s like to act under conditions of uncertainty. Rumsfeld’s “unknown unknowns” weren’t the last word, however; Slovenian philosopher Slavoj Žižek (it had to be Žižek) pointed out that there are also “unknown knowns” — “all the unconscious beliefs and prejudices that determine how we perceive reality.”

In statistics, something called Cromwell’s Rule cautions us never to bed absolute certainties (probabilities of 0 or 1) into our models. Still, “unknown knowns” fly easily under the radar, usually in the form of natural language. Spiegelhalter tells how, in 1961, John F Kennedy authorised the invasion of the Bay of Pigs, quite unaware of the minatory statistics underpinning the phrase “fair chance” in an intelligence briefing.
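To see the rule's force, run the numbers: under Bayes' theorem, a prior of exactly 0 (or 1) is immune to any quantity of evidence. A minimal sketch, with invented likelihoods rather than anything from Spiegelhalter's book:

```python
# Bayes' theorem: the posterior is driven by prior times likelihood. A prior
# of exactly 0 contributes nothing to the numerator, so no evidence, however
# strong, can ever move it. (The likelihood values here are invented.)
def update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability after one piece of evidence."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

for prior in (0.0, 0.01):
    p = prior
    for _ in range(10):              # ten rounds of strongly confirming evidence
        p = update(p, 0.99, 0.01)
    print(prior, "->", round(p, 6))  # 0.0 stays 0.0; 0.01 climbs to ~1.0
```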

From this, you could draw a questionable moral: that the more we quantify the world, the better our decisions will be. Nate Silver — poker player, political pundit and author of 2012’s The Signal and the Noise — finds much to value in this idea. On the Edge, though, is more about the unforeseen consequences that follow.

There is a sprawling social ecosystem out there that Silver dubs “the River”, which includes “everyone from low-stakes poker pros just trying to grind out a living to crypto kings and venture-capital billionaires.” On the Edge is, among many other things, a cracking piece of popular anthropology.

Riverians accept that it is very hard to be certain about anything; they abandon certainty for games of chance; and they end up treating everything as a market to be played.

Remember those chippy, cheeky chancers immortalised in films like 21 (2008: MIT’s Blackjack Team takes on Las Vegas) and Moneyball (2011: a young economist up-ends baseball)?

More than a decade has passed, and they’re not buccaneers any more. Today, says Silver, “the Riverian mindset is coming from inside the house.”

You don’t need to be a David Spiegelhalter to be a Riverian. All you need is the willingness to take bets on very long odds.

Professional gamblers learn when and how to do this, and this is why that subset of gamblers called Silicon Valley venture capitalists is willing to back wilful contrarians like Elon Musk (on a good day) and (on a bad day) Ponzi-scheming crypto-crooks like Sam Bankman-Fried.

Success as a Riverian isn’t guaranteed. As Silver points out, “a lot of the people who play poker for a living would be better off — at least financially — doing something else.” Then again, those who make it in the VC game expect to double their money every four years. And those who find they’ve backed a Google or a SpaceX can find themselves living in a very odd world indeed.

Recently the billionaire set has been taking an interest, and investing, in “effective altruism”, a hyper-utilitarian dish cooked up by Oxford philosopher Will MacAskill. “EA” promises to multiply the impact of acts of charity by studying their long-term effectiveness — an approach that naturally appeals to minds focused on quantification. Silver describes the current state of the movement as “stuck in the uncanny valley between being abstractly principled and ruthlessly pragmatic, with the sense that you can kind of make it up as you go along”. Who here didn’t see that one coming? Most of the original EA set now spend their time agonising over the apocalyptic potential of artificial intelligence.

The trick to Riverian thinking is to decouple things, in order to measure their value. Rather than say, “The Chick-fil-A CEO’s views on gay marriage have put me off my lunch,” you say, “the CEO’s views are off-putting, but this is a damn fine sandwich — I’ll invest.”

That such pragmatism might occasionally ding your reputation, we’ll take as read. But what happens when you do the opposite, glomming context after context onto every phenomenon in pursuit of some higher truth? Soon everything becomes morally equivalent to everything else and thinking becomes impossible.

Silver mentions a December 2023 congressional hearing in which the tone-deaf presidents of Harvard, Penn and MIT, in their sophomoric efforts to be right about all things all at once all the time, managed to argue their way into anti-Semitism. (It’s on YouTube if you haven’t seen it already. The only thing I can compare it to is how The Fast Show’s unlucky Alf used to totter invariably toward the street’s only open manhole.) No wonder that the left-leaning, non-Riverian establishment in politics and education is becoming, in Silver’s words, “a small island threatened by a rising tide of disapproval.”

We’d be foolish in the extreme to throw in our lot with the Riverians, though: people whose economic model reduces to betting long odds on the hobby-horses of contrarian asshats, never mind what gets broken in the process.

If we want a fairer, more equally apportioned world, these books should convince us to spend less time worrying about what people are thinking, and more time attending to how they are thinking.

We cannot afford to be ridden by unknown knowns.


Geometry’s sweet spot

Reading Love Triangle by Matt Parker for the Telegraph

“These are small,” says Father Ted in the eponymous sitcom, and he holds up a pair of toy cows. “But the ones out there,” he explains to Father Dougal, pointing out the window, “are far away.”

It may not sound like much of a compliment to say that Matt Parker’s new popular mathematics book made me feel like Dougal, but fans of Graham Linehan’s masterpiece will understand. I mean that I felt very well looked after, and, in all my ignorance, handled with a saint-like patience.

Calculating the size of an object from its spatial position has tried finer minds than Dougal’s. A long virtuoso passage early on in Love Triangle enumerates the half-dozen stages of inductive reasoning required to establish the distance of the largest object in the universe — a feature within the cosmic web of galaxies called The Giant Ring. Over nine billion light years away, the Giant Ring still occupies 34.5 degrees of the sky: now that’s what I call big and far away.

Measuring it has been no easy task, and yet the first, foundational step in the calculation turns out to be something as simple as triangulating the length of a piece of road.

“Love Triangle”, as no one will be surprised to learn, is about triangles. Triangles were invented (just go along with me here) in ancient Egypt, where the regularly flooding river Nile obliterated boundary markers for miles around and made rural land disputes a tiresome inevitability. Geometry, says the historian Herodotus around 430 BC, was invented to calculate the exact size of a plot of land. We’ve no reason to disbelieve him.

Parker spends a good amount of time demonstrating the practical usefulness of basic geometry, which allows us to recover the shape and size of a right-angled triangle from a single angle and the length of a single side. At one point, on a visit to Tokyo, he uses a transparent ruler and a tourist map to calculate the height of the city’s tallest tower, the Skytree.
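The underlying trick is ordinary trigonometry: one known side (your distance from the tower) and one measured angle pin down the height. A toy sketch with invented figures, not Parker's actual ruler-and-map readings:

```python
import math

# Stand a known distance from the tower, measure the angle up to its tip,
# and the tangent does the rest. (Both input numbers below are invented.)
distance_m = 1500.0     # distance from the tower's base, read off a map
elevation_deg = 22.8    # angle from the horizontal up to the tip

height_m = distance_m * math.tan(math.radians(elevation_deg))
print(f"Estimated height: {height_m:.0f} m")   # ~631 m; the Skytree is 634 m
```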

Having shown triangles performing everyday miracles, he then tucks into their secret: “Triangles,” he explains, “are in the sweet spot of having enough sides to be a physical shape, while still having enough limitations that we can say generalised and meaningful things about them.” Shapes with more sides get boring really quickly, not least because they become so unwieldy in higher dimensions, which is where so many of the joys of real mathematics reside.

Adding dimensions to triangles adds just one corner per dimension. A square, on the other hand, explodes, doubling its number of corners with each dimension. (A cube has eight.) This makes triangles the go-to shape for anyone who wants to assemble meshes in higher dimensions. All sorts of complicated paths are brought within computational reach, making possible all manner of civilisational triumphs, including (but not limited to) photorealistic animations.
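The corner counts are easy to tabulate: the triangle's n-dimensional cousin, the simplex, gains one corner per dimension, while the n-dimensional cube doubles its corners every time:

```python
# Corners of the n-simplex (triangle family) versus the n-cube (square family).
for dim in range(1, 11):
    simplex_corners = dim + 1    # line 2, triangle 3, tetrahedron 4, ...
    cube_corners = 2 ** dim      # line 2, square 4, cube 8, tesseract 16, ...
    print(f"{dim}D: simplex {simplex_corners:>2}, cube {cube_corners:>4}")
```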

So many problems can be cracked by reducing them to triangles, there is an entire mathematical discipline, trigonometry, concerned with the relationships between their angles and side lengths. Parker’s adventures on the applied side of trigonometry become, of necessity, something of a blooming, buzzing confusion, but his anecdotes are well judged and lead the reader seamlessly into quite complex territory. Ever wanted to know how Kathleen Lonsdale applied Fourier transforms to X-ray waves, making possible Rosalind Franklin’s work on DNA structure? Parker starts us off on that journey by wrapping a bit of paper around a cucumber and cutting it at a slant. Half a dozen pages later, we may not have the firmest grasp of what Parker calls the most incredible bit of maths most people have never heard of, but we do have a clear map of what we do not know.

Whether Parker’s garrulousness charms you or grates on you will be a matter of taste. I have a pious aversion to writers who feel the need to cheer their readers through complex material every five minutes. But it’s hard not to tap your foot to cheap music, and what could be cheaper than Parker’s assertion that introducing coordinates early on in a maths lesson “could be considered ‘putting Descartes before the course’”?

Parker has a fine old time with his material, and only a curmudgeon can fail to be charmed by his willingness to call Heron’s two-thousand-year-old formula for finding the area of a triangle “stupid” (he’s not wrong, neither) and the elongated pentagonal gyrocupolarotunda a “dumb shape”.

“These confounded dials…”

Reading The Seven Measures of the World by Piero Martin and Four Ways of Thinking by David Sumpter, for New Scientist, 23 October 2023

Blame the sundial. A dinner guest in a play by the Roman writer Plautus, his stomach rumbling, complains that

“The town’s so full of these confounded dials
The greatest part of the inhabitants,
Shrunk up with hunger, crawl along the streets”

We’ve been slaves to number ever since. Not that we need complain, according to two recent books. Piero Martin’s spirited and fascinating The Seven Measures of the World traces our ever-more precise grasp of physical reality, while Four Ways of Thinking, by the Uppsala-based mathematician David Sumpter, shows number illuminating human complexities.

Martin’s stories about common units of measure (candelas and moles rub shoulders here with amperes and kelvins) tip their hats to the past. The Plautus quotation is Martin’s, as is the assertion (very welcome to this amateur pianist) that the unplayable tempo Beethoven set for his “Hammerklavier” sonata (138 beats per minute!) was caused by a broken metronome.

Martin’s greater purpose is to trace, in the way we measure our metres and minutes, kilogrammes and candelas, the outline of “a true Copernican revolution”.

In the past, units of measure were defined with reference to material prototypes. In November 2018 it was decided to define the international units with reference to fundamental constants themselves. The metre is now defined via the speed of light and the length of a second as measured by atomic clocks, while the kilogramme is defined as a function of two physical constants: the speed of light, c, and Planck’s constant, h. The dizzying “hows” of this revolution beg not a few “whys”, but Martin is here to explain why such eye-watering accuracy is vital to the running of our world.
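In outline, the scheme works by fixing exact numerical values for a handful of constants and letting the units fall out of them. A minimal sketch, using the exact defining values; the photon example at the end is my illustration, not Martin's:

```python
# The three constants behind the second, the metre and the kilogramme.
# Their numerical values are fixed exactly, by definition.
delta_nu_cs = 9_192_631_770    # Hz: caesium-133 transition (defines the second)
c = 299_792_458                # m/s: speed of light (defines the metre)
h = 6.626_070_15e-34           # J*s: Planck constant (defines the kilogramme)

# One metre is the distance light covers in 1/299,792,458 of a second.
print(f"Light covers one metre in {1 / c:.3e} s")

# The kilogramme follows because h carries units of kg*m^2/s: with h, the
# metre and the second all fixed, mass is pinned down too. For instance,
# energy h*f has a mass equivalent of h*f/c^2:
f = 1e9                        # Hz, an arbitrary illustrative frequency
print(f"Mass equivalent of a 1 GHz photon: {h * f / c**2:.3e} kg")
```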

Sumpter’s Four Ways of Thinking is more speculative, organising reality around the four classes of phenomena defined by mathematician Stephen Wolfram’s little-read 1,192-page opus from 2002, A New Kind of Science. Sumpter is quick to reassure us that his homage to the eccentric and polymathic Wolfram is not so much “a new kind of science” as “a new way to convince your friends to go jogging with you” or perhaps “a new way of controlling chocolate cake addiction.”

The point is that all phenomena are, mathematically speaking, either stable, periodic, chaotic, or complex. Learn the differences between these phenomena, and you are halfway to better understanding your own life.
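Wolfram drew his four classes from the behaviour of one-dimensional cellular automata, which take a dozen lines to conjure up. In the sketch below (mine, not Sumpter's), the rule numbers are commonly cited examples of each class: 254 settles at once (stable), 108 freezes into a repeating pattern (periodic), 30 boils (chaotic) and 110 produces long-lived structures (complex):

```python
# Elementary cellular automata: each cell's next state depends on itself and
# its two neighbours, according to an 8-bit "rule" number.
def step(cells, rule):
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

def run(rule, width=64, steps=16):
    cells = [0] * width
    cells[width // 2] = 1                      # one live cell in the middle
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells, rule)

for rule in (254, 108, 30, 110):               # one example rule from each class
    print(f"--- rule {rule} ---")
    run(rule)
```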

Much of Four Ways is assembled semi-novelistically around a summer school in complex systems that Sumpter attended at the Santa Fe Institute in 1997. His half-remembered, half-invented mathematical conversations with fellow attendees won me over, though I have a strong aversion to exposition through dialogue.

I incline to think Sumpter’s biographical sketches are stronger. The strengths and weaknesses of statistical thinking are explored through the life of Ronald Fisher, the unlovely genius who in the 1940s almost single-handedly created the foundations for statistical science.

That the world does not stand still to be measured, and is often best considered a dynamical system, is an insight given to Alfred Lotka, the chemist who in the first half of the 20th century came tantalisingly close to formulating systems biology.

Chaotic phenomena are caught in a sort of negative image through the work of NASA software engineer Margaret Hamilton, whose determination never to make a mistake — indeed, to make mistakes in her code impossible — landed the crew of Apollo 11 on the Moon.

Soviet mathematician Andrej Kolmogorov personifies complex thinking, as he abandons the axiom-based approach to mathematics and starts to think in terms of information and computer code.

Can mathematics really elucidate life? Do we really need mathematical thinking to realise that “each of us follows our individual rules of interaction and out of that emerges the complexity of our society”? Maybe not. But the journey is gripping.


Ideas are like boomerangs

Reading In a Flight of Starlings: The Wonder of Complex Systems by Giorgio Parisi for The Telegraph, 1 July 2023

“Researchers,” writes Giorgio Parisi, recipient of the 2021 Nobel Prize in Physics, “often pass by great discoveries without being able to grasp them.” A friend’s grandfather identified and then ignored a mould that killed bacteria, and so missed out on the discovery of penicillin. This story was told to Parisi in an attempt to comfort him for the morning in 1970 he’d spent with another hot-shot physicist, Gerard ‘t Hooft, dancing around what in hindsight was a perfectly obvious application of some particle accelerator findings. Having teetered on the edges of quantum chromodynamics, they walked on by; decades would pass before either man got another stab at the Nobel. “Ideas are often like boomerangs,” Parisi explains, and you can hear the sigh in his voice; “they start out moving in one direction but end up going in another.”

In a Flight of Starlings is the latest addition to an evergreen genre: the scientific confessional. Read this, and you will get at least a frisson of what a top-flight career in physics might feel like.

There’s much here that is charming and comfortable: an eminent man sharing tales of a bygone era. Parisi began his first year of undergraduate physics in November 1966 at Sapienza University in Rome, when computer analysis involved lugging about (and sometimes dropping) metre-long drawers of punched cards.

The book’s title refers to Parisi’s efforts to compute the murmurations of starlings. Recently he’s been trying to work out how many solid spheres of different sizes will fit into a box. There’s a goofiness to these pet projects that belies their significance. The techniques developed to follow thousands of starlings through three dimensions of space and one of time bear a close resemblance to those used to solve statistical physics problems. And fitting marbles in a box? That’s a classic problem in information theory.

The implications of Parisi’s work emerge slowly. The reader, who might, in all honesty, be touched now and again by boredom, sits up straighter once the threads begin to braid.

Physics for the longest time could not handle complexity. Galileo’s model of the physical world did not include friction, not because friction was any sort of mystery, but because the mathematics of his day couldn’t handle it.

Armed with better mathematics and computational tools, physics can now study phenomena that Galileo could never have imagined would be part of physics. For instance, friction. For instance, the melting of ice, and the boiling of water: phenomena that, from the point of view of physics, are very strange indeed. Coming up with models that explain the phase transitions of more complex and disordered materials, such as glass and pitch, is something Parisi has been working on, on and off, since the middle of the 1990s.

Efforts to model more and more of the world are nothing new, but once rare successes now tumble in upon the field at a dizzying rate; almost as though physics has undergone its own phase transition. This, Parisi says, is because once two systems in different fields of physics can be described by the same mathematical structure, “a rapid advancement of knowledge takes place in which the two fields cross-fertilize.”

This has clearly happened in Parisi’s own specialism. The mathematics of disorder apply whether you’re describing why some particles try to spin in opposite directions, or why certain people sell shares that others are buying, or what happens when some dinner guests want to sit as far away from other guests as possible.

Phase transitions eloquently connect the visible and quantum worlds. Not that such connections are particularly hard to make. Once you know the physics, quantum phenomena are easy to spot. Ever wondered at a rainbow?

“Much becomes obvious in hindsight,” Parisi writes. “Yet it is striking how in both physics and mathematics there is a lack of proportion between the effort needed to understand something for the first time and the simplicity and naturalness of the solution once all the required stages have been completed.”

The striking “murmurations” of airborne starlings are created when each bird in the flock pays attention to the movements of its nearest neighbour. Obvious, no?

But as Parisi in his charming way makes clear, whenever something in this world seems obvious to us, it is likely because we are perched, knowingly or not, on the shoulders of giants.

A pile of dough

Reading Is Maths Real? by Eugenia Cheng, 17 May 2023

Let’s start with an obvious trick question: why does 1 plus 1 equal 2? Well, it often doesn’t. Add one pile of dough to one pile of dough and you get, well, one pile of dough.

This looks like a twisty and trivial point, but it isn’t. Mathematics describes the logical operations of logical worlds, but you can dream up any number of those, and you’re going to need many more than one of them to even come close to modelling the real world.

“Deep down,” writes mathematician Eugenia Cheng, “maths isn’t about clear answers, but about increasingly nuanced worlds in which we can explore different things being true.”

Cheng wants the reader to ask again all those “stupid” questions they asked about mathematics as kids, and so discover what it feels like to be a real mathematician. Sure enough, mathematicians turn out to be human beings, haunted by doubts, saddled with faulty memories, blessed with unsuspected resources of intuition, guided by imagination. Mathematics is a human pursuit, depicted here from the inside.

We begin in the one-dimensional world of real numbers, and learn in what kinds of worlds numbers can be added together in any order (“commutativity”) and operations grouped together any-old-how (“associativity”). Imaginary numbers (multiples of i, the square root of minus one) add a second dimension to our mathematical world, and sure enough there are now patterns we can see that we couldn’t see before, “when we were all squashed into one dimension”.
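The distinctions are concrete enough to poke at directly. A toy sketch, with my examples rather than Cheng's: addition of numbers has both properties, subtraction loses associativity, and string concatenation keeps associativity while losing commutativity:

```python
a, b, c = 2, 3, 4
assert a + b == b + a                     # addition commutes...
assert (a + b) + c == a + (b + c)         # ...and associates

assert (a - b) - c != a - (b - c)         # subtraction does not associate

s, t = "ab", "cd"
assert s + t != t + s                     # concatenation does not commute...
assert (s + t) + "e" == s + (t + "e")     # ...but it does associate
print("all checks passed")
```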

Keep adding dimensions. (The more we add to our mathematical universe, however, the less we can rely on our visual imagination, and the more we come to rely on algebra.) Complex numbers (which have a real part and an imaginary part) give us the field of complex analysis, on which modern physics depends.

And we don’t stop there. Cheng’s object is not to teach us maths, but to show us what we don’t know; we eventually arrive at a terrific description of mathematical braids in higher dimensions that at the very least we might find interesting, even if we don’t understand it. This is the generous impulse driving this book, and it’s splendidly realised.

Alas, Is Maths Real?, not content with being a book about what it is like to be a mathematician, also wants to be a book about what it is like to be Eugenia Cheng, and success, in this respect, leads to embarrassment.

We’ll start with the trivia and work up.

There’s Cheng’s inner policeman, reminding her, as she discusses the role of pictures in mathematics, “to acknowledge that this is thus arguably ableist and excludes those who can’t see.”

There are narcissistic exclamations that defy parody, as when Cheng explains that “the only thing I want everyone to care about is reducing human suffering, violence, hunger, prejudice, exclusion and heartbreak.” (Good to know.)

There are the Soviet-style political analogies for everything. Imaginary and complex numbers took a while to be accepted as numbers because, well, you know people: “some people lag behind, perhaps accepting women and black people but not gay people, or maybe accepting gay, lesbian and bisexual people but not transgender people.”

A generous reader may simply write these irritations off, but then Cheng’s desire to smash patriarchal power structures with the righteous hammer of ethnomathematics (which looks for “other types of mathematics” overlooked, undervalued or suppressed by the colonialist mainstream) tips her into some depressingly hackneyed nonsense. “Contemporary culture,” she tells us, “is still baffled by how ancient cultures were able to do things like build Stonehenge or construct the pyramids.”

Really? The last time I looked, the answers were (a) barges and (b) organised labour.

Cheng tells us she is often asked how she comes up with explanations and diagrams that bring clarity “to various sensitive, delicate, nuanced and convoluted social arguments.” Her training in the discipline of abstract mathematics, she explains, “makes those things come to me very smoothly.”

How smoothly? Well, quite early in the book, “intolerance of intolerance” becomes “tolerance” through a simple mathematical operation — a pratfall in ethics that makes you wonder what kind of world Cheng lives in. Cheng’s abstract mathematics may well be able to solve her real-world problems — but I suspect most other people’s worlds feel a deal less tractable.

“Von Neumann proves what he wants”

Reading Ananyo Bhattacharya’s The Man from the Future for The Telegraph, 7 November 2021

Neumann János Lajos, born in Budapest in 1903 to a wealthy Jewish family, negotiated some of the most lethal traps set by the twentieth century, and did so with breathtaking grace. Not even a painful divorce could dent his reputation for charm, reliability and kindness.

A mathematician with a vise-like memory, he survived, and saved others from, the rise of Nazism. He left Europe and joined Princeton’s Institute for Advanced Study when he was just 29. He worked on ballistics in the Second World War, and on atom and hydrogen bombs in the Cold War. Disturbed yet undaunted by the prospect of nuclear armageddon, he still found time to develop game theory, to rubbish economics, and to establish artificial intelligence as a legitimate discipline.

He died plain ‘Johnny von Neumann’, in 1957, at the Walter Reed Army Medical Center in Washington, surrounded by heavy security in case, in his final delirium, he spilled any state secrets.

Following John von Neumann’s life is rather like playing chess against a computer: he has all the best moves already figured out. ‘A time traveller,’ Ananyo Bhattacharya calls him, ‘quietly seeding ideas that he knew would be needed to shape the Earth’s future.’ Mathematician Rózsa Péter’s assessment of von Neumann’s powers is even more unsettling: ‘Other mathematicians prove what they can,’ she declared; ‘von Neumann proves what he wants.’

Von Neumann had the knack (if we can use so casual a word) of reducing a dizzying variety of seemingly intractable technical dilemmas to problems in logic. In Göttingen he learned from David Hilbert how to think systematically about mathematics, using step-by-step, mechanical procedures. Later he used that insight to play midwife to the computer. In between he rendered the new-fangled quantum theory halfway comprehensible (by explaining how Heisenberg’s and Schrödinger’s wildly different quantum models said the same thing); then, at Los Alamos, he helped perfect the atom bomb and co-invented the unimaginably more powerful H-bomb.

He isn’t even dull! The worst you can point to is some mild OCD: Johnny fiddles a bit too long with the light switches. Otherwise — what? He enjoys a drink. He enjoys fast cars. He’s jolly. You can imagine having a drink with him. He’d certainly make you feel comfortable. Here’s Edward Teller in 1966: ‘Von Neumann would carry on a conversation with my three-year-old son, and the two of them would talk as equals, and I sometimes wondered if he used the same principle when he talked to the rest of us.’

In embarking on his biography of von Neumann, then, Bhattacharya sets himself a considerable challenge: writing about a man who, through crisis after crisis, through stormy intellectual disagreements and amid political controversy, contrived always, for his own sake and others’, to avoid unnecessary drama.

What’s a biographer to do, when part of his subject’s genius is his ability to blend in with his friends, and lead a good life? How to dramatise a man without flaws, who skates through life without any of the personal turmoil that makes for gripping storytelling?

If some lives resist the storyteller’s art, Ananyo Bhattacharya does a cracking job of hiding the fact. He sensibly, and very ably, moves the biographical goal-posts, making this not so much the story of a flesh-and-blood man, more the story of how an intellect evolves, moving as intellects often do (though rarely so spectacularly) from theoretical concerns to applications to philosophy. ‘As he moved from pure mathematics to physics to economics to engineering,’ observed former colleague Freeman Dyson, ‘[Von Neumann] became steadily less deep and steadily more important.’

Von Neumann did not really trust humanity to live up, morally, to its technical capacities. ‘What we are creating now,’ he told his wife, after a sleepless night contemplating an H bomb design, ‘is a monster whose influence is going to change history, provided there is any history left.’ He was a quintessentially European pessimist, forged by years that saw the world he had grown up in being utterly destroyed. It is no fanciful ‘man from the future’, and no mere cynic, who writes, ‘We will be able to go into space way beyond the moon if only people could keep pace with what they create.’

Bhattacharya’s agile, intelligent, intellectually enraptured account of John von Neumann’s life reveals, after all, not “a man from the future”, not a one-dimensional cold-war warrior and for sure not Dr Strangelove (though Peter Sellers nicked his accent). Bhattacharya argues convincingly that von Neumann was a man in whose extraordinarily fertile head the pre-war world found an all-too-temporary lifeboat.

The Art of Conjecturing

Reading Katy Börner’s Atlas of Forecasts: Modeling and mapping desirable futures for New Scientist, 18 August 2021

My leafy, fairly affluent corner of south London has a traffic congestion problem, and to solve it, there’s a plan to close certain roads. You can imagine the furore: the trunk of every kerbside tree sports a protest sign. How can shutting off roads improve traffic flows?

The German mathematician Dietrich Braess answered this one back in 1968, with a graph that kept track of travel times and densities for each road link, and distinguished between flows that are optimal for all cars, and flows optimised for each individual car.

On a Paradox of Traffic Planning is a fine example of how a mathematical model predicts and resolves a real-world problem.
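The paradox is easy to reproduce. In the standard textbook setup (the figures below are the usual teaching illustration, not Braess's own 1968 numbers), opening a free shortcut makes every driver's journey slower at the selfish equilibrium:

```python
N = 4000                        # drivers travelling from Start to End

def congested(x):               # travel time on a narrow link that clogs with use
    return x / 100

FIXED = 45                      # travel time on a wide, congestion-free link

# Without the shortcut: two symmetric routes, so drivers split evenly.
per_driver_before = congested(N / 2) + FIXED           # 20 + 45 = 65 minutes

# With a zero-cost shortcut joining the two congested links, the selfish
# equilibrium sends every driver down both congested links.
per_driver_after = congested(N) + 0 + congested(N)     # 40 + 0 + 40 = 80 minutes

print(per_driver_before, per_driver_after)             # 65.0 80.0: worse for all
```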

Braess’s model, along with over 1,300 other models, maps and forecasts, features in the references to Katy Börner’s latest atlas, which is the third to be derived from Indiana University’s traveling exhibit Places & Spaces: Mapping Science.

Atlas of Science: Visualizing What We Know (2010) revealed the power of maps in science; Atlas of Knowledge: Anyone Can Map (2015), focused on visualisation. In her third and final foray, Börner is out to show how models, maps and forecasts inform decision-making in education, science, technology, and policymaking. It’s a well-structured, heavyweight argument, supported by descriptions of over 300 model applications.

Some entries, like Bernard H. Porter’s Map of Physics of 1939, earn their place purely thanks to their beauty and the insights they offer. Mostly, though, Börner chooses models that were applied in practice and made a positive difference.

Her historical range is impressive. We begin at equations (did you know Newton’s law of universal gravitation has been applied to human migration patterns and international trade?) and move through the centuries, tipping a wink to Jacob Bernoulli’s “The Art of Conjecturing” of 1713 (which introduced probability theory) and James Clerk Maxwell’s 1868 paper “On Governors” (an early gesture at cybernetics) until we arrive at our current era of massive computation and ever-more complex model building.

It’s here that interesting questions start to surface. To forecast the behaviour of complex systems, especially those which contain a human component, many current researchers reach for something called “agent-based modeling” (ABM) in which discrete autonomous agents interact with each other and with their common (digitally modelled) environment.
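In skeleton form an ABM needs only a population of agents, a shared environment and a local rule. The Schelling-style toy below is my sketch, not one of the atlas's models: agents with a mild preference for like neighbours produce starkly clustered neighbourhoods:

```python
import random

random.seed(1)
cells = [random.choice("AB.") for _ in range(60)]   # two agent types, plus gaps

def unhappy(i):
    """An agent wants at least one neighbour of its own type."""
    nbrs = [cells[j] for j in (i - 1, i + 1) if 0 <= j < len(cells)]
    return cells[i] != "." and cells[i] not in nbrs

for _ in range(200):                # each round, one unhappy agent relocates
    movers = [i for i, _ in enumerate(cells) if unhappy(i)]
    gaps = [i for i, c in enumerate(cells) if c == "."]
    if not movers or not gaps:
        break
    i, j = random.choice(movers), random.choice(gaps)
    cells[j], cells[i] = cells[i], "."

print("".join(cells))               # clusters of As and Bs, from a purely local rule
```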

Heady stuff, no doubt. But, says Börner, “ABMs in general have very few analytical tools by which they can be studied, and often no backward sensitivity analysis can be performed because of the large number of parameters and dynamical rules involved.”

In other words, an ABM model offers the researcher an exquisitely detailed forecast, but no clear way of knowing why the model has drawn the conclusions it has — a risky state of affairs, given that all its data is ultimately provided by eccentric, foible-ridden human beings.

Börner’s sumptuous, detailed book tackles issues of error and bias head-on, but she left me tugging at a still bigger problem, represented by those irate protest signs smothering my neighbourhood.

If, over 50 years since the maths was published, reasonably wealthy, mostly well-educated people in comfortable surroundings have remained ignorant of how traffic flows work, what are the chances that the rest of us, industrious and preoccupied as we are, will ever really understand, or trust, all the many other models which increasingly dictate our civic life?

Börner argues that modelling data can counteract misinformation, tribalism, authoritarianism, demonization, and magical thinking.

I can’t for the life of me see how. Albert Einstein said, “Everything should be made as simple as possible, but no simpler.” What happens when a model reaches such complexity that only an expert can really understand it, or when even the expert can’t be entirely sure why the forecast is saying what it’s saying?

We have enough difficulty understanding climate forecasts, let alone explaining them. To apply these technologies to the civic realm raises a host of problems that have nothing to do with the technology, and everything to do with whether anyone will be listening.

Sod provenance

Is the digital revolution that Pixar began with Toy Story stifling art – or saving it? An article for the Telegraph, 24 July 2021

In 2011 the Westfield shopping mall in Stratford, East London, acquired a new public artwork: a digital waterfall by the Shoreditch-based Jason Bruges Studio. The liquid-crystal facets of the 12-metre-high sculpture form a subtle semi-random flickering display, as though water were pouring down its sides. Depending on the shopper’s mood, this either slakes their visual appetite, or leaves them gasping for a glimpse of real rocks, real water, real life.

Over its ten-year life, Bruges’s piece has gone from being a comment about natural processes (so soothing, so various, so predictable!) to being a comment about digital images, a nagging reminder that underneath the apparent smoothness of our media lurks the jagged line and the stair-stepped edge, the grid, the square: the pixel, in other words.

We suspect that the digital world is grainier than the real, coarser, more constricted, and stubbornly rectilinear. But this is a prejudice, and one that’s neatly punctured by a new book by electrical engineer and Pixar co-founder Alvy Ray Smith, “A Biography of the Pixel”. This eccentric work traces the intellectual genealogy of Toy Story (Pixar’s first feature-length computer animation in 1995) over bump-maps and around occlusions, along traced rays and through endless samples, computations and transformations, back to the mathematics of the eighteenth century.

Smith’s Whig history is a little hard to take — as though, say, Joseph Fourier’s efforts in 1822 to visualise how heat passed through solids were merely a way-station en route to Buzz Lightyear’s calamitous launch from the banister rail — but it’s a superb short-hand in which to explain the science.

We can use Fourier’s mathematics to record an image as a series of waves. (Visual patterns, patterns of light and shade and movement, “can be represented by the voltage patterns in a machine,” Smith explains.) And we can recreate these waves, and the image they represent, with perfect fidelity, so long as we have a record of the points at the crests and troughs of each wave.

The locations of these high- and low-points, recorded as numerical coordinates, are pixels. (The little dots you see if you stare far too closely at your computer screen are not pixels; strictly speaking, they’re “display elements”.)

Digital media do not cut up the world into little squares. (Only crappy screens do that.) They don’t paint by numbers. On the contrary, they faithfully mimic patterns in the real world.
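The mathematics Smith is leaning on is the Nyquist-Shannon sampling theorem: sample a band-limited wave at least twice per cycle of its highest frequency and you can rebuild it exactly, everywhere, not just at the sample points. A minimal sketch with invented numbers:

```python
import numpy as np

f_sig = 3.0                     # the wave's frequency (its highest component), Hz
fs = 8.0                        # sampling rate: comfortably above 2 * f_sig
n = np.arange(-40, 41)          # sample indices: the stored "pixels"
samples = np.sin(2 * np.pi * f_sig * n / fs)

def reconstruct(t):
    """Whittaker-Shannon interpolation: a sum of shifted sinc functions."""
    return np.sum(samples * np.sinc(fs * t - n))

t = 0.123                       # an arbitrary instant between samples
print(reconstruct(t), np.sin(2 * np.pi * f_sig * t))   # near-identical values
```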

This leads Smith to his wonderfully upside-down-sounding catch-line: “Reality,” he says, “is just a convenient measure of complexity.”

Once pixels are converted to images on a screen, they can be used to create any world, rooted in any geometry, and obeying any physics. And yet these possibilities remain largely unexplored. Almost every computer animation is shot through a fictitious “camera lens”, faithfully recording a Euclidean landscape. Why are digital animations so conservative?

I think this is the wrong question: its assumptions are faulty. I think the ability to ape reality at such high fidelity creates compelling and radical possibilities of its own.

I discussed some of these possibilities with Paul Franklin, co-founder of the SFX company DNEG, who won Oscars for his work on Christopher Nolan’s sci-fi blockbusters Inception (2010) and Interstellar (2014). Franklin says the digital technologies appearing on film sets in the past decade — from lighter cameras and cooler lights to 3-D printed props and LED front-projection screens — are positively disrupting the way films are made. They are making film sets creative spaces once again, and giving the director and camera crew more opportunities for on-the-fly creative decision-making. “We used a front-projection screen on the film Interstellar, so the actors could see what visual effects they were supposed to be responding to,” he remembers. “The actors loved being able to see the super-massive black hole they were supposed to be hurtling towards. Then we realised that we could capture an image of the rotating black hole’s disc reflecting in Matthew McConaughey’s helmet: now that’s not the sort of shot you plan.”

Now those projection screens are interactive. Franklin explains: “Say I’m looking down a big corridor. As I move the camera across the screen, instead of it flattening off and giving away the fact that it’s actually just a scenic backing, the corridor moves with the correct perspective, creating the illusion of a huge volume of space beyond the screen itself.”

Effects can be added to a shot in real time, and in full view of cast and crew. More to the point, what the director sees through their viewfinder is what the audience gets. This encourages the sort of disciplined and creative filmmaking Méliès and Chaplin would recognise, and spells an end to the deplorable industry habit of kicking important creative decisions into the long grass of post-production.

What’s taking shape here isn’t a “good enough for TV” reality. This is a “good enough to reveal truths” reality. (Gargantua, the spinning black hole at Interstellar’s climax, was calculated and rendered so meticulously, it ended up in a paper for the journal Classical and Quantum Gravity.) In some settings, digital facsimile is becoming, literally, a replacement reality.

In 2012 the EU High Representative Baroness Ashton gave a physical facsimile of the burial chamber of Tutankhamun to the people of Egypt. The digital studio responsible for its creation, Factum Foundation, has been working in the Valley of the Kings since 2001, creating ever-more faithful copies of places that were never meant to be visited. They also print paintings (by Velázquez, by Murillo, by Raphael…) that are indistinguishable from the originals.

From the perspective of this burgeoning replacement reality, much that is currently considered radical in the art world appears no more than a frantic shoring-up of old ideas and exhausted values. A couple of days ago Damien Hirst launched The Currency, a set of physical dot paintings whose digitally tokenised images can be purchased, traded, and exchanged for the real paintings.

Eventually the purchaser has to choose whether to retain the token, or trade it in for the physical picture. They can’t own both. This, says Hirst, is supposed to challenge the concept of value through money and art. Every participant is confronted with their perception of value, and how it influences their decision.

But hang on: doesn’t money already do this? Isn’t this what money actually is?

It can be no accident that non-fungible tokens (NFTs), which make bits of the internet ownable, have emerged even as the same digital technologies are actually erasing the value of provenance in the real world. There is nothing sillier, or more dated looking, than the Neues Museum’s scan of its iconic bust of Nefertiti, released free to the public after a complex three-year legal battle. It comes complete with a copyright license in the bottom of the bust itself — a copyright claim to the scan of a 3,000-year-old sculpture created 3,000 miles away.

Digital technologies will not destroy art, but they will erode and ultimately extinguish the value of an artwork’s physical provenance. Once facsimiles become indistinguishable from originals, then originals will be considered mere “first editions”.

Of course literature has thrived for many centuries in such an environment; why should the same environment damage art? That would happen only if art had somehow already been reduced to a mere vehicle for financial speculation. As if!


How many holes has a straw?

Reading Jordan Ellenberg’s Shape for the Telegraph, 7 July 2021

“One can’t help feeling that, in those opening years of the 1900s, something was in the air,” writes mathematician Jordan Ellenberg.

It’s page 90, and he’s launching into the second act of his dramatic, complex history of geometry (think “History of the World in 100 Shapes”, some of them very screwy indeed).
For page after reassuring page, we’ve been introduced to symmetry, to topology, and to the kinds of notation that make sense of knotty-sounding questions like “how many holes has a straw?”

Now, though, the gloves are off, as Ellenberg records the fin de siècle’s “painful recognition of some unavoidable bubbling randomness at the very bottom of things.”
Normally when sentiments of this sort are trotted out, they’re there to introduce readers to the wild world of quantum mechanics (and, incidentally, we can expect a lot of that sort of thing in the next few years: there’s a centenary looming). Quantum’s got such a grip on our imagination, we tend to forget that it was the johnny-come-lately icing on an already fairly indigestible cake.

A good twenty years before physical reality was shown to be unreliable at small scales, mathematicians were pretzeling our very ideas of space. They had no choice: at the Louisiana Purchase Exposition in 1904, Henri Poincaré, by then the world’s most famous geometer, described how he was trying to keep reality stuck together in light of Maxwell’s famous equations of electromagnetism (Maxwell’s work absolutely refused to play nicely with space). In that talk, he came startlingly close to gazumping Einstein to a theory of relativity.
Also at the same exposition was Sir Ronald Ross, who had discovered that malaria was carried by the bite of the anopheles mosquito. He baffled and disappointed many with his presentation of an entirely mathematical model of disease transmission — the one we use today to predict, well, just about everything, from pandemics to political elections.
It’s hard to imagine two mathematical talks less alike than those of Poincaré and Ross. And yet they had something vital in common: both shook their audiences out of mere three-dimensional thinking.

And thank goodness for it: Ellenberg takes time to explain just how restrictive Euclidean thinking is. For Euclid, the first geometer, living in the 4th century BC, everything was geometry. When he multiplied two numbers, he thought of the result as the area of a rectangle. When he multiplied three numbers, he called the result a “solid”. Euclid’s geometric imagination gave us number theory; but tying mathematical values to physical experience locked him out of more or less everything else. Multiplying four numbers? Now how are you supposed to imagine that in three-dimensional space?

For the longest time, geometry seemed exhausted: a mental gym; sometimes a branch of rhetoric. (There’s a reason Lincoln’s Gettysburg Address characterises the United States as “dedicated to the proposition that all men are created equal”. A proposition is a Euclidean term, meaning a fact that follows logically from self-evident axioms.)

The more dimensions you add, however, the more capable and surprising geometry becomes. And this, thanks to runaway advances in our calculating ability, is why geometry has become our go-to manner of explanation for, well, everything. For games, for example: and extrapolating from games, for the sorts of algorithmic processes we saddle with that profoundly unhelpful label “artificial intelligence” (“artificial alternatives to intelligence” would be better).

All game-playing machines (from the chess player on my phone to DeepMind’s AlphaGo) share the same ghost, the “Markov chain”, formulated by Andrei Markov to map the probabilistic landscape generated by sequences of likely choices. An atheist before the Russian revolution, and treated with predictable shoddiness after it, Markov used his eponymous chain, rhetorically, to strangle religiose notions of free will in their cradle.
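Markov's own first demonstration was literary: he tallied vowel-to-consonant transitions in Eugene Onegin and showed that each letter's class depends only on its predecessor. A minimal sketch; the transition probabilities below are illustrative, not Markov's published counts:

```python
import random

random.seed(0)
P = {                            # P[state][next_state] = transition probability
    "vowel":     {"vowel": 0.13, "consonant": 0.87},
    "consonant": {"vowel": 0.66, "consonant": 0.34},
}

def walk(state, steps):
    """Each step depends only on the current state: the Markov property."""
    seq = [state]
    for _ in range(steps):
        r, acc = random.random(), 0.0
        for nxt, p in P[state].items():
            acc += p
            if r < acc:
                state = nxt
                break
        seq.append(state)
    return seq

seq = walk("vowel", 10_000)
print(seq.count("vowel") / len(seq))   # settles near a fixed long-run share
```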

From isosceles triangles to free will is quite a leap, and by now you will surely have gathered that Shape is anything but a straight story. That’s the thing about mathematics: it does not advance; it proliferates. It’s the intellectual equivalent of Stephen Leacock’s Lord Ronald, who “flung himself upon his horse and rode madly off in all directions”.

Containing multitudes as he must, Ellenberg’s eyes grow wider and wider, his prose more and more energetic, as he moves from what geometry means to what geometry does in the modern world.

I mean no complaint (quite the contrary, actually) when I say that, by about two-thirds of the way in, Ellenberg comes to resemble his friend John Horton Conway. Of this game-playing, toy-building celebrity of the maths world, who died from COVID last year, Ellenberg writes, “He wasn’t being wilfully difficult; it was just the way his mind worked, more associative than deductive. You asked him something and he told you what your question reminded him of.”
This is why Ellenberg took the trouble to draw out a mind map at the start of his book. This and the index offer the interested reader (and who could possibly be left indifferent?) a whole new way (“more associative than deductive”) of re-reading the book. And believe me, you will want to. Writing with passion for a nonmathematical audience, Ellenberg is a popular educator at the top of his game.

Nothing happens without a reason

Reading Journey to the Edge of Reason: The Life of Kurt Gödel by Stephen Budiansky for the Spectator, 29 May 2021

The 20th-century Austrian mathematician Kurt Gödel did his level best to live in the world as his philosophical hero Gottfried Wilhelm Leibniz imagined it: a place of pre-established harmony, whose patterns are accessible to reason.

It’s an optimistic world, and a theological one: a universe presided over by a God who does not play dice. It’s most decidedly not a 20th-century world, but “in any case”, as Gödel himself once commented, “there is no reason to trust blindly in the spirit of the time.”

His fellow mathematician Paul Erdős was appalled: “You became a mathematician so that people should study you,” he complained, “not that you should study Leibniz.” But Gödel always did prefer study to self-expression, and this is chiefly why we know so little about him, and why the spectacular deterioration of his final years — a phantasmagoric tale of imagined conspiracies, strange vapours and shadowy intruders, ending in his self-starvation in 1978 — has come to stand for the whole of his life.

“Nothing, Gödel believed, happened without a reason,” says Stephen Budiansky. “It was at once an affirmation of ultrarationalism, and a recipe for utter paranoia.”

You need hindsight to see the paranoia waiting to pounce. But the ultrarationalism — that was always tripping him up. There was something worryingly non-stick about him. He didn’t so much resist the spirit of the time as blunder about totally oblivious of it. He barely noticed the Anschluss, barely escaped Vienna as the Nazis assumed control, and, once ensconced at the Institute for Advanced Study at Princeton, barely credited that tragedy was even possible, or that, say, a friend might die in a concentration camp (it took three letters for his mother to convince him).

Many believed that he’d blundered, in a way typical to him, into marriage with his life-long partner, a foot-care specialist and divorcée called Adele Nimbursky. Perhaps he did. But Budiansky does a spirited job of defending this “uneducated but determined” woman against the sneers of snobs. If anyone kept Gödel rooted to the facts of living, it was Adele. She once stuck a concrete flamingo, painted pink and black, in a flower bed right outside his study window. All evidence suggests he adored it.

Idealistic and dysfunctional, Gödel became, in mathematician Jordan Ellenberg’s phrase, “the romantic’s favourite mathematician”, a reputation cemented by the fact that we knew hardly anything about him. Key personal correspondence was destroyed at his death, while his journals and notebooks — written in Gabelsberger script, a German shorthand that had fallen into disuse by the mid-1920s — resisted all comers until Cheryl Dawson, wife of the man tasked with sorting through Gödel’s mountain of posthumous papers, learned how to transcribe it all.

Biographer Stephen Budiansky is the first to try to give this pile of new information a human shape, and my guess is it hasn’t been easy.

Budiansky handles the mathematics very well, capturing the air of scientific optimism that held sway over intellectual Vienna and induced Germany’s leading mathematician David Hilbert to declare that “in mathematics there is nothing unknowable!”

Solving Hilbert’s four “Problems of Laying Foundations for Mathematics” of 1928 was supposed to secure the foundations of mathematics for good, and Gödel, a 22-year-old former physics student, solved one of them. Unfortunately for Hilbert and his disciples, however, Gödel also proved the insolubility of the other three. So much for the idea that all mathematics could be derived from the propositions of logic: Gödel demonstrated that logic itself was flawed.

This discovery didn’t worry Gödel nearly so much as it did his contemporaries. For Gödel, as Budiansky explains, mathematical objects and a priori truth were “as real to him as anything the senses could directly perceive”. If our reason failed, well, that was no reason to throw away the world: we would always be able to recognise some truths through intuition that could never be established through computation. That, for Gödel, was the whole point of being human.

It’s one thing to be a Platonist in a world dead set against Platonism, or an idealist in a world that’s gone all-in with materialism. It’s quite another to see acts of sabotage in the errors of TV listings magazines, or political conspiracy in the suicide of King Ludwig II of Bavaria. The Elysian calm and concentration afforded Gödel after the second world war at the Institute for Advanced Study probably did him more harm than good. “Gödel is too alone,” his friend Oskar Morgenstern fretted: “he should be given teaching duties; at least an hour a week.”

In the end, though, neither his friendships nor his marriage nor that ridiculous flamingo could tether to the Earth a man who had always preferred to write for his desk drawer, and Budiansky, for all his tremendous efforts and exhaustive interrogations of Gödel’s times and places, acquaintances and offices, can only leave us, at the end, with an immeasurably enriched version of Gödel the wise child. It’s an undeniably distracting and reductive picture. But — and this is the trouble — it’s not wrong.