A cherry is a cherry is a cherry

Life is Simple: How Occam’s Razor Sets Science Free and Shapes the Universe
by Johnjoe McFadden, reviewed for the Spectator, 28 August 2021

Astonishing, where an idea can lead you. You start with something that, 800 years hence, will sound like it’s being taught at kindergarten: Fathers are fathers, not because they are filled with some “essence of fatherhood”, but because they have children.

Fast forward a few years, and the Pope is trying to have you killed.

Not only have you run roughshod over his beloved eucharist (justified, till then, by some very dodgy Aristotelian logic-chopping); you’re also saying there’s no “essence of kinghood”, neither. If kings are only kings because they have subjects, then, said William of Occam, “power should not be entrusted to anyone without the consent of all”. Heady stuff for 1334.

How this progression of thought birthed the very idea of modern science is the subject of what may be the most sheerly enjoyable history of science of recent years.

William was born around 1288 in the little town of Ockham in Surrey. He was probably an orphan; at any rate he was given to the Franciscan order around the age of eleven. He shone at Greyfriars in London, and around 1310 was dispatched to Oxford’s newfangled university.

All manner of intellectual, theological and political shenanigans followed, mostly to do with William’s efforts to demolish almost the entire edifice of medieval philosophy.

It needed demolishing, and that’s because it still held to Aristotle’s ideas about what an object is. Aristotle wondered how single objects and multiples can co-exist. His solution: categorise everything. A cherry is a cherry is a cherry, and all cherries have cherryness in common. That shared cherryness is a “universal”; the properties that might distinguish one cherry from another are “accidental”.

The trouble with Aristotle’s universals, though, is that they assume a one-to-one correspondence between word and thing, and posit a universe made up of a terrifying number of unique things — at least one for each noun or verb in the language.

And the problem with that is that it’s an engine for making mistakes.

Medieval philosophy relied largely on syllogistic reasoning, juggling things into logical-looking relations. “Socrates is a man, all men are mortal, so Socrates is mortal.”

So he is, but — and this is crucial — this conclusion is arrived at more by luck than good judgement. The statement isn’t “true” in any sense; it’s merely internally consistent.

Imagine we make a mistake. Imagine we spring from a society where beards are pretty much de rigueur (classical Athens, say, or Farringdon Road). What if we said, “Socrates is a man, all men have beards, therefore Socrates has a beard”?

Though one of its premises is wrong, the statement barrels ahead regardless; it’s internally consistent, and so, if you’re not paying attention, it creates the appearance of truth.

But there’s worse: the argument that gives Socrates a beard might actually be true. Some men do have beards. Socrates may be one of them. And if he is, that beard seems — again, if you’re not paying attention — to confirm a false assertion.

William of Occam understood that our relationship with the world is a lot looser, cloudier, and more indeterminate than syllogistic logic allows. That’s why, when a tavern owner hangs a barrel hoop outside his house, passing travellers know they can stop there for a drink. The moment words are decoupled from things, then they act as signs, negotiating flexibly with a world of blooming, buzzing confusion.

Once we take this idea to heart, then very quickly — and as a matter of taste more than anything — we discover how much more powerful straightforward explanations are than complicated ones. Occam came up with a number of versions of what even then was not an entirely new idea: “It is futile to do with more what can be done with less,” he once remarked. Subsequent formulations do little but gild this lily.

His idea proved so powerful that, three centuries later, the theologian Libert Froidmont coined the term “Occam’s razor” to describe how we arrive at good explanations by shaving away excess complexity. As McFadden shows, that razor’s still doing useful work.

Life is Simple is primarily a history of science, tracing William’s dangerous idea through astronomy, cosmology, physics and biology, from Copernicus to Brahe, Kepler to Newton, Darwin to Mendel, Einstein to Noether to Weyl. But McFadden never loses sight of William’s staggering, in some ways deplorable influence over the human psyche as a whole. For if words are independent of things, how do we know what’s true?

Thanks to William of Occam, we don’t. The universe, after Occam, is unknowable. Yes, we can come up with explanations of things, and test them against observation and experience; but from here on in, our only test of truth will be utility. Ptolemy’s 2nd-century Almagest, a truly florid description of the motions of the stars and planetary paths, is not and never will be *wrong*; the worst we can say is that it’s overcomplicated.

In the Coen brothers’ movie The Big Lebowski, an exasperated Dude turns on his friend: “You’re not *wrong*, Walter,” he cries, “you’re just an asshole.” William of Occam is our universal Walter, and the first prophet of our disenchantment. He’s the friend we wish we’d never listened to, when he told us Father Christmas was not real.

The Art of Conjecturing

Reading Katy Börner’s Atlas of Forecasts: Modeling and mapping desirable futures for New Scientist, 18 August 2021

My leafy, fairly affluent corner of south London has a traffic congestion problem, and to solve it, there’s a plan to close certain roads. You can imagine the furore: the trunk of every kerbside tree sports a protest sign. How can shutting off roads improve traffic flows?

The German mathematician Dietrich Braess answered this one back in 1968, with a graph that kept track of travel times and densities for each road link, and distinguished between flows that are optimal for all cars, and flows optimised for each individual car.

On a Paradox of Traffic Planning is a fine example of how a mathematical model predicts and resolves a real-world problem.
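Braess’s result needs almost no machinery to see. Below is a minimal sketch, in Python, of the standard textbook version of his network; the figures (4,000 drivers, the link costs) are the usual illustrative assumptions, not numbers taken from his paper or from Börner’s atlas. Adding a free short-cut lengthens every selfish driver’s journey, which is why closing a road can genuinely speed things up.

```python
# A minimal sketch of Braess's paradox on the standard textbook network.
# (Illustrative numbers only; an assumption of mine, not Braess's or Borner's.)
#
# 4000 drivers travel from Start to End.
#   Route 1: Start -> A -> End   (Start->A takes n/100 min, A->End takes 45 min)
#   Route 2: Start -> B -> End   (Start->B takes 45 min,   B->End takes n/100 min)
# where n is the number of cars on the congestible link.

DRIVERS = 4000

def time_without_shortcut():
    """User equilibrium on the original network: traffic splits evenly."""
    n = DRIVERS / 2
    return n / 100 + 45                   # 20 + 45 = 65 minutes per driver

def time_with_shortcut():
    """Add a zero-cost link A -> B. Every selfish driver now prefers
    Start -> A -> B -> End, so all 4000 cars use both congestible links."""
    n = DRIVERS
    return n / 100 + 0 + n / 100          # 40 + 0 + 40 = 80 minutes per driver

if __name__ == "__main__":
    print("Journey time without the extra road:", time_without_shortcut(), "min")
    print("Journey time with the extra road:   ", time_with_shortcut(), "min")
```

The 65-versus-80-minute gap is the whole paradox: the network with fewer roads serves every driver better, which is the argument hiding behind those contested road closures.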

This and over 1,300 other models, maps and forecasts feature in the references to Katy Börner’s latest atlas, which is the third to be derived from Indiana University’s traveling exhibit Places & Spaces: Mapping Science.

Atlas of Science: Visualizing What We Know (2010) revealed the power of maps in science; Atlas of Knowledge: Anyone Can Map (2015) focused on visualisation. In her third and final foray, Börner is out to show how models, maps and forecasts inform decision-making in education, science, technology, and policymaking. It’s a well-structured, heavyweight argument, supported by descriptions of over 300 model applications.

Some entries, like Bernard H. Porter’s Map of Physics of 1939, earn their place purely for their beauty and the insights they offer. Mostly, though, Börner chooses models that were applied in practice and made a positive difference.

Her historical range is impressive. We begin at equations (did you know Newton’s law of universal gravitation has been applied to human migration patterns and international trade?) and move through the centuries, tipping a wink to Jacob Bernoulli’s “The Art of Conjecturing” of 1713 (which introduced probability theory) and James Clerk Maxwell’s 1868 paper “On Governors” (an early gesture at cybernetics) until we arrive at our current era of massive computation and ever-more complex model building.

It’s here that interesting questions start to surface. To forecast the behaviour of complex systems, especially those which contain a human component, many current researchers reach for something called “agent-based modeling” (ABM) in which discrete autonomous agents interact with each other and with their common (digitally modelled) environment.

Heady stuff, no doubt. But, says Börner, “ABMs in general have very few analytical tools by which they can be studied, and often no backward sensitivity analysis can be performed because of the large number of parameters and dynamical rules involved.”

In other words, an ABM offers the researcher an exquisitely detailed forecast, but no clear way of knowing why the model has drawn the conclusions it has — a risky state of affairs, given that all its data is ultimately provided by eccentric, foible-ridden human beings.
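To make that concrete, here is a minimal agent-based model, a toy commuter example of my own invention rather than one drawn from Börner’s atlas. Even at this scale the outcome hinges on arbitrary constants (a switching probability, two congestion coefficients), which is precisely the sensitivity problem Börner flags.

```python
import random

# A toy agent-based model (an invented illustration, not from Borner's atlas):
# each agent picks one of two roads every "day"; a road's travel time grows with
# the number of agents using it; some agents switch when the other road was
# faster the day before. The congestion pattern emerges from these local rules.

N_AGENTS, DAYS, SWITCH_PROB = 1000, 30, 0.1

class Commuter:
    def __init__(self):
        self.road = random.choice("AB")

    def maybe_switch(self, times):
        other = "B" if self.road == "A" else "A"
        # Agents respond sluggishly: only a fraction switch even when it pays to.
        if times[other] < times[self.road] and random.random() < SWITCH_PROB:
            self.road = other

agents = [Commuter() for _ in range(N_AGENTS)]
for day in range(DAYS):
    load = {"A": 0, "B": 0}
    for a in agents:
        load[a.road] += 1
    # Travel time rises with congestion; the coefficients are arbitrary choices.
    times = {"A": 10 + load["A"] / 50, "B": 15 + load["B"] / 100}
    for a in agents:
        a.maybe_switch(times)
    print(f"day {day:2d}  A: {load['A']:4d} cars ({times['A']:.1f} min)  "
          f"B: {load['B']:4d} cars ({times['B']:.1f} min)")
```

Change the switching probability or the congestion coefficients and the day-by-day traffic evolves quite differently; now imagine untangling the influence of thousands of such parameters in a serious model.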

Börner’s sumptuous, detailed book tackles issues of error and bias head-on, but she left me tugging at a still bigger problem, represented by those irate protest signs smothering my neighbourhood.

If, over 50 years since the maths was published, reasonably wealthy, mostly well-educated people in comfortable surroundings have remained ignorant of how traffic flows work, what are the chances that the rest of us, industrious and preoccupied as we are, will ever really understand, or trust, all the many other models which increasingly dictate our civic life?

Börner argues that modelling data can counteract misinformation, tribalism, authoritarianism, demonization, and magical thinking.

I can’t for the life of me see how. Albert Einstein said, “Everything should be made as simple as possible, but no simpler.” What happens when a model reaches such complexity that only an expert can really understand it, or when even the expert can’t be entirely sure why the forecast is saying what it’s saying?

We have enough difficulty understanding climate forecasts, let alone explaining them. To apply these technologies to the civic realm raises a host of problems that are nothing to do with the technology, and everything to do with whether anyone will be listening.

The old heave-ho

The Story of Work: A New History of Humankind by Jan Lucassen, reviewed for the Telegraph 14 August 2021

“How,” asks Dutch social historian Jan Lucassen, “could people accept that the work of one person was rewarded less than that of another, that one might even be able to force the other to do certain work?”

The Story of Work is just that: a history of work (paid or otherwise, ritual or for a wage, in the home or out of it) from peasant farming in the first agrarian societies to gig-work in the post-Covid ruins of the high street, and spanning the historical experiences of working people on all five inhabited continents. The writing is, on the whole, much better than the sentence you just read, but no less exhausting. At worst, it put me in mind of the work of English social historian David Kynaston: super-precise prose stitched together to create an unreadably compacted narrative.

For all its abstractions, contractions and signposting, however, The Story of Work is full of colour, surprise and human warmth. What other social history do you know that writes off the Industrial Revolution as a net loss to music? “Just think of the noise from rattling machines that made it impossible to talk,” Lucassen writes, “in contrast to small workplaces or among larger troupes of workers who mollified work in the open air by singing shanties and other work songs.”

For 98 per cent of our species’ history we lived lives of reciprocal altruism in hunting-and-gathering clan groups. With the advent of farming and the formation of the first towns came surpluses and, for the first time, the feasibility of distributing resources unequally.

At first, conspicuous generosity ameliorated the unfairnesses. As the sixteenth-century French judge Étienne de la Boétie wrote: “theatres, games, plays, spectacles, marvellous beasts, medals, tableaux, and other such drugs were for the people of antiquity the allurements of serfdom, the price of their freedom, the tools of tyranny.” (The Story of Work is full of riches of this sort: strip off the narrative, and there’s a cracking miscellany still to enjoy.)

Lucassen diverges from the popular narrative (in which the invention of agriculture is the fount of all our ills) on several points. First, agricultural societies do not inevitably become marketplaces. Bantu-speaking agriculturalists spread across central, eastern and southern Africa between 3500 BCE and 500 CE, while maintaining perfect equality. “Agriculture and egalitarianism are compatible,” says Lucassen.

It’s not the crops, but the livestock, that are to blame for our expulsion from hunter-gatherer Eden. If notions of private property had to arise anywhere, they surely arose, Lucassen argues, among those innocent-looking shepherds and shepherdesses, whose waterholes may have been held in common but whose livestock most certainly were not. Animals were owned by individuals or households, whose success depended on them knowing every single individual in their herd.

Having dispatched the idea that agriculture made markets, Lucassen then demolishes the idea that markets made inequality. Inequality came first. Only a little specialism need arise within a group before some acquire more resources than others. Managing this inequality doesn’t need anything so complex as a market. All it needs is an agreement. Lucassen turns to India, and the social ideologies that gave rise, from about 600 BC, to the Upanishads and the later commentaries on the Vedas: the evolving caste system, he says, is a textbook example of how human suffering can be explained to an entire culture’s satisfaction “without victims or perpetrators being able to or needing to change anything about the situation”.

Markets, by this light, become a way of subverting the iniquitous rhetorics cooked up by rulers and their priests. Why, then, have markets not ushered in a post-political Utopia? The problem is not to do with power. It’s to do with knowledge. Jobs used to be *hard*. They used to be intellectually demanding. Never mind the seven-year apprenticeships of Medieval Europe, what about the jobs a few are still alive to remember? Everything, from chipping slate out of a Welsh quarry to unloading a cargo boat while maintaining its trim, took what seem now to be unfeasible amounts of concentration, experience and skill.

Now, though — and even as they are getting fed rather more, and rather more fairly, than at any other time in world history — the global proletariat are being starved, by automation, of the meaning of their labour. The bloodlessness of this future is not a subject Lucassen spends a great many words on, but it informs his central and abiding worry, which is that slavery — a depressing constant in his deep history of labour — remains a constant threat and a strong future possibility. The logics of a slave economy run frighteningly close to the skin in many cultures: witness the wrinkle in the 13th Amendment of the US constitution that legalises the indentured servitude of (largely black) convicts, or the profits generated for the global garment industry by interned Uighurs in China. Automation, and its ugly sister machine surveillance, seem only to encourage such experiments in carceral capitalism.

But if workers of the world are to unite, around what banner should they gather? Lucassen identifies only two forms of social agreement that have ever reconciled us to the unfair distribution of reward. One is redistributive theocracy. “Think of classical Egypt and the pre-Columbian civilizations,” he writes, “but also of an ‘ideal state’ like the Soviet Union.”

The other is the welfare state. But while theocracies have been sustained for centuries or even millennia, the welfare state, thus far, has a shelf life of only a few decades, and is easily threatened.

Exhausted yet enlightened, any reader reaching the end of Lucassen’s marathon will understand that the problem of work runs far deeper than politics, and that the grail of a fair society will only come nearer if we pay attention to real experiences, and resist the lure of utopias.

“It’s wonderful what a kid can do with an Erector Set”

Reading Across the Airless Wilds by Earl Swift for the Times, 7 August 2021

There’s something about the moon that encourages, not just romance, not just fancy, but also a certain silliness. It was there in spades at the conference organised by the American Rocket Society in Manhattan in 1961. Time Magazine delighted in this “astonishing exhibition of the phony and the competent, the trivial and the magnificent.” (“It’s wonderful what a kid can do with an Erector Set”, one visiting engineer remarked.)

But the designs on show there were hardly any more bizarre than those put forward by the great minds of the era. The German rocket pioneer Hermann Oberth wrote an entire book advocating a moon car that could, if necessary, pogo-stick about the satellite. Howard Seifert, the American Rocket Society’s president, went further, advocating abandoning the car and keeping only the pogo stick; Seifert’s “platform” might not have made it to the top of NASA’s favoured designs for a moon vehicle, but it was taken seriously.

Earl Swift is not above a bit of fun and wonder, but the main job of Across the Airless Wilds (a forbiddingly po-faced title for such an enjoyable book) is to explain how the oddness of the place — barren, airless, and boasting just one-sixth Earth’s gravity — tended to favour some very odd design solutions. True, NASA’s lunar rover, which actually flew on the last three Apollo missions, looks relatively normal, like a car (or at any rate, a go-kart). But this was really to do with weight constraints, budgets and historical accidents; a future in which the moon is explored by pogo-stick is still not quite out of the running.

For all its many rabbit-holes, this is a clear and compelling story about three men: Sam Romano, boss of General Motors’s lunar program, his visionary off-road specialist Mieczyslaw Gregory Bekker (Greg to his American friends) and Greg’s invaluable engineer Ferenc (Frank) Pavlics. These three were toying with the possibility of moon vehicles a full two years before the US boasted any astronauts, and the problems they confronted were not trivial. Until Bekker came along, tyres, wheels and tracks for different surfaces were developed more or less through informed trial and error. It was Bekker who treated off-roading as an intellectual puzzle as rigorous as the effort to establish the relationship between a ship’s hull and water, or a plane’s wing and the air it rides.

Not that rigour could gain much toe-hold in the early days of lunar design, since no-one could be sure what the consistency of the moon’s surface actually was. It was probably no dustier than an Earthbound desert, but there was always the nagging possibility that a spacecraft and its crew, landing on a convenient lunar plain, might vanish into some ghastly talcum quicksand.

On 3 February 1966 the Soviet probe Luna 9 put paid to that idea, settling, firmly and without incident, onto the Ocean of Storms. Though their plans for a manned mission had been abandoned, the Soviets were no bit player. Four years later it was an eight-wheel Soviet robot, Lunokhod 1 (carried there by the Luna 17 spacecraft), that first drove across the moon’s surface. Seven feet long and four feet tall, it upstaged NASA’s rovers nicely, with its months and miles of journey time, 25 soil samples and literally thousands of photographs.

Meanwhile NASA was having to re-imagine its Lunar Roving Vehicle any number of times, as it sought to wring every possible ounce of value from a programme that was being slashed by Congress a good year before Neil Armstrong even set foot on the Moon.

Conceived when it was assumed Apollo would be the first chapter in a long campaign of exploration and settlement, the LRV was being shrunk and squeezed and simplified to fit through an ever-tightening window of opportunity. This is the historical meat of Swift’s book, and he handles the technical, institutional and commercial complexities of the effort with a dramatist’s eye.

Apollo was supposed to pave the way for two-rocket missions. When they vanished from the schedule, the rover’s future hung in doubt. Without a second Saturn to carry cargo, any rover bound for the moon would have to be carried on the same lunar module that carried the crew. No-one knew if this was even possible.

There was, however, one wedge-shaped cavity still free between the descent stage’s legs: an awkward triangle “about the size and shape of a pup tent standing on its end.” So it was that the LRV, which once boasted six wheels and a pressurised cabin, ended up the machine a Brompton folding bike wants to be when it grows up.

Ironically, it was NASA’s dwindling prospects post-Apollo that convinced its managers to origami something into that tiny space, just a shade over seventeen months prior to launch. Why not wring as much value out of Apollo’s last missions as possible?

The result was a triumph, though it maybe didn’t look like one. Its seats were basically deckchairs. It had neither roof nor body. There was no steering wheel, just a T-bar the astronaut leant on. It weighed no more than one fully kitted-out astronaut, and its electric motors ground out just one horsepower. On the flat, it reached barely ten miles an hour.

But it was superbly designed for the moon, where a turn at six miles per hour had it fishtailing like a speedboat, even as it bore more than twice its weight around an area the size of Manhattan.

In a market already oversaturated with books celebrating the 50th anniversary of Apollo in 2019 (many of them very good indeed) Swift finds his niche. He’s not narrow: there’s plenty of familiar context here, including a powerful sketch of the former Nazi rocket scientist Wernher von Braun. He’s not especially folksy, or willfully eccentric: the lunar rover was a key element in the Apollo program, and he wants it taken seriously. Swift finds his place by much more ingenious means — by up-ending the Apollo narrative entirely (he would say he was turning it right-side up) so that every earlier American venture into space was preparation for the last three trips to the moon.

He sets out his stall early, drawing a striking contrast between the travails of Apollo 14 astronauts Alan Shepard Jr and Edgar Mitchell — slogging half a mile up the wall of the wrong crater, dragging a cart — and the vehicular hijinks of Apollo 15’s Dave Scott and Jim Irwin, crossing a mile of hummocky, cratered terrain rimmed on two sides by mountains the size of Everest, to a spectacular gorge, then following its edge to the foot of a huge mountain, then driving up its side.

Detailed, thrilling accounts of the two subsequent Rover-equipped Apollo missions, Apollo 16 in the Descartes highlands and Apollo 17 in the Taurus-Littrow Valley, carry the pointed message that the viewing public began to tune out of Apollo just as the science, the tech, and the adventure were getting started.

Swift conveys the baffling, unreadable lunar landscape very well, but Across the Airless Wilds is above all a human story, and a triumphant one at that, about NASA’s most-loved machine. “Everybody you meet will tell you he worked on the rover,” remarks Eugene Cowart, Boeing’s chief engineer on the project. “You can’t find anybody who didn’t work on this thing.”

Nothing but the truth

Reading The Believer by Ralph Blumenthal for the Times, 24 July 2021

In September 1965 John Fuller, a columnist for the Saturday Review in New York, was criss-crossing Rockingham County in New Hampshire in pursuit of a rash of UFO sightings, when he stumbled upon a darker story — one so unlikely, he didn’t follow it up straight away.

Not far from the local Pease Air Force base, a New Hampshire couple had been abducted and experimented upon by aliens.

Every few years, ever since the end of the Second World War, others had claimed similar experiences. But they were few and scattered, their accounts were incredible and florid, and there was never any corroborating physical evidence for their allegations. It took decades before anyone in academia took an interest in their plight.

In January 1990 the artist Budd Hopkins, whose Intruders Foundation provided support for “experiencers” — alleged victims of alien abduction — was visited by John Edward Mack, head of psychiatry at Harvard’s medical school. Mack’s interest had been piqued by his friend the psychoanalyst Robert Lifton. An old hand at treating severe trauma, particularly among Hiroshima survivors and Vietnam veterans, Lifton found himself stumped when dealing with experiencers: “It wasn’t clear to me or to anybody else exactly what the trauma was.”

Mack was immediately intrigued. Highly strung, narcissistic, psychologically damaged by his mother’s early death, Mack needed a deep intellectual project to hold himself together. He was interested in how perceptions and beliefs about reality shape society. A Prince of Our Disorder, his Pulitzer Prize-winning psychological biography of T E Lawrence, was his most intimate statement on the subject. Work on the psychology of the Cold War had drawn him into anti-nuclear activism, and close association with the International Physicians for the Prevention of Nuclear War, which won a Nobel peace prize in 1985. The institutions he created to explore the frontiers of human experience survive today in the form of the John E. Mack Institute, dedicated “to further[ing] the evolution of the paradigms by which we understand human identity”.

Just as important, though, Mack enjoyed helping people, and he was good at it. In 1964 he had established mental health services in Cambridge, Mass., where hundreds of thousands were without any mental health provision at all. As a practitioner, he had worked particularly with children and adolescents, had treated suicidal patients, and published research on heroin addiction.

Whitley Strieber (whose book Communion, about his own alien abduction, is one of the most disturbing books ever to reach the bestseller lists) observed how Mack approached experiencers: “He very intentionally did not want to look too deeply into the anomalous aspects of the reports,” Strieber writes. “He felt his approach as a physician should be to not look beyond the narrative but to approach it as a source of information about the individual’s state.”

But what was Mack opening himself up to? What to make of all that abuse, pain, paralysis, loss of volition and forced ejaculation? In 1992, at a forum for work-in-progress, Mack explained, “There’s a great deal of curiosity they [the alien abductors] seem to have in staring at us, particularly in sexual situations. Often there are hybrid infants that seem to be the result of alien-human sexual cohabitation.”

Experiencers were traumatised, but not just traumatised. “When I got home,” said one South African experiencer, “it was like the world, all the trees would just go down, and there’d be no air and people would be dying.”

Experiencers reported a pressing, painful awareness of impending environmental catastrophe; also a tremendous sense of empathy, extending across the whole living world. Some felt optimistic, even euphoric: for they were now recruited into a project to save life on Earth, as part, they explained, of the aliens’ breeding programme.

John Mack championed hypnotic regression, as a means of helping his clients discover buried memories. Ralph Blumenthal, a reporter for the New York Times, is careful not to use hindsight to condemn this approach, but as he explains, the satanic abuse scandals that erupted in the 1990s were to reveal just how easily false memories can be implanted, even inadvertently, in people made suggestible by hypnosis.

In May 1994 the Dean of Harvard Medical School appointed a committee of peers to confidentially review Mack’s interactions with experiencers. Mack was exonerated. Still, it was a serious and reputationally damaging shot across the bows, in a field coming to grips with the reality of implanted and false memories.

Passionate, unfaithful, a man for whom life was often “just a series of obligations”, Mack did not so much “go off the deep end” after that as wade, steadily and with determination, into ever deeper water. The saddest passage in Blumenthal’s book describes Mack’s trip in 2004 to Stonehenge in Wiltshire. Surrounded by farm equipment that could easily have been used to create them, Mack absorbs the cosmic energy of crop circles and declares, “There isn’t anybody in the world who’s going to convince me this is manmade.”

Blumenthal steers his narrative deftly between the crashing rocks of breathless credulity on the one hand, and psychoanalytic second-guessing on the other. Drop all mention of the extraterrestrials, and The Believer remains a riveting human document. Mack’s abilities, his brilliance, flaws, hubris, and mania, are anatomised with a painful sensitivity. Readers will close the book wiser than when they opened it, and painfully aware of what they do not and perhaps can never know about Mack, about extraterrestrials, and about the nature of truth.

Mack became a man easy to dismiss. His “experiencers” remain, however, “blurring ontological categories in defiance of all our understandings of how things operate in the world”. Time and again, Blumenthal comes back to this: there’s no pathology to explain them. Not alcoholism. Not mental illness. Not sexual abuse. Not even a desire for attention. Aliens are engaged in a breakneck planet-saving obstetric intervention, involving probes. You may not like it. You may point to the lack of any physical evidence for it. But — and here Blumenthal holds the reader quite firmly and thrillingly to the ontological razor’s edge — you cannot say it’s all in people’s heads. You have no solid reason at all, beyond incredulity, to suppose that abductees are telling you anything other than the truth.

Sod provenance

Is the digital revolution that Pixar began with Toy Story stifling art – or saving it? An article for the Telegraph, 24 July 2021

In 2011 the Westfield shopping mall in Stratford, East London, acquired a new public artwork: a digital waterfall by the Shoreditch-based Jason Bruges Studio. The liquid-crystal facets of the 12-metre-high sculpture form a subtle semi-random flickering display, as though water were pouring down its sides. Depending on the shopper’s mood, this either slakes their visual appetite, or leaves them gasping for a glimpse of real rocks, real water, real life.

Over its ten-year life, Bruges’s piece has gone from being a comment about natural processes (so soothing, so various, so predictable!) to being a comment about digital images, a nagging reminder that underneath the apparent smoothness of our media lurks the jagged line and the stair-stepped edge, the grid, the square: the pixel, in other words.

We suspect that the digital world is grainier than the real, coarser, more constricted, and stubbornly rectilinear. But this is a prejudice, and one that’s neatly punctured by a new book by electrical engineer and Pixar co-founder Alvy Ray Smith, “A Biography of the Pixel”. This eccentric work traces the intellectual genealogy of Toy Story (Pixar’s first feature-length computer animation in 1995) over bump-maps and around occlusions, along traced rays and through endless samples, computations and transformations, back to the mathematics of the eighteenth century.

Smith’s Whig history is a little hard to take — as though, say, Joseph Fourier’s efforts in 1822 to visualise how heat passed through solids were merely a way-station on the road to Buzz Lightyear’s calamitous launch from the banister rail — but it’s a superb shorthand in which to explain the science.

We can use Fourier’s mathematics to record an image as a series of waves. (Visual patterns, patterns of light and shade and movement, “can be represented by the voltage patterns in a machine,” Smith explains.) And we can recreate these waves, and the image they represent, with perfect fidelity, so long as we have a record of the points at the crests and troughs of each wave.

The locations of these high- and low-points, recorded as numerical coordinates, are pixels. (The little dots you see if you stare far too closely at your computer screen are not pixels; strictly speaking, they’re “display elements”.)
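What Smith is paraphrasing is, in effect, the sampling theorem: a band-limited wave is fully determined by regularly spaced sample values, from which the continuous curve can be rebuilt. A minimal sketch (my own toy example with made-up frequencies, not Smith’s) samples a simple wave and reconstructs it by sinc interpolation:

```python
import numpy as np

# A toy illustration of the sampling idea behind pixels (my own example, not
# Smith's): a band-limited wave is fully determined by regularly spaced samples,
# and the continuous curve can be rebuilt from them by sinc interpolation.

f = 3.0          # highest frequency present in the signal (Hz)
fs = 10.0        # sample rate, comfortably above the Nyquist rate of 2 * f
duration = 2.0

t_samples = np.arange(0.0, duration, 1.0 / fs)   # the sample instants
samples = np.sin(2 * np.pi * f * t_samples)      # the recorded values (the "pixels")

def reconstruct(t, t_samples, samples, fs):
    """Rebuild the continuous wave at times t from its samples."""
    return np.array([np.sum(samples * np.sinc(fs * (ti - t_samples))) for ti in t])

t_fine = np.linspace(0.0, duration, 2000)
original = np.sin(2 * np.pi * f * t_fine)
rebuilt = reconstruct(t_fine, t_samples, samples, fs)

# Measure away from the ends of this finite snippet, where truncating the
# (ideally infinite) train of samples introduces edge error.
middle = (t_fine > 0.3) & (t_fine < duration - 0.3)
print("worst reconstruction error mid-window:", np.abs(original - rebuilt)[middle].max())
```

The list of sample values is the only data the program keeps, yet the wave comes back; that, rather than any grid of little squares, is what a pixel records.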

Digital media do not cut up the world into little squares. (Only crappy screens do that.) They don’t paint by numbers. On the contrary, they faithfully mimic patterns in the real world.

This leads Smith to his wonderfully upside-down-sounding catch-line: “Reality,” he says, “is just a convenient measure of complexity.”

Once pixels are converted to images on a screen, they can be used to create any world, rooted in any geometry, and obeying any physics. And yet these possibilities remain largely unexplored. Almost every computer animation is shot through a fictitious “camera lens”, faithfully recording a Euclidean landscape. Why are digital animations so conservative?

I think this is the wrong question: its assumptions are faulty. I think the ability to ape reality at such high fidelity creates compelling and radical possibilities of its own.

I discussed some of these possibilities with Paul Franklin, co-founder of the SFX company DNEG, who won Oscars for his work on Christopher Nolan’s sci-fi blockbusters Interstellar (2014) and Inception (2010). Franklin says the digital technologies appearing on film sets in the past decade — from lighter cameras and cooler lights to 3-D printed props and LED front-projection screens — are positively disrupting the way films are made. They are making film sets creative spaces once again, and giving the director and camera crew more opportunities for on-the-fly creative decision making. “We used a front-projection screen on the film Interstellar, so the actors could see what visual effects they were supposed to be responding to,” he remembers. “The actors loved being able to see the super-massive black hole they were supposed to be hurtling towards. Then we realised that we could capture an image of the rotating black hole’s disc reflecting in Matthew McConaughey’s helmet: now that’s not the sort of shot you plan.”

Now those projection screens are interactive. Franklin explains: “Say I’m looking down a big corridor. As I move the camera across the screen, instead of it flattening off and giving away the fact that it’s actually just a scenic backing, the corridor moves with the correct perspective, creating the illusion of a huge volume of space beyond the screen itself.”

Effects can be added to a shot in real-time, and in full view of cast and crew. More to the point, what the director sees through their viewfinder is what the audience gets. This encourages the sort of disciplined and creative filmmaking Méliès and Chaplin would recognise, and spells an end to the deplorable industry habit of kicking important creative decisions into the long grass of post-production.

What’s taking shape here isn’t a “good enough for TV” reality. This is a “good enough to reveal truths” reality. (Gargantua, the spinning black hole at Interstellar’s climax, was calculated and rendered so meticulously, it ended up in a paper for the journal Classical and Quantum Gravity.) In some settings, digital facsimile is becoming, literally, a replacement reality.

In 2012 the EU High Representative Baroness Ashton gave a physical facsimile of the burial chamber of Tutankhamun to the people of Egypt. The digital studio responsible for its creation, Factum Foundation, has been working in the Valley of the Kings since 2001, creating ever-more faithful copies of places that were never meant to be visited. They also print paintings (by Velázquez, by Murillo, by Raphael…) that are indistinguishable from the originals.

From the perspective of this burgeoning replacement reality, much that is currently considered radical in the art world appears no more than a frantic shoring-up of old ideas and exhausted values. A couple of days ago Damien Hirst launched The Currency, a set of physical dot paintings whose digitally tokenised images can be purchased, traded, and exchanged for the real paintings.

Eventually the purchaser has to choose whether to retain the token, or trade it in for the physical picture. They can’t own both. This, says Hirst, is supposed to challenge the concept of value through money and art. Every participant is confronted with their perception of value, and how it influences their decision.

But hang on: doesn’t money already do this? Isn’t this what money actually is?

It can be no accident that non-fungible tokens (NFTs), which make bits of the internet ownable, have emerged even as the same digital technologies are actually erasing the value of provenance in the real world. There is nothing sillier, or more dated looking, than the Neues Museum’s scan of its iconic bust of Nefertiti, released free to the public after a complex three-year legal battle. It comes complete with a copyright license in the bottom of the bust itself — a copyright claim to the scan of a 3,000-year-old sculpture created 3,000 miles away.

Digital technologies will not destroy art, but they will erode and ultimately extinguish the value of an artwork’s physical provenance. Once facsimiles become indistinguishable from originals, then originals will be considered mere “first editions”.

Of course literature has thrived for many centuries in such an environment; why should the same environment damage art? That would happen only if art had somehow already been reduced to a mere vehicle for financial speculation. As if!

 

Eagle-eyed eagles and blind, breathless fish

Secret Worlds: The extraordinary senses of animals by Martin Stevens, reviewed for New Scientist, 21 July 2021

Echo-locating bats use ultrasound to map their lightless surroundings. The information they gather is fine-grained — they can tell the difference between the wing cases and bodies of a beetle, and the scales of a moth’s wings. The extremely high frequency of ultrasound — far beyond our own ability to hear — generates clearer, less “blurry” sonic images. And we should be jolly glad it lies beyond our hearing, because these creatures are seriously noisy. A single bat, out for lunch, screams at around 140 decibels. Someone shouting a metre away generates only 90.

Since 2013, when his textbook Sensory Ecology, Behaviour, and Evolution was published, Martin Stevens, a professor at Exeter University in the UK, has had it in mind to write a popular version — a book that, while paying its dues to the extraordinary sensory abilities of animals, also has something to say about the evolution and plasticity of the senses, and above all the cost of acquiring them.

“Rather than seeing countless species all around us, each with every single one of their senses being a pinnacle of what is possible,” he writes, “we instead observe that evolution and development has honed those senses that the animal needs most, and scaled back on the others.” For every eagle-eyed, erm, eagle, there is a blind fish.

Stevens presents startling data about the expense involved in sensing the world. A full tenth of the energy used by a blowfly (Calliphora vicina) at rest is used up maintaining its photoreceptors and associated nerve cells.

Stevens also highlights some remarkable cost-saving strategies. The ogre-faced spider from Australia (Deinopis subrufa) has such large, sensitive and expensive-to-maintain eyes that it breaks down photoreceptors and membranes during the day, and regenerates them at night in order to hunt.

Senses are too expensive to stick around when they’re not needed; so they disappear and reappear over evolutionary time. Their genetic mechanisms are surprisingly parsimonious. The same genetic pathways crop up again and again, in quite unrelated species. The same, or similar, mutations have occurred in the Prestin gene in both dolphins and bats, unrelated species that both echolocate: “not surprising,” Stevens observes, “if evolution has limited genetic material to act on in the first place”.

Stevens boils his encyclopedic knowledge down to three animals per chapter, and each chapter focuses on a different sense. This rather mechanistic approach serves him surprisingly well; this is a field full of stories startling enough not to need much window-dressing. While Stevens’s main point is nature’s parsimony, it’s those wonderful extremes that will stick longest in the mind of the casual reader.

There are many examples of familiar senses brought to a rare peak. For example, the whiskers of a harbour seal (Phoca vitulina) help it find a buried flatfish by nothing more than the water flow created by the fish’s breathing.

More arresting still are the chapters devoted to senses wholly unfamiliar to us. Using their infra-red thermal receptors, vampire bats pick out particular blood vessels to bite into. Huge numbers of marine species detect minute amounts of electricity, allowing them to hunt, elude predators, and even to attract mates.

As for the magnetic sense, Stevens reckons “it is no exaggeration to say that understanding how [it] works has been one of the great mysteries in biology.”

There are two major competing theories to explain the magnetic senses, one relating to the presence of crystals in the body that react to magnetic fields, the other to light-dependent chemical processes occurring in the eyes in response to magnetic information. Trust the robin to complicate the picture still further; it seems to boast both systems, one for use in daylight and one for use in the dark!

And what of those satellite images of cows and deer that show herds lining themselves up along lines of magnetic force, their heads invariably pointing to magnetic north?

Some science writers are, if anything, over-keen to entertain. Stevens, by contrast, is the real deal: the unassuming keeper of a cabinet of true wonders.

How many holes has a straw?

Reading Jordan Ellenberg’s Shape for the Telegraph, 7 July 2021

“One can’t help feeling that, in those opening years of the 1900s, something was in the air,” writes mathematician Jordan Ellenberg.

It’s page 90, and he’s launching into the second act of his dramatic, complex history of geometry (think “History of the World in 100 Shapes”, some of them very screwy indeed).
For page after reassuring page, we’ve been introduced to symmetry, to topology, and to the kinds of notation that make sense of knotty-sounding questions like “how many holes has a straw”?

Now, though, the gloves are off, as Ellenberg records the fin de siècle’s “painful recognition of some unavoidable bubbling randomness at the very bottom of things.”
Normally when sentiments of this sort are trotted out, they’re there to introduce readers to the wild world of quantum mechanics (and, incidentally, we can expect a lot of that sort of thing in the next few years: there’s a centenary looming). Quantum’s got such a grip on our imagination, we tend to forget that it was the johnny-come-lately icing on an already fairly indigestible cake.

A good twenty years before physical reality was shown to be unreliable at small scales, mathematicians were pretzeling our very ideas of space. They had no choice: at the Louisiana Purchase Exposition in 1904, Henri Poincaré, by then the world’s most famous geometer, described how he was trying to keep reality stuck together in light of Maxwell’s famous equations of electromagnetism (work that absolutely refused to play nicely with space). In that talk, he came startlingly close to gazumping Einstein to a theory of relativity.
Also at the same exposition was Sir Ronald Ross, who had discovered that malaria was carried by the bite of the anopheles mosquito. He baffled and disappointed many with his presentation of an entirely mathematical model of disease transmission — the one we use today to predict, well, just about everything, from pandemics to political elections.
It’s hard to imagine two mathematical talks less alike than those of Poincaré and Ross. And yet they had something vital in common: both shook their audiences out of mere three-dimensional thinking.

And thank goodness for it: Ellenberg takes time to explain just how restrictive Euclidean thinking is. For Euclid, the first geometer, living in the 4th century BC, everything was geometry. When he multiplied two numbers, he thought of the result as the area of a rectangle. When he multiplied three numbers, he called the result a “solid”. Euclid’s geometric imagination gave us number theory; but tying mathematical values to physical experience locked him out of more or less everything else. Multiplying four numbers? Now how are you supposed to imagine that in three-dimensional space?

For the longest time, geometry seemed exhausted: a mental gym; sometimes a branch of rhetoric. (There’s a reason Lincoln’s Gettysburg Address characterises the United States as “dedicated to the proposition that all men are created equal”. A proposition is a Euclidean term, meaning a fact that follows logically from self-evident axioms.)

The more dimensions you add, however, the more capable and surprising geometry becomes. And this, thanks to runaway advances in our calculating ability, is why geometry has become our go-to manner of explanation for, well, everything. For games, for example: and extrapolating from games, for the sorts of algorithmic processes we saddle with that profoundly unhelpful label “artificial intelligence” (“artificial alternatives to intelligence” would be better).

All game-playing machines (from the chess player on my phone to DeepMind’s AlphaGo) share the same ghost, the “Markov chain”, formulated by Andrei Markov to map the probabilistic landscape generated by sequences of likely choices. An atheist before the Russian revolution, and treated with predictable shoddiness after it, Markov used his eponymous chain, rhetorically, to strangle religiose notions of free will in their cradle.
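A Markov chain is nothing more exotic than a table of probabilities for hopping from one state to the next, with no memory of how the current state was reached. Here is a minimal sketch, a made-up two-state weather example of mine rather than anything from Ellenberg’s book:

```python
import random

# A minimal Markov chain (a made-up two-state weather example): the next state
# depends only on the current one, via a fixed table of transition probabilities.

TRANSITIONS = {
    "sunny": {"sunny": 0.8, "rainy": 0.2},
    "rainy": {"sunny": 0.4, "rainy": 0.6},
}

def step(state):
    """Pick the next state at random according to the transition table."""
    r, cumulative = random.random(), 0.0
    for nxt, p in TRANSITIONS[state].items():
        cumulative += p
        if r < cumulative:
            return nxt
    return nxt  # guard against floating-point rounding

def simulate(start, n):
    state, history = start, [start]
    for _ in range(n):
        state = step(state)
        history.append(state)
    return history

print(" -> ".join(simulate("sunny", 20)))

# Over a long run the chain forgets where it started: with this table the share
# of sunny days settles towards 2/3 whatever the initial state.
long_run = simulate("rainy", 100_000)
print("share of sunny days:", long_run.count("sunny") / len(long_run))
```

Swap the weather states for board positions and the table for move probabilities, and you have the skeleton of the probabilistic landscape those game-playing machines navigate.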

From isosceles triangles to free will is quite a leap, and by now you will surely have gathered that Shape is anything but a straight story. That’s the thing about mathematics: it does not advance; it proliferates. It’s the intellectual equivalent of Stephen Leacock’s Lord Ronald, who “flung himself upon his horse and rode madly off in all directions”.

Containing multitudes as he must, Ellenberg’s eyes grow wider and wider, his prose more and more energetic, as he moves from what geometry means to what geometry does in the modern world.

I mean no complaint (quite the contrary, actually) when I say that, by about two-thirds of the way in, Ellenberg comes to resemble his friend John Horton Conway. Of this game-playing, toy-building celebrity of the maths world, who died from COVID last year, Ellenberg writes, “He wasn’t being wilfully difficult; it was just the way his mind worked, more associative than deductive. You asked him something and he told you what your question reminded him of.”
This is why Ellenberg took the trouble to draw out a mind map at the start of his book. This and the index offer the interested reader (and who could possibly be left indifferent?) a whole new way (“more associative than deductive”) of re-reading the book. And believe me, you will want to. Writing with passion for a nonmathematical audience, Ellenberg is a popular educator at the top of his game.

Just you wait

An essay on the machineries of science-fiction film, originally written for the BFI

Science fiction is about escape, about transcendence, about how, with the judicious application of technology, we might escape the bounds of time, space and the body.
Science fiction is not at all naive, and almost all of it is about why the dream fails: why the machine goes wrong, or works towards an unforeseen (sometimes catastrophic) end. More often than not science fiction enters clad in the motley of costume drama – so polished, so chromed, so complete. But there’s always a twist, a tear, a weak seam.

Science fiction takes what in other movies would be the set dressing, finery from the prop shop, and turns it into something vital: a god, a golem, a puzzle, a prison. In science fiction, it matters where you are, and how you dress, what you walk on and even what you breathe. All this stuff is contingent, you see. It slips about. It bites.

Sometimes, in this game of “It’s behind you!” less is more. Futuristic secret agent Lemmy Caution explores the streets of the distant space city Alphaville (1965) and the strangeness is all in Jean-Luc Godard’s cut, his dialogue, and the sharpest of sharp scripts. Alphaville, you see (only you don’t; you never do) is nothing more than a rhetorical veil cast over contemporary Paris.

More usually, you’ll grab whatever’s to hand: tinsel and Pan Stick and old gorilla costumes. Two years old by 1965, at least by Earth’s reckoning, William Hartnell’s Doctor was tearing up the set, and would, in other bodies and other voices, go on tearing up, tearing down and tearing through his fans’ expectations for the next 24 years, production values be damned. Bigger than its machinery, bigger even than its protagonist, Doctor Who (1963) was, in that first, long outing, never in any sense realistic, and that was its strength. You never knew where you’d end up next: a comedy, a horror flick, a Western-style showdown. The Doctor’s sonic screwdriver was the point: it said, We’re making this up as we go along.

So how did it all get going? Much as every other kind of film drama got going: with a woman in a tight dress. It is 1924: in a constructivist get-up that could spring from no other era, Aelita, Queen of Mars (actress and film director Yuliya Solntseva) peers into a truly otherworldly crystalline telescope and spies Earth, revolution, and Engineer Los. And Los, on being observed, begins to dream of her.

You’d think, from where we are now, deluged in testosterone from franchises like Transformers and Terminators, that such romantic comedy beginnings were an accident of science fiction’s history: a charming one-off. They’re not. They’re systemic. Thea von Harbou wrote novels about to-die-for women and her husband Fritz Lang placed them at the helm of science fiction movies like Metropolis (1927) and Frau im Mond (1929). The following year saw New York given a 1980s makeover in David Butler’s musical comedy Just Imagine. “In 1980 – people have serial numbers, not names,” explained Photoplay; “marriages are all arranged by the courts… Prohibition is still an issue… Men’s clothes have but one pocket. That’s on the hip… but there’s still love! ” (Griffith, 1972) Just Imagine boasted the most intricate setting ever created for a movie. 205 engineers and craftsmen took five months over an Oscar-nominated build costing $168,000. You still think this film is marginal? Just Imagine’s weird guns and weirder spaceships ended up reused in the serial Flash Gordon (1936).

How did we get from musical comedy to Keanu Reeves’s millennial Neo shopping in a virtual firearms mall? Well, by rocket, obviously. Science fiction got going just as our fascination with future machinery overtook our fascination with future fashion. Lang wanted a real rocket launch for the premiere of Frau im Mond and roped in no less a physicist than Hermann Oberth to build it for him. When his 1.8-metre-tall liquid-propellant rocket came to nought, Oberth set about building one eleven metres tall powered by liquid oxygen. They were going to launch it from the roof of the cinema. Luckily they ran out of money.

What hostile critics say is true: for a while, science fiction did become more about the machines than about the people. This was a necessary excursion, and an entertaining one: to explore the technocratic future ushered in by the New York World’s Fair of 1939–1940 and realised, one countdown after another, in the world war and cold war to come. (Science fiction is always, ultimately, about the present.) HG Wells wrote the script for Things to Come (1936). Destination Moon (1950) picked the brains of sf writer Robert Heinlein, who’d spent part of the war designing high-altitude pressure suits, to create a preternaturally accurate forecast of the first manned mission to the moon. George Pal’s Conquest of Space, five years later, based its technology on writings and designs in Collier’s Magazine by former Nazi rocket designer Wernher von Braun. In the same year, episode 20 of the first season of Walt Disney’s Wonderful World of Colour was titled Man in Space and featured narration from von Braun and his close (anti-Nazi) friend and colleague Willy Ley.

Another voice from that show, TV announcer Dick Tufeld, cropped up a few years later as voice of the robot in the hit 1965 series Lost in Space, by which time science fiction could afford to calm down, take in the scenery, and even crack a smile or two. The technocratic ideal might seem sterile now, but its promise was compelling: that we’d all live lives of ease and happiness in space, the Moon or Mars, watched over by loving machines: the Robinson family’s stalwart Robot B–9, perhaps. Once clear of the frontier, there would be few enough places for danger to lurk, though if push came to shove, the Tracy family’s spectacular Thunderbirds (1965) were sure to come and save the day. Star Trek’s pleasant suburban utopias, defended in extremis by phasers that stun more than kill, are made, for all their scale and spread, no more than village neighbourhoods thanks to the magic of personal teleportation, and all are webbed into one gentle polis by tricorders so unbelievably handy and capable, it took our best minds half a century to build them for real.

Once the danger’s over though, and the sirens are silenced – once heaven on earth (and elsewhere) is truly established – then we hit a quite sizeable snag. Gene Roddenberry was right to have pitched Star Trek to Desilu Studios as “Wagon Train to the stars”, for as Dennis Sisterson’s charming silent parody Steam Trek: the Moving Picture (1994) demonstrates, the moment you reach California, the technology that got you there loses its specialness. The day your show’s props become merely props is the day you’re not making science fiction any more. Forget the teleport, that rappelling rope will do. Never mind the scanner: just point.
Realism can only carry you so far. Pavel Klushantsev’s grandiloquent model-making and innovative special effects – effects that Kubrick had to discover for himself over a decade later for 2001: A Space Odyssey (1968) – put children on The Moon (1965) and ballet dancers on satellite TVs (I mean TV sets on board satellites) in Road to the Stars (1957). Such humane and intelligent gestures can only accelerate the exhaustion of “realistic” SF. You feel that exhaustion in 2001: A Space Odyssey. Indeed, the boredom and incipient madness that haunt Keir Dullea and poor, boxed-in HAL on board Discovery One are the film’s chief point: that we cannot live by reason alone. We need something more.

The trouble with Utopias is they stay still, and humanity is nothing if not restless. Two decades earlier, the formal, urban costume stylings of Gattaca (1997) and The Matrix (1999) would have appeared aspirational. In context, they’re a sign of our heroes’ imprisonment in conformist plenty.

What is this “more” we’re after, then, if reason’s not enough? At the very least, a light show. Ideally, redemption. Miracles. Grace. Most big-budget movies cast their alien technology as magic. Forbidden Planet (1956) owes its plot to The Tempest, spellbinding audiences with outscale animations and meticulous, hand-painted fiends from the id. The altogether more friendly water probe in James Cameron’s The Abyss (1989) took hardly less work: eight months’ team effort for 75 seconds of screen time.

Arthur Clarke, co-writer on 2001, once said: “Any sufficiently advanced technology is indistinguishable from magic.” He was half right. What’s missing from his formulation is this: sufficiently advanced technology can also resemble nature – the ordinary weave and heft of life. Andrei Tarkovsky’s Solaris (1972) and Stalker (1979) both conjure up alien presences out of forests and bare plastered rooms. Imagine how advanced their technology must be to look so ordinary!

In Alien (1979) Salvador Dali’s friend H R Giger captured this process, this vanishing into the real, half-done. Where that cadaverous Space Jockey leaves off and its ship begins is anyone’s guess. Shane Carruth’s Upstream Color (2013) adds the dimension of time to this disturbing mix, putting hapless strangers in the way of an alien lifeform that’s having to bolt together its own lifecycle day by day in greenhouses and shack laboratories.

Prometheus (2012), though late to the party, serves as an unlovely emblem to this kind of story. Its pot of black goo is pure Harry Potter: magic in a jar. Once cast upon the waters, though, it’s life itself, in all its guile and terror.

Where we have trouble spotting what’s alive and what’s not – well, that’s the most fertile territory of all. Welcome to Uncanny Valley. Population: virtually everyone in contemporary science fiction cinema. Westworld (1973) and The Stepford Wives (1975) broke the first sod, and their uncanny children have never dropped far from the tree. In the opening credits of a retrodden Battlestar Galactica (2004), Number Six sways into shot, leans over a smitten human, and utters perhaps the most devastating line in all science fiction drama: “Are you alive?” Whatever else Number Six is (actress Tricia Helfer, busting her gut to create the most devastating female robot since Brigitte Helm in Metropolis), alive she most certainly is not.
The filmmaker David Cronenberg is a regular visitor to the Valley. For twenty years, from The Brood (1979) to eXistenZ (1999), he showed us how attempts to regulate the body like a machine, while personalising technology to the point where it is wearable, can only end in elegiac and deeply melancholy body horror. Cronenberg’s visceral set dressings are one of a kind, but his wider, philosophical point crops up everywhere – even in pre-watershed confections like The Six Million Dollar Man (1974–1978) and The Bionic Woman (1976–1978), whose malfunctioning (or hyperfunctioning) bionics repeatedly confronted Steve and Jaime with the need to remember what it is to be human.

Why stay human at all, if technology promises More? In René Laloux’s Fantastic Planet (1973) the gigantic Draags lead abstract and esoteric lives, astrally projecting their consciousnesses onto distant planets to pursue strange nuptials with visiting aliens. In Pi (1998) and Requiem for a Dream (2000), Darren Aronofsky charts the epic comedown of characters who, through the somewhat injudicious application of technology, have glimpsed their own posthuman possibilities.

But this sort of technologically enabled yearning doesn’t have to end badly. There’s bawdy to be had in the miscegenation of the human and the mechanical, as when in Sleeper (1973), Miles Monroe (Woody Allen) wanders haplessly into an orgasmatron, and a 1968-vintage Barbarella (Jane Fonda) causes the evil Dr Durand-Durand’s “Excessive Machine” to explode.
For all the risks, it may be that there’s an accommodation to be made one day between the humans and the machinery. Sam Bell’s mechanical companion in Moon (2009), voiced by Kevin Spacey, may sound like 2001’s malignant HAL, but it proves more than kind in the end. In Spike Jonze’s Her (2013), Theodore’s love for his phone’s new operating system acquires a surprising depth and sincerity – not least since everyone else in the movie seems permanently latched to their smartphone screen.

“… But there’s still love!” cried Photoplay, more than eighty years ago, and Photoplay is always right. It may be that science fiction cinema will rediscover its romantic roots. (Myself, I hope so.) But it may just as easily take some other direction completely. Or disappear as a genre altogether, rather as Tarkovsky’s alien technology has melted into the spoiled landscapes of Stalker. The writer and aviator Antoine de Saint-Exupéry, drunk on his airborne adventures, hit the nail on the head: “The machine does not isolate man from the great problems of nature but plunges him more deeply into them.”

You think everything is science fiction now? Just you wait.