Sod provenance

Is the digital revolution that Pixar began with Toy Story stifling art – or saving it? An article for the Telegraph, 24 July 2021

In 2011 the Westfield shopping mall in Stratford, East London, acquired a new public artwork: a digital waterfall by the Shoreditch-based Jason Bruges Studio. The liquid-crystal facets of the 12-metre-high sculpture form a subtle, semi-random flickering display, as though water were pouring down its sides. Depending on the shopper's mood, this either slakes their visual appetite, or leaves them gasping for a glimpse of real rocks, real water, real life.

Over its ten-year life, Bruges’s piece has gone from being a comment about natural processes (so soothing, so various, so predictable!) to being a comment about digital images, a nagging reminder that underneath the apparent smoothness of our media lurks the jagged line and the stair-stepped edge, the grid, the square: the pixel, in other words.

We suspect that the digital world is grainier than the real, coarser, more constricted, and stubbornly rectilinear. But this is a prejudice, and one that's neatly punctured by a new book by electrical engineer and Pixar co-founder Alvy Ray Smith, "A Biography of the Pixel". This eccentric work traces the intellectual genealogy of Toy Story (Pixar's first feature-length computer animation, released in 1995) over bump-maps and around occlusions, along traced rays and through endless samples, computations and transformations, back to the mathematics of the eighteenth century.

Smith's Whig history is a little hard to take — as though, say, Joseph Fourier's efforts in 1822 to visualise how heat passed through solids were merely a way-station on the road to Buzz Lightyear's calamitous launch from the banister rail — but it's a superb shorthand in which to explain the science.

We can use Fourier’s mathematics to record an image as a series of waves. (Visual patterns, patterns of light and shade and movement, “can be represented by the voltage patterns in a machine,” Smith explains.) And we can recreate these waves, and the image they represent, with perfect fidelity, so long as we have a record of the points at the crests and troughs of each wave.

The locations of these high- and low-points, recorded as numerical coordinates, are pixels. (The little dots you see if you stare far too closely at your computer screen are not pixels; strictly speaking, they’re “display elements”.)
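
To make Smith's point concrete, here is a minimal sketch in Python (my illustration, not anything from Smith's book) of the sampling principle he describes: so long as a wave contains no frequencies above half the sampling rate, the grid of sample values is enough to rebuild it.

import numpy as np

def reconstruct(samples, sample_rate, t):
    """Whittaker-Shannon (sinc) interpolation of evenly spaced samples."""
    n = np.arange(len(samples))
    # Each sample contributes one scaled, shifted sinc function to the rebuilt wave.
    return np.sum(samples[:, None] * np.sinc(sample_rate * t[None, :] - n[:, None]), axis=0)

def signal(t):
    # A toy "wave": two superimposed frequencies, nothing above 3 Hz.
    return np.sin(2 * np.pi * 1.5 * t) + 0.5 * np.cos(2 * np.pi * 3.0 * t)

sample_rate = 8.0                                  # samples per second
duration = 2.0
t_samples = np.arange(0.0, duration, 1.0 / sample_rate)
samples = signal(t_samples)                        # the "pixels": just numbers on a grid

t_fine = np.linspace(0.0, duration, 2000, endpoint=False)
rebuilt = reconstruct(samples, sample_rate, t_fine)

# The highest frequency present (3 Hz) sits below the Nyquist limit (4 Hz), so the
# rebuilt wave tracks the original closely, apart from edge effects of the finite window.
print(np.max(np.abs(rebuilt - signal(t_fine))))

The pixels of Smith's title are exactly those sample values: coordinates of a wave, not little squares.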

Digital media do not cut up the world into little squares. (Only crappy screens do that.) They don't paint by numbers. On the contrary, they faithfully mimic patterns in the real world.

This leads Smith to his wonderfully upside-down-sounding catch-line: "Reality," he says, "is just a convenient measure of complexity."

Once pixels are converted to images on a screen, they can be used to create any world, rooted in any geometry, and obeying any physics. And yet these possibilities remain largely unexplored. Almost every computer animation is shot through a fictitious “camera lens”, faithfully recording a Euclidean landscape. Why are digital animations so conservative?

I think this is the wrong question: its assumptions are faulty. I think the ability to ape reality at such high fidelity creates compelling and radical possibilities of its own.

I discussed some of these possibilities with Paul Franklin, co-founder of the SFX company DNEG, who won Oscars for his work on Christopher Nolan's sci-fi blockbusters Interstellar (2014) and Inception (2010). Franklin says the digital technologies appearing on film sets in the past decade — from lighter cameras and cooler lights to 3-D printed props and LED front-projection screens — are positively disrupting the way films are made. They are making film sets creative spaces once again, and giving the director and camera crew more opportunities for on-the-fly creative decision-making. "We used a front-projection screen on the film Interstellar, so the actors could see what visual effects they were supposed to be responding to," he remembers. "The actors loved being able to see the super-massive black hole they were supposed to be hurtling towards. Then we realised that we could capture an image of the rotating black hole's disc reflecting in Matthew McConaughey's helmet: now that's not the sort of shot you plan."

Now those projection screens are interactive. Franklin explains: "Say I'm looking down a big corridor. As I move the camera across the screen, instead of it flattening off and giving away the fact that it's actually just a scenic backing, the corridor moves with the correct perspective, creating the illusion of a huge volume of space beyond the screen itself."

Effects can be added to a shot in real time, and in full view of cast and crew. More to the point, what the director sees through their viewfinder is what the audience gets. This encourages the sort of disciplined and creative filmmaking Méliès and Chaplin would recognise, and spells an end to the deplorable industry habit of kicking important creative decisions into the long grass of post-production.

What’s taking shape here isn’t a “good enough for TV” reality. This is a “good enough to reveal truths” reality. (Gargantua, the spinning black hole at Interstellar’s climax, was calculated and rendered so meticulously, it ended up in a paper for the journal Classical and Quantum Gravity.) In some settings, digital facsimile is becoming, literally, a replacement reality.

In 2012 the EU High Representative Baroness Ashton gave a physical facsimile of the burial chamber of Tutankhamun to the people of Egypt. The digital studio responsible for its creation, Factum Foundation, has been working in the Valley of the Kings since 2001, creating ever-more faithful copies of places that were never meant to be visited. They also print paintings (by Velázquez, by Murillo, by Raphael…) that are indistinguishable from the originals.

From the perspective of this burgeoning replacement reality, much that is currently considered radical in the art world appears no more than a frantic shoring-up of old ideas and exhausted values. A couple of days ago Damien Hirst launched The Currency, a physical set of dot paintings whose digitally tokenised images can be purchased, traded, and exchanged for the real paintings.

Eventually the purchaser has to choose whether to retain the token, or trade it in for the physical picture. They can’t own both. This, says Hirst, is supposed to challenge the concept of value through money and art. Every participant is confronted with their perception of value, and how it influences their decision.

But hang on: doesn’t money already do this? Isn’t this what money actually is?

It can be no accident that non-fungible tokens (NFTs), which make bits of the internet ownable, have emerged even as the same digital technologies are actually erasing the value of provenance in the real world. There is nothing sillier, or more dated-looking, than the Neues Museum's scan of its iconic bust of Nefertiti, released free to the public after a complex three-year legal battle. It comes complete with a copyright licence inscribed in the bottom of the bust itself — a copyright claim to the scan of a 3,000-year-old sculpture created 3,000 miles away.

Digital technologies will not destroy art, but they will erode and ultimately extinguish the value of an artwork’s physical provenance. Once facsimiles become indistinguishable from originals, then originals will be considered mere “first editions”.

Of course literature has thrived for many centuries in such an environment; why should the same environment damage art? That would happen only if art had somehow already been reduced to a mere vehicle for financial speculation. As if!


Eagle-eyed eagles and blind, breathless fish

Secret Worlds: The extraordinary senses of animals by Martin Stevens, reviewed for New Scientist, 21 July 2021

Echo-locating bats use ultrasound to map their lightless surroundings. The information they gather is fine-grained — they can tell the difference between the wing cases and bodies of a beetle, and the scales of a moth's wings. The extremely high frequency of ultrasound — far beyond our own ability to hear — generates clearer, less "blurry" sonic images. And we should be jolly glad bats use it, because these creatures are seriously noisy. A single bat, out for lunch, screams at around 140 decibels. Someone shouting a metre away generates only 90.

Since 2013, when his textbook Sensory Ecology, Behaviour, and Evolution was published, Martin Stevens, a professor at Exeter University in the UK, has had it in mind to write a popular version — a book that, while paying its dues to the extraordinary sensory abilities of animals, also has something to say about the evolution and plasticity of the senses, and above all the cost of acquiring them.

"Rather than seeing countless species all around us, each with every single one of their senses being a pinnacle of what is possible," he writes, "we instead observe that evolution and development has honed those senses that the animal needs most, and scaled back on the others." For every eagle-eyed, erm, eagle, there is a blind fish.

Stevens presents startling data about the expense involved in sensing the world. A full tenth of the energy a blowfly (Calliphora vicina) consumes at rest goes on maintaining its photoreceptors and associated nerve cells.

Stevens also highlights some remarkable cost-saving strategies. The ogre-faced spider from Australia (Deinopis subrufa) has such large, sensitive and expensive-to-maintain eyes, it breaks down photoreceptors and membranes during the day, and regenerates them at night in order to hunt.

Senses are too expensive to stick around when they're not needed; so they disappear and reappear over evolutionary time. Their genetic mechanisms are surprisingly parsimonious. The same genetic pathways crop up again and again, in quite unrelated species. The same, or similar, mutations have occurred in the Prestin gene in both dolphins and bats, unrelated species that both echolocate: "not surprising," Stevens observes, "if evolution has limited genetic material to act on in the first place".

Stevens boils his encyclopedic knowledge down to three animals per chapter, and each chapter focuses on a different sense. This rather mechanistic approach serves him surprisingly well; this is a field full of stories startling enough not to need much window-dressing. While Stevens’s main point is nature’s parsimony, it’s those wonderful extremes that will stick longest in the mind of the casual reader.

There are many examples of familiar senses brought to a rare peak. For example, the whiskers of a harbour seal (Phoca vitulina) help it find a buried flatfish by nothing more than the water flow created by the fish’s breathing.

More arresting still are the chapters devoted to senses wholly unfamiliar to us. Using their infra-red thermal receptors, vampire bats pick out particular blood vessels to bite into. Huge numbers of marine species detect minute amounts of electricity, allowing them to hunt, elude predators, and even to attract mates.

As for the magnetic sense, Stevens reckons “it is no exaggeration to say that understanding how [it] works has been one of the great mysteries in biology.”

There are two major competing theories to explain the magnetic senses, one relating to the presence of crystals in the body that react to magnetic fields, the other to light-dependent chemical processes occurring in the eyes in response to magnetic information. Trust the robin to complicate the picture still further; it seems to boast both systems, one for use in daylight and one for use in the dark!

And what of those satellite images of cows and deer that show herds lining themselves up along lines of magnetic force, their heads invariably pointing to magnetic north?

Some science writers are, if anything, over-keen to entertain. Stevens, by contrast, is the real deal: the unassuming keeper of a cabinet of true wonders.

How many holes has a straw?

Reading Jordan Ellenberg’s Shape for the Telegraph, 7 July 2021

"One can't help feeling that, in those opening years of the 1900s, something was in the air," writes mathematician Jordan Ellenberg.

It’s page 90, and he’s launching into the second act of his dramatic, complex history of geometry (think “History of the World in 100 Shapes”, some of them very screwy indeed).
For page after reassuring page, we've been introduced to symmetry, to topology, and to the kinds of notation that make sense of knotty-sounding questions like "how many holes has a straw?"

Now, though, the gloves are off, as Ellenberg records the fin de siècle's "painful recognition of some unavoidable bubbling randomness at the very bottom of things."
Normally when sentiments of this sort are trotted out, they’re there to introduce readers to the wild world of quantum mechanics (and, incidentally, we can expect a lot of that sort of thing in the next few years: there’s a centenary looming). Quantum’s got such a grip on our imagination, we tend to forget that it was the johnny-come-lately icing on an already fairly indigestible cake.

A good twenty years before physical reality was shown to be unreliable at small scales, mathematicians were pretzeling our very ideas of space. They had no choice: at the Louisiana Purchase Exposition in 1904, Henri Poincaré, by then the world's most famous geometer, described how he was trying to keep reality stuck together in light of Maxwell's famous equations of electromagnetism (Maxwell's work absolutely refused to play nicely with space). In that talk, he came startlingly close to pipping Einstein to a theory of relativity.
Also at the same exposition was Sir Ronald Ross, who had discovered that malaria was carried by the bite of the Anopheles mosquito. He baffled and disappointed many with his presentation of an entirely mathematical model of disease transmission — the one we use today to predict, well, just about everything, from pandemics to political elections.
It's hard to imagine two mathematical talks less alike than those of Poincaré and Ross. And yet they had something vital in common: both shook their audiences out of mere three-dimensional thinking.

And thank goodness for it: Ellenberg takes time to explain just how restrictive Euclidean thinking is. For Euclid, the first geometer, living in the 4th century BC, everything was geometry. When he multiplied two numbers, he thought of the result as the area of a rectangle. When he multiplied three numbers, he called the result a "solid". Euclid's geometric imagination gave us number theory; but tying mathematical values to physical experience locked him out of more or less everything else. Multiplying four numbers? Now how are you supposed to imagine that in three-dimensional space?

For the longest time, geometry seemed exhausted: a mental gym; sometimes a branch of rhetoric. (There’s a reason Lincoln’s Gettysburg Address characterises the United States as “dedicated to the proposition that all men are created equal”. A proposition is a Euclidean term, meaning a fact that follows logically from self-evident axioms.)

The more dimensions you add, however, the more capable and surprising geometry becomes. And this, thanks to runaway advances in our calculating ability, is why geometry has become our go-to manner of explanation for, well, everything. For games, for example: and extrapolating from games, for the sorts of algorithmic processes we saddle with that profoundly unhelpful label "artificial intelligence" ("artificial alternatives to intelligence" would be better).

All game-playing machines (from the chess player on my phone to DeepMind’s AlphaGo) share the same ghost, the “Markov chain”, formulated by Andrei Markov to map the probabilistic landscape generated by sequences of likely choices. An atheist before the Russian revolution, and treated with predictable shoddiness after it, Markov used his eponymous chain, rhetorically, to strangle religiose notions of free will in their cradle.
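
For the curious, here is a toy sketch in Python (my illustration, not Ellenberg's) of the object in question: a Markov chain is nothing more than a table of transition probabilities, with the next state depending only on the current one.

import random

# Toy transition table: tomorrow's weather depends only on today's.
TRANSITIONS = {
    "sunny": {"sunny": 0.7, "rainy": 0.3},
    "rainy": {"sunny": 0.4, "rainy": 0.6},
}

def walk(start, steps, seed=0):
    """Generate a sequence of states by repeatedly sampling the next state."""
    rng = random.Random(seed)
    state, path = start, [start]
    for _ in range(steps):
        choices, weights = zip(*TRANSITIONS[state].items())
        state = rng.choices(choices, weights=weights)[0]
        path.append(state)
    return path

print(walk("sunny", 10))

Scale the same bookkeeping up from kinds of weather to positions in a game, and you have the probabilistic landscape that Ellenberg says chess and Go programs still map.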

From isosceles triangles to free will is quite a leap, and by now you will surely have gathered that Shape is anything but a straight story. That’s the thing about mathematics: it does not advance; it proliferates. It’s the intellectual equivalent of Stephen Leacock’s Lord Ronald, who “flung himself upon his horse and rode madly off in all directions”.

Containing multitudes as he must, Ellenberg’s eyes grow wider and wider, his prose more and more energetic, as he moves from what geometry means to what geometry does in the modern world.

I mean no complaint (quite the contrary, actually) when I say that, by about two-thirds of the way in, Ellenberg comes to resemble his friend John Horton Conway. Of this game-playing, toy-building celebrity of the maths world, who died from COVID last year, Ellenberg writes, "He wasn't being wilfully difficult; it was just the way his mind worked, more associative than deductive. You asked him something and he told you what your question reminded him of."
This is why Ellenberg took the trouble to draw out a mind map at the start of his book. This and the index offer the interested reader (and who could possibly be left indifferent?) a whole new way (“more associative than deductive”) of re-reading the book. And believe me, you will want to. Writing with passion for a nonmathematical audience, Ellenberg is a popular educator at the top of his game.

Just you wait

An essay on the machineries of science-fiction film, originally written for the BFI

Science fiction is about escape, about transcendence, about how, with the judicious application of technology, we might escape the bounds of time, space and the body.
Science fiction is not at all naive, and almost all of it is about why the dream fails: why the machine goes wrong, or works towards an unforeseen (sometimes catastrophic) end. More often than not science fiction enters clad in the motley of costume drama – so polished, so chromed, so complete. But there’s always a twist, a tear, a weak seam.

Science fiction takes what in other movies would be the set dressing, finery from the prop shop, and turns it into something vital: a god, a golem, a puzzle, a prison. In science fiction, it matters where you are, and how you dress, what you walk on and even what you breathe. All this stuff is contingent, you see. It slips about. It bites.

Sometimes, in this game of “It’s behind you!” less is more. Futuristic secret agent Lemmy Caution explores the streets of the distant space city Alphaville (1965) and the strangeness is all in Jean-Luc Godard’s cut, his dialogue, and the sharpest of sharp scripts. Alphaville, you see (only you don’t; you never do) is nothing more than a rhetorical veil cast over contemporary Paris.

More usually, you’ll grab whatever’s to hand: tinsel and Pan Stick and old gorilla costumes. Two years old by 1965, at least by Earth’s reckoning, William Hartnell’s Doctor was tearing up the set, and would, in other bodies and other voices, go on tearing up, tearing down and tearing through his fans’ expectations for the next 24 years, production values be damned. Bigger than its machinery, bigger even than its protagonist, Doctor Who (1963) was, in that first, long outing, never in any sense realistic, and that was its strength. You never knew where you’d end up next: a comedy, a horror flick, a Western-style showdown. The Doctor’s sonic screwdriver was the point: it said, We’re making this up as we go along.

So how did it all get going? Much as every other kind of film drama got going: with a woman in a tight dress. It is 1924: in a constructivist get-up that could spring from no other era, Aelita, Queen of Mars (actress and film director Yuliya Solntseva) peers into a truly otherworldly crystalline telescope and spies Earth, revolution, and Engineer Los. And Los, on being observed, begins to dream of her.

You'd think, from where we are now, deluged in testosterone from franchises like Transformers and Terminators, that such romantic comedy beginnings were an accident of science fiction's history: a charming one-off. They're not. They're systemic. Thea von Harbou wrote novels about to-die-for women and her husband Fritz Lang placed them at the helm of science fiction movies like Metropolis (1927) and Frau im Mond (1929). The following year saw New York given a 1980s makeover in David Butler's musical comedy Just Imagine. "In 1980 – people have serial numbers, not names," explained Photoplay; "marriages are all arranged by the courts… Prohibition is still an issue… Men's clothes have but one pocket. That's on the hip… but there's still love!" (Griffith, 1972) Just Imagine boasted the most intricate setting ever created for a movie: 205 engineers and craftsmen took five months over an Oscar-nominated build costing $168,000. You still think this film is marginal? Just Imagine's weird guns and weirder spaceships ended up reused in the serial Flash Gordon (1936).

How did we get from musical comedy to Keanu Reeves's millennial Neo shopping in a virtual firearms mall? Well, by rocket, obviously. Science fiction got going just as our fascination with future machinery overtook our fascination with future fashion. Lang wanted a real rocket launch for the premiere of Frau im Mond and roped in no less a physicist than Hermann Oberth to build it for him. When his 1.8-metre-tall liquid-propellant rocket came to nought, Oberth set about building one eleven metres tall, powered by liquid oxygen. They were going to launch it from the roof of the cinema. Luckily they ran out of money.

What hostile critics say is true: for a while, science fiction did become more about the machines than about the people. This was a necessary excursion, and an entertaining one: to explore the technocratic future ushered in by the New York World's Fair of 1939–1940 and realised, one countdown after another, in the world war and cold war to come. (Science fiction is always, ultimately, about the present.) HG Wells wrote the script for Things to Come (1936). Destination Moon (1950) picked the brains of sf writer Robert Heinlein, who'd spent part of the war designing high-altitude pressure suits, to create a preternaturally accurate forecast of the first manned mission to the moon. George Pal's Conquest of Space, five years later, based its technology on writings and designs in Collier's Magazine by former Nazi rocket designer Wernher von Braun. In the same year, episode 20 of the first season of Walt Disney's Wonderful World of Colour was titled Man in Space and featured narration from von Braun and his close (anti-Nazi) friend and colleague Willy Ley.

Another voice from that show, TV announcer Dick Tufeld, cropped up a few years later as voice of the robot in the hit 1965 series Lost in Space, by which time science fiction could afford to calm down, take in the scenery, and even crack a smile or two. The technocratic ideal might seem sterile now, but its promise was compelling: that we’d all live lives of ease and happiness in space, the Moon or Mars, watched over by loving machines: the Robinson family’s stalwart Robot B–9, perhaps. Once clear of the frontier, there would be few enough places for danger to lurk, though if push came to shove, the Tracy family’s spectacular Thunderbirds (1965) were sure to come and save the day. Star Trek’s pleasant suburban utopias, defended in extremis by phasers that stun more than kill, are made, for all their scale and spread, no more than village neighbourhoods thanks to the magic of personal teleportation, and all are webbed into one gentle polis by tricorders so unbelievably handy and capable, it took our best minds half a century to build them for real.

Once the danger's over though, and the sirens are silenced – once heaven on earth (and elsewhere) is truly established – then we hit a quite sizeable snag. Gene Roddenberry was right to have pitched Star Trek to Desilu Studios as "Wagon Train to the stars", for as Dennis Sisterson's charming silent parody Steam Trek: the Moving Picture (1994) demonstrates, the moment you reach California, the technology that got you there loses its specialness. The day your show's props become merely props is the day you're not making science fiction any more. Forget the teleport, that rappelling rope will do. Never mind the scanner: just point.
Realism can only carry you so far. Pavel Klushantsev’s grandiloquent model-making and innovative special effects – effects that Kubrick had to discover for himself over a decade later for 2001: A Space Odyssey (1968) – put children on The Moon (1965) and ballet dancers on satellite TVs (I mean TV sets on board satellites) in Road to the Stars (1957). Such humane and intelligent gestures can only accelerate the exhaustion of “realistic” SF. You feel that exhaustion in 2001: A Space Odyssey. Indeed, the boredom and incipient madness that haunt Keir Dullea and poor, boxed-in HAL on board Discovery One are the film’s chief point: that we cannot live by reason alone. We need something more.

The trouble with Utopias is they stay still, and humanity is nothing if not restless. Two decades earlier, the formal, urban costume stylings of Gattaca (1997) and The Matrix (1999) would have appeared aspirational. In context, they’re a sign of our heroes’ imprisonment in conformist plenty.

What is this "more" we're after, then, if reason's not enough? At very least a light show. Ideally, redemption. Miracles. Grace. Most big-budget movies cast their alien technology as magic. Forbidden Planet (1956) owes its plot to The Tempest, spellbinding audiences with outscale animations and meticulous, hand-painted fiends from the id. The altogether more friendly water probe in James Cameron's The Abyss (1989) took hardly less work: eight months' team effort for 75 seconds of screen time.

Arthur C Clarke, co-writer on 2001, once said: "Any sufficiently advanced technology is indistinguishable from magic." He was half right. What's missing from his formulation is this: sufficiently advanced technology can also resemble nature – the ordinary weave and heft of life. Andrei Tarkovsky's Solaris (1972) and Stalker (1979) both conjure up alien presences out of forests and bare plastered rooms. Imagine how advanced their technology must be to look so ordinary!

In Alien (1979) Salvador Dalí's friend H R Giger captured this process, this vanishing into the real, half-done. Where that cadaverous Space Jockey leaves off and its ship begins is anyone's guess. Shane Carruth's Upstream Color (2013) adds the dimension of time to this disturbing mix, putting hapless strangers in the way of an alien lifeform that's having to bolt together its own lifecycle day by day in greenhouses and shack laboratories.

Prometheus (2012), though late to the party, serves as an unlovely emblem of this kind of story. Its pot of black goo is pure Harry Potter: magic in a jar. Once cast upon the waters, though, it's life itself, in all its guile and terror.

Where we have trouble spotting what's alive and what's not – well, that's the most fertile territory of all. Welcome to Uncanny Valley. Population: virtually everyone in contemporary science fiction cinema. Westworld (1973) and The Stepford Wives (1975) broke the first sod, and their uncanny children have never dropped far from the tree. In the opening credits of a retrodden Battlestar Galactica (2004), Number Six sways into shot, leans over a smitten human, and utters perhaps the most devastating line in all science fiction drama: "Are you alive?" Whatever else Number Six is (actress Tricia Helfer, busting her gut to create the most devastating female robot since Brigitte Helm in Metropolis), alive she most certainly is not.
The filmmaker David Cronenberg is a regular visitor to the Valley. For twenty years, from The Brood (1979) to eXistenZ (1999), he showed us how attempts to regulate the body like a machine, while personalising technology to the point where it is wearable, can only end in elegiac and deeply melancholy body horror. Cronenberg's visceral set dressings are one of a kind, but his wider, philosophical point crops up everywhere – even in pre-watershed confections like The Six Million Dollar Man (1974–1978) and The Bionic Woman (1976–1978), whose malfunctioning (or hyperfunctioning) bionics repeatedly confronted Steve and Jaime with the need to remember what it is to be human.

Why stay human at all, if technology promises More? In René Laloux’s Fantastic Planet (1973) the gigantic Draags lead abstract and esoteric lives, astrally projecting their consciousnesses onto distant planets to pursue strange nuptials with visiting aliens. In Pi (1998) and Requiem for a Dream (2000), Darren Aronofsky charts the epic comedown of characters who, through the somewhat injudicious application of technology, have glimpsed their own posthuman possibilities.

But this sort of technologically enabled yearning doesn’t have to end badly. There’s bawdy to be had in the miscegenation of the human and the mechanical, as when in Sleeper (1973), Miles Monroe (Woody Allen) wanders haplessly into an orgasmatron, and a 1968-vintage Barbarella (Jane Fonda) causes the evil Dr Durand-Durand’s “Excessive Machine” to explode.
For all the risks, it may be that there’s an accommodation to be made one day between the humans and the machinery. Sam Bell’s mechanical companion in Moon (2009), voiced by Kevin Spacey, may sound like 2001’s malignant HAL, but it proves more than kind in the end. In Spike Jonze’s Her (2013), Theodore’s love for his phone’s new operating system acquires a surprising depth and sincerity – not least since everyone else in the movie seems permanently latched to their smartphone screen.

"… But there's still love!" cried Photoplay, more than eighty years ago, and Photoplay is always right. It may be that science fiction cinema will rediscover its romantic roots. (Myself, I hope so.) But it may just as easily take some other direction completely. Or disappear as a genre altogether, rather as Tarkovsky's alien technology has melted into the spoiled landscapes of Stalker. The writer and aviator Antoine de Saint-Exupéry, drunk on his airborne adventures, hit the nail on the head: "The machine does not isolate man from the great problems of nature but plunges him more deeply into them."

You think everything is science fiction now? Just you wait.

Heading north

Reading Forecast by Joe Shute for the Telegraph, 28 June 2021

As a child, journalist Joe Shute came upon four Ladybird nature books from the early 1960s called What to Look For. They described “a world in perfect balance: weather, wildlife and people all living harmoniously as the seasons progress.”

He writes that "the crisply defined seasons of my Ladybird series, neatly quartered like an apple, are these days a mush."

Forecast is a book about phenology: the study of lifecycles, and how they are affected by season, location and other factors. Unlike behemothic “climate science”, phenology doesn’t issue big data sets or barnstorming visualisations. Its subject cannot be so easily metricised. How life responds to changes in the seasons, and changes in those changes, and changes in the rates of those changes, is a multidimensional study whose richness would be entirely lost if abstracted. Instead, phenology depends on countless parochial diaries describing changes on small patches of land.

Shute, who for more than a decade has used his own diary to fuel the “Weather Watch” column in the Daily Telegraph, can look back and see “where the weather is doing strange things and nature veering spectacularly off course.” Watching his garden coming prematurely to life in late winter, Shute is left “with a slightly sickly sensation… I started to sense not a seasonal cycle, but a spiral.” (130)

Take Shute’s diary together with countless others and tabulate the findings, and you will find that all life has started shifting northwards — insects at a rate of five metres a day, some dragonflies at between 17 and 28 metres a day.

How to write about this great migration? Immediately following several affecting and quite horrifying eye-witness scenes from the global refugee crisis, Shute writes: “The same climate crisis that is rendering swathes of the earth increasingly inhospitable and driving so many young people to their deaths, is causing a similar decline in migratory bird populations.”

I’m being unkind to make a point (in context the passage isn’t nearly so wince-making), but Shute’s not the first to discover it’s impossible to speak across all scales of the climate crisis at once.

Amitav Ghosh’s 2016 The Great Derangement is canonical here. Ghosh explained in painful detail why the traditional novel can’t handle global warming. Here, Shute seems to be proving the same point for non-fiction — or at least, for non-fiction of the meditative sort.

Why doesn’t Shute reach for abstractions? Why doesn’t he reach for climate science, and for the latest IPCC report? Why doesn’t he bloviate?

No, Shute's made of sterner stuff: he would rather go down with his coracle, stitching together a planet on fire (11 wildfires raging in the Arctic Circle in July 2018), human catastrophe, bird armageddon, his and his partner's fertility problems, and the snore of a sleeping dormouse, across just 250 pages.

And the result? Forecast is a triumph of the most unnerving sort. By the end it’s clearly not Shute’s book that’s coming unstuck: it’s us. Shute begins his book asking “what happens to centuries of folklore, identity and memory when the very thing they subsist on is changing, perhaps for good”, and the answer he arrives at is horrific: folklore, identity and memory just vanish. There is no reverse gear to this thing.

I was delighted (if that is quite the word) to see Shute nailing the creeping unease I’ve felt every morning since 2014. That was the year the Met Office decided to give storms code-names. The reduction of our once rich, allusive weather vocabulary to “weather bombs” and “thunder snow”, as though weather events were best captured in “the sort of martial language usually preserved for the defence of the realm” is Shute’s most telling measure of how much, in this emergency, we have lost of ourselves.

Tally-ho!

Reading Sentient by Jackie Higgins for the Times, 19 June 2021

In May 1971 a young man from Portsmouth, Ian Waterman, lost all sense of his body. He wasn’t just numb. A person has a sense of the position of their body in space. In Waterman, that sense fell away, mysteriously and permanently.

Waterman, now in his seventies, has learned to operate his body rather as the rest of us operate a car. He has executive control over his movements, but no very intimate sense of what his flesh is up to.

What must this be like?

In a late chapter of her epic account of how the senses make sense, and exhibiting the kind of left-field thinking that makes for great TV documentaries, writer-director-producer Jackie Higgins goes looking for answers among the octopuses.

The octopus’s brain, you see, has no fine control over its arms. They pretty much do their own thing. They do, though, respond to the occasional high-level executive order. “Tally-ho!” cries the brain, and the arms gallop off, the brain in no more (or less) control of its transport system than a girl on a pony at a gymkhana.

Is being Ian Waterman anything like being an octopus? Attempts to imagine our way into other animals’ experiences — or other people’s experience, for that matter — have for a long time fallen under the shadow of an essay written in 1974 by American philosopher Thomas Nagel.

"What Is It Like to Be a Bat?" wasn't about bats so much as about consciousness (the continuity of it). I can (with enough tequila inside me) imagine what it would be like for me to be a bat. But that's not the same as knowing what it's like for a bat to be a bat.

Nagel’s lesson in gloomy solipsism is all very well in philosophy. Applied to natural history, though — where even a vague notion of what a bat feels like might help a naturalist towards a moment of insight — it merely sticks the perfect in the way of the good. Every sparky natural history writer cocks a snook at poor Nagel whenever the opportunity arises.

Advances in media technology over the last twenty years (including, for birds, tiny monitor-stuffed backpacks) have deluged us in fine-grained information about how animals behave. We now have a much better idea of what (and how) they feel.

Now, you can take this sort of thing only so far. The mantis shrimp (not a shrimp; a scampi) has up to sixteen kinds of narrow-band photoreceptor, each tuned to a different wavelength of light! Humans only have three. Does this mean that the mantis shrimp enjoys better colour vision than we do?

Nope. The mantis shrimp is blind to colour, in the human sense of the word, perceiving only wavelengths. The human brain meanwhile, by processing the relative intensities of those three wavelengths of colour vision, distinguishes between millions of colours. (Some women have four colour receptors, which is why you should never argue with a woman about which curtains match the sofa.)

What about the star-nosed mole, whose octopus-like head is a mass of feelers? (Relax: it’s otherwise quite cute, and only about 2cm long.) Its weird nose is sensitive: it gathers the same amount of information about what it touches, as a regular rodent’s eye gathers about what it sees. This makes the star-nosed mole the fastest hunter we know of, identifying and capturing prey (worms) in literally less than an eyeblink.

What can such a creature tell us about our own senses? A fair bit, actually. That nose is so sensitive, the mole's visual cortex is used to process the information. It literally sees through its nose.

But that turns out not to be so very strange: Braille readers, for example, really do read through their fingertips, harnessing their visual cortex to the task. One veteran researcher, Paul Bach-y-Rita, has been building prosthetic eyes since the 1970s, using glorified pin-art machines to (literally) impress the visual world upon his volunteers’ backs, chests, even their tongues.

From touch to sound: in the course of learning about bats, I learned here that blind people have been using echolocation for years, especially when it rains (more auditory information, you see); researchers are only now getting a measure of their abilities.

How many senses are there that we might not have noticed? Over thirty, it seems, all served by dedicated receptors, and many of them eluding our consciousness entirely. (We may even share the magnetic sense enjoyed by migrating birds! But don't get too excited. Most mammals seem to have this sense. Your pet dog almost always pees with its head facing magnetic north.)

This embarrassment of riches leaves Higgins having to decide what to include and what to leave out. There’s a cracking chapter here on how animals sense time, and some exciting details about a sense of touch common to social mammals: one that responds specifically to cuddling.

On the other hand there's very little about our extremely rare ability to smell what we eat while we eat it. This retronasal olfaction gives us a palate unrivalled in the animal kingdom, capable of discriminating between nearly two trillion savours: an ability which has all kinds of implications for memory and behaviour.

Is this a problem? Not at all. For all that it’s stuffed with entertaining oddities, Sentient is not a book about oddities, and Higgins’s argument, though colourful, is rigorous and focused. Over 400 exhilarating pages, she leads us to adopt an entirely unfamiliar way of thinking about the senses.

Because their mechanics are fascinating and to some degree reproducible (the human eye is, mechanically speaking, very much like a camera), we grow up thinking of the senses as mechanical outputs.

Looking at our senses this way, however, is rather like studying fungi but only looking at the pretty fruiting bodies. The real magic of fungi is their networks. And the real magic of our senses is the more than 100 billion nerve cells in each human nervous system — greater, Higgins says, than the number of stars in the Milky Way.

And that vast complexity — adapting to reflect and organise the world, not just over evolutionary time but also over the course of an individual life — gives rise to all kinds of surprises. In some humans, the ability to see with sound. In vampire bats (who can sense the location of individual veins to sink their little fangs into), the ability to detect heat using receptors that in most other mammals are used to detect acute pain.

In De Anima, the ancient philosopher Aristotle really let the side down in listing just five senses. No one expects him to have spotted exotica like cuddlesomeness and where-to-face-when-you-pee. But what about pain? What about balance? What about proprioception?

Aristotle’s restrictive and mechanistic list left him, and generations after him, with little purchase on the subject. Insights have been hard to come by.

Aristotle himself took one look at the octopus and declared it stupid.

Let’s see him driving a car with eight legs.

Variation and brilliance

Reading Barnabas Calder’s Architecture: from prehistory to climate emergency for New Scientist, 9 June 2021

For most of us, buildings are functional. We live, work, and store things in them. They are as much a part of us as the nest is a part of a community of termites.

And were this all there was to say about buildings, architectural historian Barnabas Calder might have found his book easier to write. Calder wants to ask "how humanity's access to energy has shaped the world's buildings through history." And had his account remained so straightforward, we might have ended up with an eye-opening mathematical description of the increase in the energy available for work — derived first from wood, charcoal and straw, then from coal, then from oil — and how it first transformed, and (because of global warming) now threatens, our civilisation.

And sure enough the book is full of startling statistics. (Fun fact: the charcoal equivalent of today’s cement industry would have to cover an area larger than Australia in coppiced timber.)

But of course, buildings aren't simply functional. They're aspirational acts of creative expression. However debased it might seem, the most ordinary structure is the work of a species of artist, and to get built at all it must be bankrolled by people who are (at least relatively) wealthy and powerful. This was as true of the buildings of Uruk (our first known city, founded in what is now Iraq around 3200 BCE) as it is of the buildings of Shenzhen (in 1980 a Chinese fishing hamlet, today a city of nearly 13 million people).

While the economics of the built environment are crucially important, then, they don't really make sense without the sociology, and even the psychology, especially when it comes to "the mutual stirring of hysteria between architect and client" that gave us St Peter's Basilica in the 16th century and Chengdu's New Century Global Center (currently the world's biggest building) in the 21st.

Calder knows this: “What different societies chose to do with [their] energy surplus has produced endless variation and brilliance,” he says. So if sometimes his account seems to wander, this is why: architecture itself is not a wholly economic activity, and certainly not a narrowly rational one.

At the end of an insightful and often impassioned journey through the history of buildings, Calder does his level best to explain how architecture can address the climate emergency. But his advice and encouragement vanish under the enormity of the crisis. The construction and running of buildings account for 39 per cent of all human greenhouse gas emissions. Concrete is the most used material on Earth after water. And while there is plenty of "sustainability" talk in the construction sector, Calder finds precious little sign of real change. We still demolish too often, and build too often, using unsustainable cement, glass and steel.

It may be that solutions are out there, but are simply invisible. The history of architecture is curiously incomplete, as Calder himself acknowledges, pointing out that “entire traditions of impressive tent-like architecture are known mainly from pictures rather than physical remnants.”

Learning to tread more lightly on the earth means exactly that: a wholly sustainable architecture wouldn’t necessarily show up in the archaeological record. The remains of pre-fossil fuel civilisations can, then, only offer us a partial guide to what our future architecture should look like.

Perhaps we should look to existing temporary structures — to refugee camps, perhaps. The idea may be distressing, but fashions change.

Calder’s long love-poem to buildings left me, rather paradoxically, thinking about the Mongols of the 13th century, for whom a walled city was a symbol of bondage and barbarism.

They would have no more settled in a fixed house than they would have submitted to slavery. And their empire, which covered 23 million square kilometres, demolished more architecture than it raised.

Nothing happens without a reason

Reading Journey to the Edge of Reason: The Life of Kurt Gödel by Stephen Budiansky for the Spectator, 29 May 2021

The 20th-century Austrian mathematician Kurt Gödel did his level best to live in the world as his philosophical hero Gottfried Wilhelm Leibniz imagined it: a place of pre-established harmony, whose patterns are accessible to reason.

It’s an optimistic world, and a theological one: a universe presided over by a God who does not play dice. It’s most decidedly not a 20th-century world, but “in any case”, as Gödel himself once commented, “there is no reason to trust blindly in the spirit of the time.”

His fellow mathematician Paul Erdős was appalled: "You became a mathematician so that people should study you," he complained, "not that you should study Leibniz." But Gödel always did prefer study to self-expression, and this is chiefly why we know so little about him, and why the spectacular deterioration of his final years — a phantasmagoric tale of imagined conspiracies, strange vapours and shadowy intruders, ending in his self-starvation in 1978 — has come to stand for the whole of his life.

"Nothing, Gödel believed, happened without a reason," says Stephen Budiansky. "It was at once an affirmation of ultrarationalism, and a recipe for utter paranoia."

You need hindsight to see the paranoia waiting to pounce. But the ultrarationalism — that was always tripping him up. There was something worryingly non-stick about him. He didn’t so much resist the spirit of the time as blunder about totally oblivious of it. He barely noticed the Anschluss, barely escaped Vienna as the Nazis assumed control, and, once ensconced at the Institute for Advanced Study at Princeton, barely credited that tragedy was even possible, or that, say, a friend might die in a concentration camp (it took three letters for his mother to convince him).

Many believed that he'd blundered, in a way typical to him, into marriage with his life-long partner, a foot-care specialist and divorcée called Adele Nimbursky. Perhaps he did. But Budiansky does a spirited job of defending this "uneducated but determined" woman against the sneers of snobs. If anyone kept Gödel rooted to the facts of living, it was Adele. She once stuck a concrete flamingo, painted pink and black, in a flower bed right outside his study window. All evidence suggests he adored it.

Idealistic and dysfunctional, Gödel became, in mathematician Jordan Ellenberg's phrase, "the romantic's favourite mathematician", a reputation cemented by the fact that we knew hardly anything about him. Key personal correspondence was destroyed at his death, while his journals and notebooks — written in Gabelsberger script, a German shorthand that had fallen into disuse by the mid-1920s — resisted all comers until Cheryl Dawson, wife of the man tasked with sorting through Gödel's mountain of posthumous papers, learned how to transcribe it all.

Biographer Stephen Budiansky is the first to try to give this pile of new information a human shape, and my guess is it hasn’t been easy.

Budiansky handles the mathematics very well, capturing the air of scientific optimism that held sway over intellectual Vienna and induced Germany's leading mathematician David Hilbert to declare that "in mathematics there is *nothing* unknowable!"

Solving Hilbert’s four “Problems of Laying Foundations for Mathematics” of 1928 was supposed to secure the foundations of mathematics for good, and Gödel, a 22-year-old former physics student, solved one of them. Unfortunately for Hilbert and his disciples, however, Gödel also proved the insolubility of the other three. So much for the idea that all mathematics could be derived from the propositions of logic: Gödel demonstrated that logic itself was flawed.

This discovery didn't worry Gödel nearly so much as it did his contemporaries. For Gödel, as Budiansky explains, "Mathematical objects and a priori truth was as real to him as anything the senses could directly perceive." If our reason failed, well, that was no reason to throw away the world: we would always be able to recognise some truths through intuition that could never be established through computation. That, for Gödel, was the whole point of being human.

It's one thing to be a Platonist in a world dead set against Platonism, or an idealist in a world that's gone all-in on materialism. It's quite another to see acts of sabotage in the errors of TV listings magazines, or political conspiracy in the suicide of King Ludwig II of Bavaria. The Elysian calm and concentration afforded Gödel after the second world war at the Institute for Advanced Study probably did him more harm than good. "Gödel is too alone," his friend Oskar Morgenstern fretted: "he should be given teaching duties; at least an hour a week."

In the end, though, neither his friendships nor his marriage nor that ridiculous flamingo could tether to the Earth a man who had always preferred to write for his desk drawer, and Budiansky, for all his tremendous efforts and exhaustive interrogations of Gödel's times and places, acquaintances and offices, can only leave us, at the end, with an immeasurably enriched version of Gödel the wise child. It's an undeniably distracting and reductive picture. But — and this is the trouble — it's not wrong.

Snowflake science

Watching Noah Hutton’s documentary In Silico for New Scientist, 19 May 2021

Shortly after he earned a neuroscience degree, young filmmaker Noah Hutton fell into the orbit of Henry Markram, an Israeli neuroscientist based at the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland.

Markram models brains, axon by axon, dendrite by dendrite, in all their biological and chemical complexity. His working assumption is that the brain is an organ, and so a good enough computer model of the brain ought to reveal its workings and pathologies, just as “in silico” models of the kidneys, spleen, liver and heart have enriched our understanding of those organs.

Markram’s son Kai has autism, so Markram has skin in this game. Much as we might want to improve the condition of people like Kai, no one is going to dig about in a living human brain to see if there are handy switches we can throw. Markram hopes a computer model will offer an ethically acceptable route to understanding how brains go wrong.

So far, so reasonable. Only, in 2005, Henry Markram said he would build a working computer model of the human brain within 10 years.

Hutton has interviewed Markram, his colleagues and his critics, every year for well over a decade, as the project expanded and the deadline shifted. Markram’s vision transfixed purseholders across the European Union: in 2013 his Blue Brain Project won a billion Euros of public funding to create the Human Brain Project in Geneva.

And though his tenure did not last long, Markram is hardly the first founder to be wrested from the controls of his own institute, and he won’t be the last. There have been notable departures, but his Blue Brain Project endures, still working, still modelling: its in silico model of the mouse neocortex is astounding to look at.

Perhaps that is the problem. The Human Brain Project has become, says Hutton, a special-effects house, a shrine to touch-screens, curve-screens, headsets, but lacking any meaning to anything and anyone “outside this glass and steel building in Geneva”.

We’ve heard criticisms like this before. What about the way the Large Hadron Collider at CERN sucks funding from the rest of physics? You don’t have to scratch too deeply in academia to find a disgruntled junior researcher who’ll blame CERN for their failed grant application.

CERN, however, gets results. The Human Brain Project? Not so much.

The problem is philosophical. It is certainly within our power to model some organs. The brain, however, is not an organ in the usual sense. It is, by any engineering measure, furiously inefficient. Take a look: a spike in the dendrites releases this neurotransmitter, except when it releases that neurotransmitter, except when it does nothing at all. Signals follow this route, except when they follow that route, except when they vanish. Brains may look alike, and there's surely some commonality in their working. At the level of the axon, however, every brain behaves like a beautiful and unique snowflake.

The Blue Brain Project's models generate noise, just like regular brains. Someone talks vaguely about "emergent properties" — an intellectual Get Out of Jail Free card if ever there was one. But since no-one knows what this noise means in a real brain, there's no earthly way to tell if the Project's model is making the right kind of noise.

The Salk Institute's Terrence Sejnowski reckons the whole caper's a bad joke; if successful, Markram will only generate a simulation "every bit as mysterious as the brain itself".

Hutton accompanies us down the yawning gulf between what Markram may reasonably achieve, and the fantasies he seems quite happy to stoke in order to maintain his funding. It’s a film made on a budget of nothing, over years, and it’s not pretty. But Hutton (whose very smart sf satire Lapsis came out in the US last month) makes up for all that with the sharpest of scripts. In Silico is a labour of love, rather more productive, I fear, than Markram’s own.