How many holes has a straw?

Reading Jordan Ellenberg’s Shape for the Telegraph, 7 July 2021

“One can’t help feeling that, in those opening years of the 1900s, something was in the air,” writes mathematician Jordan Ellenberg.

It’s page 90, and he’s launching into the second act of his dramatic, complex history of geometry (think “History of the World in 100 Shapes”, some of them very screwy indeed).
For page after reassuring page, we’ve been introduced to symmetry, to topology, and to the kinds of notation that make sense of knotty-sounding questions like “how many holes has a straw?”

Now, though, the gloves are off, as Ellenberg records the fin de siècle’s “painful recognition of some unavoidable bubbling randomness at the very bottom of things.”
Normally when sentiments of this sort are trotted out, they’re there to introduce readers to the wild world of quantum mechanics (and, incidentally, we can expect a lot of that sort of thing in the next few years: there’s a centenary looming). Quantum’s got such a grip on our imagination, we tend to forget that it was the johnny-come-lately icing on an already fairly indigestible cake.

A good twenty years before physical reality was shown to be unreliable at small scales, mathematicians were pretzeling our very ideas of space. They had no choice: at the Louisiana Purchase Exposition in 1904, Henri Poincaré, by then the world’s most famous geometer, described how he was trying to keep reality stuck together in light of Maxwell’s famous equations of electromagnetism (Maxwell’s work absolutely refused to play nicely with space). In that talk, he came startlingly close to gazumping Einstein to a theory of relativity.
Also at the same exposition was Sir Ronald Ross, who had discovered that malaria was carried by the bite of the anopheles mosquito. He baffled and disappointed many with his presentation of an entirely mathematical model of disease transmission — the one we use today to predict, well, just about everything, from pandemics to political elections.
It’s hard to imagine two mathematical talks less alike than those of Poincaré and Ross. And yet they had something vital in common: both shook their audiences out of mere three-dimensional thinking.

And thank goodness for it: Ellenberg takes time to explain just how restrictive Euclidean thinking is. For Euclid, the first geometer, living in the 4th century BC, everything was geometry. When he multiplied two numbers, he thought of the result as the area of a rectangle. When he multiplied three numbers, he called the result a “solid”. Euclid’s geometric imagination gave us number theory; but tying mathematical values to physical experience locked him out of more or less everything else. Multiplying four numbers? Now how are you supposed to imagine that in three-dimensional space?

For the longest time, geometry seemed exhausted: a mental gym; sometimes a branch of rhetoric. (There’s a reason Lincoln’s Gettysburg Address characterises the United States as “dedicated to the proposition that all men are created equal”. A proposition is a Euclidean term, meaning a fact that follows logically from self-evident axioms.)

The more dimensions you add, however, the more capable and surprising geometry becomes. And this, thanks to runaway advances in our calculating ability, is why geometry has become our go-to manner of explanation for, well, everything. For games, for example: and extrapolating from games, for the sorts of algorithmic processes we saddle with that profoundly unhelpful label “artificial intelligence” (“artificial alternatives to intelligence” would be better).

All game-playing machines (from the chess player on my phone to DeepMind’s AlphaGo) share the same ghost, the “Markov chain”, formulated by Andrei Markov to map the probabilistic landscape generated by sequences of likely choices. An atheist before the Russian revolution, and treated with predictable shoddiness after it, Markov used his eponymous chain, rhetorically, to strangle religiose notions of free will in their cradle.
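(For the curious, the machinery is almost embarrassingly simple. A Markov chain is just a table of probabilities in which the next state depends only on the current one. The little Python sketch below is mine, not Ellenberg’s or Markov’s; its states and numbers are invented purely for illustration, standing in for the sequences of likely choices a game-playing machine weighs.)

import random

# A toy Markov chain: the next move depends only on the current state.
# States and transition probabilities are invented for illustration.
transitions = {
    "opening": {"attack": 0.5, "defend": 0.5},
    "attack":  {"attack": 0.3, "defend": 0.6, "resign": 0.1},
    "defend":  {"attack": 0.7, "defend": 0.2, "resign": 0.1},
    "resign":  {"resign": 1.0},  # absorbing state: the game is over
}

def walk(chain, start, steps):
    """Wander the chain from `start`, sampling each next state
    from the probabilities attached to the current state alone."""
    state, path = start, [start]
    for _ in range(steps):
        options, weights = zip(*chain[state].items())
        state = random.choices(options, weights=weights)[0]
        path.append(state)
    return path

print(walk(transitions, "opening", 6))

Run it a few times and you get a different path through the same probabilistic landscape on each run, which is the whole point.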

From isosceles triangles to free will is quite a leap, and by now you will surely have gathered that Shape is anything but a straight story. That’s the thing about mathematics: it does not advance; it proliferates. It’s the intellectual equivalent of Stephen Leacock’s Lord Ronald, who “flung himself upon his horse and rode madly off in all directions”.

Containing multitudes as he must, Ellenberg’s eyes grow wider and wider, his prose more and more energetic, as he moves from what geometry means to what geometry does in the modern world.

I mean no complaint (quite the contrary, actually) when I say that, by about two-thirds of the way in, Ellenberg comes to resemble his friend John Horton Conway. Of this game-playing, toy-building celebrity of the maths world, who died from COVID last year, Ellenberg writes, “He wasn’t being wilfully difficult; it was just the way his mind worked, more associative than deductive. You asked him something and he told you what your question reminded him of.”
This is why Ellenberg took the trouble to draw out a mind map at the start of his book. This and the index offer the interested reader (and who could possibly be left indifferent?) a whole new way (“more associative than deductive”) of re-reading the book. And believe me, you will want to. Writing with passion for a nonmathematical audience, Ellenberg is a popular educator at the top of his game.

Just you wait

An essay on the machineries of science-fiction film, originally written for the BFI

Science fiction is about escape, about transcendence, about how, with the judicious application of technology, we might escape the bounds of time, space and the body.
Science fiction is not at all naive, and almost all of it is about why the dream fails: why the machine goes wrong, or works towards an unforeseen (sometimes catastrophic) end. More often than not science fiction enters clad in the motley of costume drama – so polished, so chromed, so complete. But there’s always a twist, a tear, a weak seam.

Science fiction takes what in other movies would be the set dressing, finery from the prop shop, and turns it into something vital: a god, a golem, a puzzle, a prison. In science fiction, it matters where you are, and how you dress, what you walk on and even what you breathe. All this stuff is contingent, you see. It slips about. It bites.

Sometimes, in this game of “It’s behind you!”, less is more. Futuristic secret agent Lemmy Caution explores the streets of the distant space city Alphaville (1965) and the strangeness is all in Jean-Luc Godard’s cut, his dialogue, and the sharpest of sharp scripts. Alphaville, you see (only you don’t; you never do), is nothing more than a rhetorical veil cast over contemporary Paris.

More usually, you’ll grab whatever’s to hand: tinsel and Pan Stick and old gorilla costumes. Two years old by 1965, at least by Earth’s reckoning, William Hartnell’s Doctor was tearing up the set, and would, in other bodies and other voices, go on tearing up, tearing down and tearing through his fans’ expectations for the next 24 years, production values be damned. Bigger than its machinery, bigger even than its protagonist, Doctor Who (1963) was, in that first, long outing, never in any sense realistic, and that was its strength. You never knew where you’d end up next: a comedy, a horror flick, a Western-style showdown. The Doctor’s sonic screwdriver was the point: it said, We’re making this up as we go along.

So how did it all get going? Much as every other kind of film drama got going: with a woman in a tight dress. It is 1924: in a constructivist get-up that could spring from no other era, Aelita, Queen of Mars (actress and film director Yuliya Solntseva) peers into a truly otherworldly crystalline telescope and spies Earth, revolution, and Engineer Los. And Los, on being observed, begins to dream of her.

You’d think, from where we are now, deluged in testosterone from franchises like Transformers and Terminators, that such romantic comedy beginnings were an accident of science fiction’s history: a charming one-off. They’re not. They’re systemic. Thea von Harbou wrote novels about to-die-for women and her husband Fritz Lang placed them at the helm of science fiction movies like Metropolis (1927) and Frau im Mond (1929). The following year saw New York given a 1980s makeover in David Butler’s musical comedy Just Imagine. “In 1980 – people have serial numbers, not names,” explained Photoplay; “marriages are all arranged by the courts… Prohibition is still an issue… Men’s clothes have but one pocket. That’s on the hip… but there’s still love!” (Griffith, 1972) Just Imagine boasted the most intricate setting ever created for a movie: 205 engineers and craftsmen took five months over an Oscar-nominated build costing $168,000. You still think this film is marginal? Just Imagine’s weird guns and weirder spaceships ended up reused in the serial Flash Gordon (1936).

How did we get from musical comedy to Keanu Reeves’s millennial Neo shopping in a virtual firearms mall? Well, by rocket, obviously. Science fiction got going just as our fascination with future machinery overtook our fascination with future fashion. Lang wanted a real rocket launch for the premiere of Frau im Mond and roped in no less a physicist than Hermann Oberth to build it for him. When his 1.8-metre-tall liquid-propellant rocket came to nought, Oberth set about building one eleven metres tall, powered by liquid oxygen. They were going to launch it from the roof of the cinema. Luckily they ran out of money.

What hostile critics say is true: for a while, science fiction did become more about the machines than about the people. This was a necessary excursion, and an entertaining one: to explore the technocratic future ushered in by the New York World’s Fair of 1939–1940 and realised, one countdown after another, in the world war and cold war to come. (Science fiction is always, ultimately, about the present.) HG Wells wrote the script for Things to Come (1936). Destination Moon (1950) picked the brains of sf writer Robert Heinlein, who’d spent part of the war designing high-altitude pressure suits, to create a preternaturally accurate forecast of the first manned mission to the moon. George Pal’s Conquest of Space, five years later, based its technology on writings and designs in Collier’s Magazine by former Nazi rocket designer Wernher von Braun. In the same year, episode 20 of the first season of Walt Disney’s Wonderful World of Colour was titled Man in Space and featured narration from von Braun and his close (anti-Nazi) friend and colleague Willy Ley.

Another voice from that show, TV announcer Dick Tufeld, cropped up a few years later as voice of the robot in the hit 1965 series Lost in Space, by which time science fiction could afford to calm down, take in the scenery, and even crack a smile or two. The technocratic ideal might seem sterile now, but its promise was compelling: that we’d all live lives of ease and happiness in space, the Moon or Mars, watched over by loving machines: the Robinson family’s stalwart Robot B–9, perhaps. Once clear of the frontier, there would be few enough places for danger to lurk, though if push came to shove, the Tracy family’s spectacular Thunderbirds (1965) were sure to come and save the day. Star Trek’s pleasant suburban utopias, defended in extremis by phasers that stun more than kill, are made, for all their scale and spread, no more than village neighbourhoods thanks to the magic of personal teleportation, and all are webbed into one gentle polis by tricorders so unbelievably handy and capable, it took our best minds half a century to build them for real.

Once the danger’s over though, and the sirens are silenced – once heaven on earth (and elsewhere) is truly established – then we hit a quite sizeable snag. Gene Roddenberry was right to have pitched Star Trek to Desilu Studios as “Wagon Train to the stars”, for as Dennis Sisterson’s charming silent parody Steam Trek: the Moving Picture (1994) demonstrates, the moment you reach California, the technology that got you there loses its specialness. The day your show’s props become merely props is the day you’re not making science fiction any more. Forget the teleport, that rappelling rope will do. Never mind the scanner: just point.
Realism can only carry you so far. Pavel Klushantsev’s grandiloquent model-making and innovative special effects – effects that Kubrick had to discover for himself over a decade later for 2001: A Space Odyssey (1968) – put children on The Moon (1965) and ballet dancers on satellite TVs (I mean TV sets on board satellites) in Road to the Stars (1957). Such humane and intelligent gestures can only accelerate the exhaustion of “realistic” SF. You feel that exhaustion in 2001: A Space Odyssey. Indeed, the boredom and incipient madness that haunt Keir Dullea and poor, boxed-in HAL on board Discovery One are the film’s chief point: that we cannot live by reason alone. We need something more.

The trouble with utopias is that they stay still, and humanity is nothing if not restless. Two decades earlier, the formal, urban costume stylings of Gattaca (1997) and The Matrix (1999) would have appeared aspirational. In context, they’re a sign of our heroes’ imprisonment in conformist plenty.

What is this “more” we’re after, then, if reason’s not enough? At very least a light show. Ideally, redemption. Miracles. Grace. Most big-budget movies cast their alien technology as magic. Forbidden Planet (1956) owes its plot to The Tempest, spellbinding audiences with outscale animations and meticulous, hand-painted fiends from the id. The altogether more friendly water probe in James Cameron’s The Abyss (1989) took hardly less work: eight months’ team effort for 75 seconds of screen time.

Arthur C. Clarke, co-writer on 2001, once said: “Any sufficiently advanced technology is indistinguishable from magic.” He was half right. What’s missing from his formulation is this: sufficiently advanced technology can also resemble nature – the ordinary weave and heft of life. Andrei Tarkovsky’s Solaris (1972) and Stalker (1979) both conjure up alien presences out of forests and bare plastered rooms. Imagine how advanced their technology must be to look so ordinary!

In Alien (1979), Salvador Dalí’s friend H. R. Giger captured this process, this vanishing into the real, half-done. Where that cadaverous Space Jockey leaves off and its ship begins is anyone’s guess. Shane Carruth’s Upstream Color (2013) adds the dimension of time to this disturbing mix, putting hapless strangers in the way of an alien lifeform that’s having to bolt together its own lifecycle day by day in greenhouses and shack laboratories.

Prometheus (2012), though late to the party, serves as an unlovely emblem to this kind of story. Its pot of black goo is pure Harry Potter: magic in a jar. Once cast upon the waters, though, it’s life itself, in all its guile and terror.

Where we have trouble spotting what’s alive and what’s not – well, that’s the most fertile territory of all. Welcome to Uncanny Valley. Population: virtually everyone in contemporary science fiction cinema. Westworld (1973) and The Stepford Wives (1975) broke the first sod, and their uncanny children have never dropped far from the tree. In the opening credits of a retrodden Battlestar Galactica (2004), Number Six sways into shot, leans over a smitten human, and utters perhaps the most devastating line in all science fiction drama: “Are you alive?” Whatever else Number Six is (actress Tricia Helfer, busting her gut to create the most devastating female robot since Brigitte Helm in Metropolis), alive she most certainly is not.
The filmmaker David Cronenberg is a regular visitor to the Valley. For twenty years, from The Brood (1979) to eXistenZ (1999), he showed us how attempts to regulate the body like a machine, while personalising technology to the point where it is wearable, can only end in elegiac and deeply melancholy body horror. Cronenberg’s visceral set dressings are one of a kind, but his wider, philosophical point crops up everywhere – even in pre-watershed confections like The Six Million Dollar Man (1974–1978) and The Bionic Woman (1976–1978), whose malfunctioning (or hyperfunctioning) bionics repeatedly confronted Steve and Jaime with the need to remember what it is to be human.

Why stay human at all, if technology promises More? In René Laloux’s Fantastic Planet (1973) the gigantic Draags lead abstract and esoteric lives, astrally projecting their consciousnesses onto distant planets to pursue strange nuptials with visiting aliens. In Pi (1998) and Requiem for a Dream (2000), Darren Aronofsky charts the epic comedown of characters who, through the somewhat injudicious application of technology, have glimpsed their own posthuman possibilities.

But this sort of technologically enabled yearning doesn’t have to end badly. There’s bawdy to be had in the miscegenation of the human and the mechanical, as when in Sleeper (1973), Miles Monroe (Woody Allen) wanders haplessly into an orgasmatron, and a 1968-vintage Barbarella (Jane Fonda) causes the evil Dr Durand-Durand’s “Excessive Machine” to explode.
For all the risks, it may be that there’s an accommodation to be made one day between the humans and the machinery. Sam Bell’s mechanical companion in Moon (2009), voiced by Kevin Spacey, may sound like 2001’s malignant HAL, but it proves more than kind in the end. In Spike Jonze’s Her (2013), Theodore’s love for his phone’s new operating system acquires a surprising depth and sincerity – not least since everyone else in the movie seems permanently latched to their smartphone screen.

“… But there’s still love!” cried Photoplay, more than eighty years ago, and Photoplay is always right. It may be that science fiction cinema will rediscover its romantic roots. (Myself, I hope so.) But it may just as easily take some other direction completely. Or disappear as a genre altogether, rather as Tarkovsky’s alien technology has melted into the spoiled landscapes of Stalker. The writer and aviator Antoine de Saint-Exupéry, drunk on his airborne adventures, hit the nail on the head: “The machine does not isolate man from the great problems of nature but plunges him more deeply into them.”

You think everything is science fiction now? Just you wait.

Heading north

Reading Forecast by Joe Shute for the Telegraph, 28 June 2021

As a child, journalist Joe Shute came upon four Ladybird nature books from the early 1960s called What to Look For. They described “a world in perfect balance: weather, wildlife and people all living harmoniously as the seasons progress.”

Today, he writes, “the crisply defined seasons of my Ladybird series, neatly quartered like an apple, are these days a mush.”

Forecast is a book about phenology: the study of lifecycles, and how they are affected by season, location and other factors. Unlike behemothic “climate science”, phenology doesn’t issue big data sets or barnstorming visualisations. Its subject cannot be so easily metricised. How life responds to changes in the seasons, and changes in those changes, and changes in the rates of those changes, is a multidimensional study whose richness would be entirely lost if abstracted. Instead, phenology depends on countless parochial diaries describing changes on small patches of land.

Shute, who for more than a decade has used his own diary to fuel the “Weather Watch” column in the Daily Telegraph, can look back and see “where the weather is doing strange things and nature veering spectacularly off course.” Watching his garden coming prematurely to life in late winter, Shute is left “with a slightly sickly sensation… I started to sense not a seasonal cycle, but a spiral.” (130)

Take Shute’s diary together with countless others and tabulate the findings, and you will find that all life has started shifting northwards — insects at a rate of five metres a day, some dragonflies at between 17 and 28 metres a day.

How to write about this great migration? Immediately following several affecting and quite horrifying eye-witness scenes from the global refugee crisis, Shute writes: “The same climate crisis that is rendering swathes of the earth increasingly inhospitable and driving so many young people to their deaths, is causing a similar decline in migratory bird populations.”

I’m being unkind to make a point (in context the passage isn’t nearly so wince-making), but Shute’s not the first to discover it’s impossible to speak across all scales of the climate crisis at once.

Amitav Ghosh’s 2016 The Great Derangement is canonical here. Ghosh explained in painful detail why the traditional novel can’t handle global warming. Here, Shute seems to be proving the same point for non-fiction — or at least, for non-fiction of the meditative sort.

Why doesn’t Shute reach for abstractions? Why doesn’t he reach for climate science, and for the latest IPCC report? Why doesn’t he bloviate?

No, Shute’s made of sterner stuff: he would rather go down with his coracle, stitching together a planet on fire (11 wildfires raging in the Arctic Circle in July 2018), human catastrophe, bird armageddon, his and his partner’s fertility problems, and the snore of a sleeping dormouse, across just 250 pages.

And the result? Forecast is a triumph of the most unnerving sort. By the end it’s clearly not Shute’s book that’s coming unstuck: it’s us. Shute begins his book asking “what happens to centuries of folklore, identity and memory when the very thing they subsist on is changing, perhaps for good”, and the answer he arrives at is horrific: folklore, identity and memory just vanish. There is no reverse gear to this thing.

I was delighted (if that is quite the word) to see Shute nailing the creeping unease I’ve felt every morning since 2014. That was the year the Met Office decided to give storms code-names. The reduction of our once rich, allusive weather vocabulary to “weather bombs” and “thunder snow”, as though weather events were best captured in “the sort of martial language usually preserved for the defence of the realm” is Shute’s most telling measure of how much, in this emergency, we have lost of ourselves.

Tally-ho!

Reading Sentient by Jackie Higgins for the Times, 19 June 2021

In May 1971 a young man from Portsmouth, Ian Waterman, lost all sense of his body. He wasn’t just numb. A person has a sense of the position of their body in space. In Waterman, that sense fell away, mysteriously and permanently.

Waterman, now in his seventies, has learned to operate his body rather as the rest of us operate a car. He has executive control over his movements, but no very intimate sense of what his flesh is up to.

What must this be like?

In a late chapter of her epic account of how the senses make sense, and exhibiting the kind of left-field thinking that makes for great TV documentaries, writer-director-producer Jackie Higgins goes looking for answers among the octopuses.

The octopus’s brain, you see, has no fine control over its arms. They pretty much do their own thing. They do, though, respond to the occasional high-level executive order. “Tally-ho!” cries the brain, and the arms gallop off, the brain in no more (or less) control of its transport system than a girl on a pony at a gymkhana.

Is being Ian Waterman anything like being an octopus? Attempts to imagine our way into other animals’ experiences — or other people’s experience, for that matter — have for a long time fallen under the shadow of an essay written in 1974 by American philosopher Thomas Nagel.

“What Is It Like to Be a Bat?” wasn’t about bats so much as about consciousness (continuity of). I can, with enough tequila inside me, imagine what it would be like for me to be a bat. But that’s not the same as knowing what it’s like for a bat to be a bat.

Nagel’s lesson in gloomy solipsism is all very well in philosophy. Applied to natural history, though — where even a vague notion of what a bat feels like might help a naturalist towards a moment of insight — it merely sticks the perfect in the way of the good. Every sparky natural history writer cocks a snook at poor Nagel whenever the opportunity arises.

Advances in media technology over the last twenty years (including, for birds, tiny monitor-stuffed backpacks) have deluged us in fine-grained information about how animals behave. We now have a much better idea of what (and how) they feel.

Now, you can take this sort of thing only so far. The mantis shrimp (not a shrimp; a scampi) has up to sixteen kinds of narrow-band photoreceptor, each tuned to a different wavelength of light! Humans only have three. Does this mean that the mantis shrimp enjoys better colour vision than we do?

Nope. The mantis shrimp is blind to colour, in the human sense of the word, perceiving only wavelengths. The human brain meanwhile, by processing the relative intensities of those three wavelengths of colour vision, distinguishes between millions of colours. (Some women have four colour receptors, which is why you should never argue with a woman about which curtains match the sofa.)

What about the star-nosed mole, whose octopus-like head is a mass of feelers? (Relax: it’s otherwise quite cute, and only about 2cm long.) Its weird nose is sensitive: it gathers the same amount of information about what it touches, as a regular rodent’s eye gathers about what it sees. This makes the star-nosed mole the fastest hunter we know of, identifying and capturing prey (worms) in literally less than an eyeblink.

What can such a creature tell us about our own senses? A fair bit, actually. That nose is so sensitive, the mole’s visual cortex is used to process the information. It literally sees through its nose.

But that turns out not to be so very strange: Braille readers, for example, really do read through their fingertips, harnessing their visual cortex to the task. One veteran researcher, Paul Bach-y-Rita, has been building prosthetic eyes since the 1970s, using glorified pin-art machines to (literally) impress the visual world upon his volunteers’ backs, chests, even their tongues.

From touch to sound: in the course of learning about bats, I learned here that blind people have been using echolocation for years, especially when it rains (more auditory information, you see); researchers are only now getting a measure of their abilities.

How many senses are there that we might not have noticed? Over thirty, it seems, all served by dedicated receptors, and many of them elude our consciousness entirely. (We may even share the magnetic sense enjoyed by migrating birds! But don’t get too excited. Most mammals seem to have this sense. Your pet dog almost always pees with its head facing magnetic north.)

This embarrassment of riches leaves Higgins having to decide what to include and what to leave out. There’s a cracking chapter here on how animals sense time, and some exciting details about a sense of touch common to social mammals: one that responds specifically to cuddling.

On the other hand there’s very little about our extremely rare ability to smell what we eat while we eat it. This retronasal olfaction gives us a palate unrivalled in the animal kingdom, capable of discriminating between nearly two trillion savours: an ability which has all kinds of implications for memory and behaviour.

Is this a problem? Not at all. For all that it’s stuffed with entertaining oddities, Sentient is not a book about oddities, and Higgins’s argument, though colourful, is rigorous and focused. Over 400 exhilarating pages, she leads us to adopt an entirely unfamiliar way of thinking about the senses.

Because their mechanics are fascinating and to some degree reproducible (the human eye is, mechanically speaking, very much like a camera), we grow up thinking of the senses as mechanical outputs.

Looking at our senses this way, however, is rather like studying fungi but only looking at the pretty fruiting bodies. The real magic of fungi is their networks. And the real magic of our senses is the more than 100 billion nerve cells in each human nervous system — greater, Higgins says, than the number of stars in the Milky Way.

And that vast complexity — adapting to reflect and organise the world, not just over evolutionary time but also over the course of an individual life — gives rise to all kinds of surprises. In some humans, the ability to see with sound. In vampire bats (who can sense the location of individual veins to sink their little fangs into), the ability to detect heat using receptors that in most other mammals are used to detect acute pain.

In De Anima, the ancient philosopher Aristotle really let the side down in listing just five senses. No one expects him to have spotted exotica like cuddlesomeness and where-to-face-when-you-pee. But what about pain? What about balance? What about proprioception?

Aristotle’s restrictive and mechanistic list left him, and generations after him, with little purchase on the subject. Insights have been hard to come by.

Aristotle himself took one look at the octopus and declared it stupid.

Let’s see him driving a car with eight legs.

Variation and brilliance

Reading Barnabas Calder’s Architecture: from prehistory to climate emergency for New Scientist, 9 June 2021

For most of us, buildings are functional. We live, work, and store things in them. They are as much a part of us as the nest is a part of a community of termites.

And were this all there was to say about buildings, architectural historian Barnabas Calder might have found his book easier to write. Calder wants to ask “how humanity’s access to energy has shaped the world’s buildings through history.” And had his account remained so straightforward, we might have ended up with an eye-opening mathematical description of the increase in the energy available for work — derived first from wood, charcoal and straw, then from coal, then from oil — and how it first transformed, and (because of global warming) now threatens, our civilisation.

And sure enough the book is full of startling statistics. (Fun fact: the charcoal equivalent of today’s cement industry would have to cover an area larger than Australia in coppiced timber.)

But of course, buildings aren’t simply functional. They’re aspirational acts of creative expression. However debased it might seem, the most ordinary structure is a work of a species of artist, and to get built at all it must be bankrolled by people who are (at least relatively) wealthy and powerful. This was as true of the buildings of Uruk (our first known city, founded in what is now Iraq around 3200 BCE) as it is of the buildings of Shenzhen (in 1980 a Chinese fishing hamlet, today a city of nearly 13 million people).

While the economics of the built environment are crucially important, then, they don’t really make sense without the sociology, and even the psychology, especially when it comes to “the mutual stirring of hysteria between architect and client” that gave us St Peter’s Basilica in the 16th century and Chengdu’s New Century Global Center (currently the world’s biggest building) in the 21st.

Calder knows this: “What different societies chose to do with [their] energy surplus has produced endless variation and brilliance,” he says. So if sometimes his account seems to wander, this is why: architecture itself is not a wholly economic activity, and certainly not a narrowly rational one.

At the end of an insightful and often impassioned journey through the history of buildings, Calder does his level best to explain how architecture can address the climate emergency. But his advice and encouragement vanish under the enormity of the crisis. The construction and running of buildings account for 39 per cent of all human greenhouse gas emissions. Concrete is the most used material on Earth after water. And while there is plenty of “sustainability” talk in the construction sector, Calder finds precious little sign of real change. We still demolish too often, and build too often, using unsustainable cement, glass and steel.

It may be that solutions are out there, but are simply invisible. The history of architecture is curiously incomplete, as Calder himself acknowledges, pointing out that “entire traditions of impressive tent-like architecture are known mainly from pictures rather than physical remnants.”

Learning to tread more lightly on the earth means exactly that: a wholly sustainable architecture wouldn’t necessarily show up in the archaeological record. The remains of pre-fossil fuel civilisations can, then, only offer us a partial guide to what our future architecture should look like.

Perhaps we should look to existing temporary structures — to refugee camps, perhaps. The idea may be distressing, but fashions change.

Calder’s long love-poem to buildings left me, rather paradoxically, thinking about the Mongols of the 13th century, for whom a walled city was a symbol of bondage and barbarism.

They would have no more settled in a fixed house than they would have submitted to slavery. And their empire, which covered 23 million square kilometres, demolished more architecture than it raised.

Nothing happens without a reason

Reading Journey to the Edge of Reason: The Life of Kurt Gödel by Stephen Budiansky for the Spectator, 29 May 2021

The 20th-century Austrian mathematician Kurt Gödel did his level best to live in the world as his philosophical hero Gottfried Wilhelm Leibniz imagined it: a place of pre-established harmony, whose patterns are accessible to reason.

It’s an optimistic world, and a theological one: a universe presided over by a God who does not play dice. It’s most decidedly not a 20th-century world, but “in any case”, as Gödel himself once commented, “there is no reason to trust blindly in the spirit of the time.”

His fellow mathematician Paul Erdős was appalled: “You became a mathematician so that people should study you,” he complained, “not that you should study Leibniz.” But Gödel always did prefer study to self-expression, and this is chiefly why we know so little about him, and why the spectacular deterioration of his final years — a phantasmagoric tale of imagined conspiracies, strange vapours and shadowy intruders, ending in his self-starvation in 1978 — has come to stand for the whole of his life.

“Nothing, Gödel believed, happened without a reason,” says Stephen Budiansky. “It was at once an affirmation of ultrarationalism, and a recipe for utter paranoia.”

You need hindsight to see the paranoia waiting to pounce. But the ultrarationalism — that was always tripping him up. There was something worryingly non-stick about him. He didn’t so much resist the spirit of the time as blunder about totally oblivious of it. He barely noticed the Anschluss, barely escaped Vienna as the Nazis assumed control, and, once ensconced at the Institute for Advanced Study at Princeton, barely credited that tragedy was even possible, or that, say, a friend might die in a concentration camp (it took three letters for his mother to convince him).

Many believed that he’d blundered, in a way typical of him, into marriage with his life-long partner, a foot-care specialist and divorcée called Adele Nimbursky. Perhaps he did. But Budiansky does a spirited job of defending this “uneducated but determined” woman against the sneers of snobs. If anyone kept Gödel rooted to the facts of living, it was Adele. She once stuck a concrete flamingo, painted pink and black, in a flower bed right outside his study window. All evidence suggests he adored it.

Idealistic and dysfunctional, Gödel became, in mathematician Jordan Ellenberg’s phrase, “the romantic’s favourite mathematician”, a reputation cemented by the fact that we knew hardly anything about him. Key personal correspondence was destroyed at his death, while his journals and notebooks — written in Gabelsberger script, a German shorthand that had fallen into disuse by the mid-1920s — resisted all comers until Cheryl Dawson, wife of the man tasked with sorting through Gödel’s mountain of posthumous papers, learned how to transcribe it all.

Biographer Stephen Budiansky is the first to try to give this pile of new information a human shape, and my guess is it hasn’t been easy.

Budiansky handles the mathematics very well, capturing the air of scientific optimism that held sway over intellectual Vienna and induced Germany’s leading mathematician David Hilbert to declare that “in mathematics there is *nothing* unknowable!”

Solving Hilbert’s four “Problems of Laying Foundations for Mathematics” of 1928 was supposed to secure the foundations of mathematics for good, and Gödel, a 22-year-old former physics student, solved one of them. Unfortunately for Hilbert and his disciples, however, Gödel also proved the insolubility of the other three. So much for the idea that all mathematics could be derived from the propositions of logic: Gödel demonstrated that logic itself was flawed.

This discovery didn’t worry Gödel nearly so much as it did his contemporaries. For Gödel, as Budiansky explains, “Mathematical objects and a priori truth was as real to him as anything the senses could directly perceive.” If our reason failed, well, that was no reason to throw away the world: we would always be able to recognise some truths through intuition that could never be established through computation. That, for Gödel, was the whole point of being human.

It’s one thing to be a Platonist in a world dead set against Platonism, or an idealist in a world that’s gone all-in with materialism. It’s quite another to see acts of sabotage in the errors of TV listings magazines, or political conspiracy in the suicide of King Ludwig II of Bavaria. The Elysian calm and concentration afforded Gödel after the second world war at the Institute for Advanced Study probably did him more harm than good. “Gödel is too alone,” his friend Oskar Morgenstern fretted: “he should be given teaching duties; at least an hour a week.”

In the end, though, neither his friendships nor his marriage nor that ridiculous flamingo could tether to the Earth a man who had always preferred to write for his desk drawer, and Budiansky, for all his tremendous efforts and exhaustive interrogations of Gödel’s times and places, acquaintances and offices, can only leave us, at the end, with an immeasurably enriched version of Gödel the wise child. It’s an undeniably distracting and reductive picture. But — and this is the trouble — it’s not wrong.

Snowflake science

Watching Noah Hutton’s documentary In Silico for New Scientist, 19 May 2021

Shortly after he earned a neuroscience degree, young filmmaker Noah Hutton fell into the orbit of Henry Markram, an Israeli neuroscientist based at the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland.

Markram models brains, axon by axon, dendrite by dendrite, in all their biological and chemical complexity. His working assumption is that the brain is an organ, and so a good enough computer model of the brain ought to reveal its workings and pathologies, just as “in silico” models of the kidneys, spleen, liver and heart have enriched our understanding of those organs.

Markram’s son Kai has autism, so Markram has skin in this game. Much as we might want to improve the condition of people like Kai, no one is going to dig about in a living human brain to see if there are handy switches we can throw. Markram hopes a computer model will offer an ethically acceptable route to understanding how brains go wrong.

So far, so reasonable. Only, in 2005, Henry Markram said he would build a working computer model of the human brain in 10 years.

Hutton has interviewed Markram, his colleagues and his critics, every year for well over a decade, as the project expanded and the deadline shifted. Markram’s vision transfixed purseholders across the European Union: in 2013 his Blue Brain Project won a billion Euros of public funding to create the Human Brain Project in Geneva.

And though his tenure did not last long, Markram is hardly the first founder to be wrested from the controls of his own institute, and he won’t be the last. There have been notable departures, but his Blue Brain Project endures, still working, still modelling: its in silico model of the mouse neocortex is astounding to look at.

Perhaps that is the problem. The Human Brain Project has become, says Hutton, a special-effects house, a shrine to touch-screens, curve-screens, headsets, but lacking any meaning to anything and anyone “outside this glass and steel building in Geneva”.

We’ve heard criticisms like this before. What about the way the Large Hadron Collider at CERN sucks funding from the rest of physics? You don’t have to scratch too deeply in academia to find a disgruntled junior researcher who’ll blame CERN for their failed grant application.

CERN, however, gets results. The Human Brain Project? Not so much.

The problem is philosophical. It is certainly within our power to model some organs. The brain, however, is not an organ in the usual sense. It is, by any engineering measure, furiously inefficient. Take a look: a spike in the dendrites releases this neurotransmitter, except when it releases that neurotransmitter, except when it does nothing at all. Signals follow this route, except when they follow that route, except when they vanish. Brains may look alike, and there’s surely some commonality in their working. At the level of the axon, however, every brain behaves like a beautiful and unique snowflake.

The Blue Brain Project’s models generate noise, just like regular brains. Someone talks vaguely about “emergent properties” — an intellectual Get Out of Jail Free card if ever there was one. But since no-one knows what this noise means in a real brain, there’s no earthly way to tell if the Project’s model is making the right kind of noise.

The Salk Institute’s Terrence Sejnowski reckons the whole caper’s a bad joke; if successful, Markram will only generate a simulation “every bit as mysterious as the brain itself”.

Hutton accompanies us down the yawning gulf between what Markram may reasonably achieve, and the fantasies he seems quite happy to stoke in order to maintain his funding. It’s a film made on a budget of nothing, over years, and it’s not pretty. But Hutton (whose very smart sf satire Lapsis came out in the US last month) makes up for all that with the sharpest of scripts. In Silico is a labour of love, rather more productive, I fear, than Markram’s own.

Waiting for the End of the End of the World

Watching the 2021 European Media Art Festival on-line for New Scientist, 19 May 2021

For over forty years, the European Media Art Festival in Osnabrück has offered attendees a glimpse of the best short films coming on-line and to festivals over the coming year. It’s been a reliable cultural barometer, too, revealing, through film, some of our deepest social anxieties and preoccupations. This year saw science fiction swallowing the festival whole.

It’s as though the genre were becoming, not just a valid way to talk about the present, but the only way.

This was the quite explicit message of the audiovisual presentation Planet City and the Return of Global Wilderness by London-trained, LA-based architect Liam Young, much of whose work is speculative — not to say downright science-fictional. Part of Young’s presentation was a retrospective of a career spent exploring global infrastructures, “an unevenly-distributed megastructure that hides in plain sight… slowly stitched together from stolen lands by planetary logistics.”

Forming a powerful contrast with his past travels — through container shipping, the garment supply chain, lithium mining and other real-world adventures — Planet City also featured a utopian future in which humanity sagely withdraws “into one hyper-dense metropolis housing the entire population of the Earth”.

It’s the impossibility of this utopia that’s Young’s point. Science fiction used to be full of such utopian possibilities. These days, however, it has become, Young says, just our favourite way of explaining to ourselves, over and over, the disasters engulfing us and our planet. The once hopeful genre of science fiction cedes ground to dystopia, leaving us “stranded in the long now… waiting for the end of the End of the World”.

We’ve confronted the End of the World before, of course. Marian Mayland’s film essay Michael Ironside and I weaves between three imaginary rooms, assembled from stills and short clips from three iconic science fiction films. The rooms are uninhabited, cluttered, uncanny, and cut together to create an imaginary habitation connected to the outside world via shafts and closet doors. War Games’s bedroom in a suburban family house (1983), Real Genius’s California campus dorm room (1985) and the bowels of Sea Quest DSV’s futuristic nuclear submarine (1993) fold into each other to create a poignant fictional 1990s childhood, capturing the effects of Cold War thinking on a generation of geeky male adolescents.

Mayland’s film, which won a German film critics’ award at the festival, is exactly the sort of work — moving between film and performance, document and experiment — that the festival has been championing for over forty years.

Other science-fictional experiments included Josh Weissbach’s A Landscape to be Invented, a collage of wobbly 16mm and Super 8 footage set to excerpts of audiobook sci-fi from the likes of Kim Stanley Robinson and Cixin Liu. It’s a kind of “how to” manual for terraforming a distant world, only this world is not verdant, but violet, not green but purple, as Weissbach passes his footage through a digital, faux-ultraviolet filter.

Zachary Epcar’s more obviously satirical The Canyon sees the calm pace of life in a sunny waterside housing estate turn increasingly strange, as the blissed-out, eavesdropped lines of the inhabitants (“Sometimes I come to in the glassware aisle, and I don’t know how I got there”) give way to the meaningless electronic gabble and vibration of phones, toothbrushes and keyfobs.

If this all sounds rather grim, rather unsmiling, even rather hopeless — well, I don’t think the selection, or even the works themselves, were to blame. I think Young is right and the problem lies in science fiction itself: that it’s ceased to be a playground, and has become instead a deadly serious way of explaining an increasingly interconnected and technological world. And that’s fine. That’s science fiction growing up.

But what the artist-filmmakers of EMAF have yet to find is some other way — less technocratic, perhaps, and more political, more spiritual — of imagining a better future.

Life at all costs

Reading The Next 500 Years by Chris Mason for New Scientist, 12 May 2021

Humanity’s long-term prospects don’t look good. If we don’t all kill each other with nuclear weapons, that overdue planet-killing asteroid can’t be too far off; anyway, the Sun itself will (eventually) die, obliterating all trace of life in our planetary system.

As if awareness of our own mortality hasn’t given us enough to fret about, we are also capable of imagining our own species’ extinction. Once we do that, though, are we not ethically bound to do something about it?

Cornell geneticist Chris Mason thinks so. “Engineering,” he writes, “is humanity’s innate duty, needed to ensure the survival of life.” And not just human life; Mason is out to ensure the cosmic future of all life, including species that are currently extinct.

Mason is not the first to think this way, but he arrives at a fascinating moment in the history of technology, when we may, after all, be able to avoid some previously unavoidable catastrophes.

Mason’s 500-year plan for our future involves reengineering human and other genomes so that we can tolerate the (to us) extreme environments of other worlds. Our ultimate goal, Mason says, should be to settle new solar systems.

Spreading humanity to the stars would hedge our bets nicely, only we currently lack the tools to survive the trip, never mind the stay. That’s where Mason comes in. He was principal investigator on NASA’s Twins Study, begun in 2015: a foundational investigation into the health of identical twins Scott Kelly and Mark Kelly during the 340 days Scott was in space and Mark was on Earth.

Mason explains how the Twins Study informed NASA’s burgeoning understanding of the human biome, how a programme once narrowly focused on human genetics now extends to embrace bacteria and viruses, and how new genetic engineering tools like CRISPR and its hopeful successors may enable us to address the risks of spaceflight (exposure to cosmic radiation is considered the most serious) and protect the health of settlers on the Moon, on Mars, and even, one day, on Saturn’s moon Titan.

Outside his specialism, Mason has some fun (a photosynthesizing human would need skin flaps the size of two tennis courts — so now you know), then flounders slightly, reaching for familiar narratives to hold his sprawling vision together. More informed readers may start to lose interest in the later chapters. The role of spectroscopy in the detection of exoplanets is certainly relevant, but in a work of this gargantuan scope, I wonder if it needed rehearsing. And will readers of a book like this really need reminding of Frank Drake’s equation (regarding the likelihood of extra-terrestrial civilisations)?
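(For anyone who does want the reminder: in its usual textbook form, which I give here rather than quoting Mason’s book, the Drake equation simply multiplies a chain of estimates,

N = R_* \cdot f_p \cdot n_e \cdot f_l \cdot f_i \cdot f_c \cdot L

where N is the number of detectable civilisations in our galaxy, R_* the rate of star formation, f_p the fraction of stars with planets, n_e the number of potentially habitable planets per such system, f_l, f_i and f_c the fractions of those on which life, intelligence and detectable technology arise, and L the average lifetime of a communicating civilisation.)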

Uneven as it is, Mason’s book is a genuine, timely, and very personable addition to a 1,000-year-old Western tradition, grounded in religious expectations and a quest for transcendence and salvation. Visionaries from Isaac Newton to Joseph Priestley to Russian space pioneer Konstantin Tsiolkovsky have espoused the very tenets that underpin Mason’s account: that the apocalypse is imminent; and that, by increasing human knowledge, we may recover the Paradise we enjoyed before the Flood.

Masonic beliefs follow the same pattern; significantly, many famous NASA astronauts, including John Glenn, Buzz Aldrin and Gordo Cooper, were Freemasons.

Mason puts a new layer of flesh on what have, so far, been some ardent but very sketchy dreams. And, though a proud child of his engineering culture, he is no dupe. He understands and explores all the major risks associated with genetic tinkering, and entertains all the most pertinent counter-arguments. He knows where 19th-century eugenics led. He knows the value of biological and neurological diversity. He’s not Frankenstein. His deepest hope is not that his plans are realised in any recognisable form, but that we continue to make plans, test them and remake them, for the sake of all life.