If this is Wednesday then this must be Thai red curry with prawns

Reading Dan Saladino’s Eating to Extinction for the Telegraph, 26 September 2021

Within five minutes of my desk: an Italian delicatessen, a Vietnamese pho house, a pizzeria, two Chinese, a Thai, and an Indian “with a contemporary twist” (don’t knock it till you’ve tried it). Can such bounty be extended over the Earth?

Yes, it can. It’s already happening. And in what amounts to a distillation of a life’s work writing about food, and sporting a few predictable limitations (he’s a journalist; he puts stories in logical order, imagining this makes an argument), Dan Saladino’s Eating to Extinction explains just what price we’ll pay for this extraordinary achievement, which promises not only to end world hunger by 2030 (a much-touted UN goal), but to make California rolls available everywhere from Kamchatka to Karachi.

The problem with my varied diet (if this is Wednesday then this must be Thai red curry with prawns) is that it’s also your varied diet, and your neighbour’s; it’s rapidly becoming the same varied diet across the whole world. You think your experience of world cuisine reflects global diversity? Humanity used to sustain itself (admittedly, not too well) on 6,000 species of plant. Now, for over three quarters of our calories, we gorge on just nine: rice, wheat and maize, potato, barley, palm oil and soy, sugar from beets and sugar from cane. The same narrowing can be found in our consumption of animals and seafood. What looks to us like the world on a plate is in fact the sum total of what’s available world-wide, now that we’ve learned to grow ever greater quantities of ever fewer foods.

Saladino is in the anecdote business; he travels the Earth to meet his pantheon of food heroes, each of whom is seen saving a rare food for our table – a red pea, a goaty cheese, a flat oyster. So far, so very Sunday supplement. Nor is there anything to snipe at in the adventures of, say, Woldemar Mammel who, searching in the attics of old farmhouses and in barns, rescued the apparently extinct Swabian “alb” lentil; nor in former chef Karlos Baca’s dedication to rehabilitating an almost wholly forgotten native American cuisine.
That said, it takes Saladino 450 pages (which is surely a good 100 pages too many) to explain why the Mammels and Bacas of this world are needed so desperately to save a food system that, far from breaking down, is feeding more and more food to more and more people.

The thing is, this system rests on two foundations: nitrogen fertiliser, and monocropping. The technology by which we fix nitrogen from the air by an industrial process is sustainable enough, or can be made so. Monocropping, on the other hand, was a dangerous strategy from the start.

In the 1910s and 1920s the Soviet agronomist Nikolai Vavilov championed the worldwide uptake of productive strains, with every plant a clone of its neighbour. How else, but by monocropping, do you feed the world? By the 1930s though, he was assembling the world’s first seed banks in a desperate effort to save the genetic diversity of our crops — species that monocropping was otherwise driving to extinction.

Preserving heritage strains matters. They were bred over thousands of years to resist all manner of local environmental pressures, from drought to deluge to disease. Letting them die out is the genetic equivalent of burning the library at Alexandria.

But seed banks can’t hold everything (there is, as Saladino remarks, no Svalbard seed vault for chickens) and are anyway a desperate measure. Saladino’s tale of how, come the Allied invasion, the holdings of Iraq’s national seed bank at Abu Ghraib were bundled off to Tel Hadya in Syria, only then to be frantically transferred to Lebanon, itself an increasingly unstable state, sounds a lot more Blade Runner 2049 than Agronomy 101.

Better to create a food system that, while not necessarily promoting rare foods (fancy some Faroese air-fermented sheep meat? — thought not) will at least not drive such foods to extinction.

The argument is a little woolly here, as what the Faroe islanders get up to with their sheep is unlikely to have much consequence for the world’s food supply. Letting a crucial drought-resistant strain of wheat go extinct in a forgotten corner of Afghanistan, on the other hand, could have unimaginably dire consequences for us in the future.
Saladino’s grail is a food system with enough diversity in it to adapt to environmental change and withstand the onslaught of disease.

Is such a future attainable? Only to a point. Some wild foods are done for already because the high prices they command incentivize their destruction. If you want some of Baca’s prized and pungent bear root, native to a corner of Colorado, you’d better buy it now (but please, please don’t).

Rare cultivated foods stand a better chance. The British Middle White pig is rarer than the Himalayan snow leopard, says Saladino, but the stocks are sustainable enough that it is now being bred for the table.

Attempting to encompass the Sixth Extinction on the one hand, and the antics of slow-foodies like Mammel and Baca on the other is a recipe for cognitive dissonance. In the end, though, Saladino succeeds in mapping the enormity of what human appetite has done to the planet.

Saladino says we need to preserve rare and forgotten foods, partly because they are part of our cultural heritage, but also, and more hard-headedly, so that we can study and understand them, crossing them with existing lines to shore up and enrich our dangerously over-simplified food system. He’s nostalgic for our lost food past (and who doesn’t miss apples that taste of apples?) but he doesn’t expect us to delete Deliveroo and spend our time grubbing around for roots and berries.

Unless, of course, it’s all too late. It would not take many wheat blights or avian flu outbreaks before slow food is all that’s left to eat.

 

The old heave-ho

The Story of Work: A New History of Humankind by Jan Lucassen, reviewed for the Telegraph 14 August 2021

“How,” asks Dutch social historian Jan Lucassen, “could people accept that the work of one person was rewarded less than that of another, that one might even be able to force the other to do certain work?”

The Story of Work is just that: a history of work (paid or otherwise, ritual or for a wage, in the home or out of it) from peasant farming in the first agrarian societies to gig-work in the post-Covid ruins of the high street, and spanning the historical experiences of working people on all five inhabited continents. The writing is, on the whole, much better than the sentence you just read, but no less exhausting. At worst, it put me in mind of the work of English social historian David Kynaston: super-precise prose stitched together to create an unreadably compacted narrative.

For all its abstractions, contractions and signposting, however, The Story of Work is full of colour, surprise and human warmth. What other social history do you know that writes off the Industrial Revolution as a net loss to music? “Just think of the noise from rattling machines that made it impossible to talk,” Lucassen writes, “in contrast to small workplaces or among larger troupes of workers who mollified work in the open air by singing shanties and other work songs.”

For 98 per cent of our species’ history we lived lives of reciprocal altruism in hunting-and-gathering clan groups. With the advent of farming and the formation of the first towns came surpluses and, for the first time, the feasibility of distributing resources unequally.

At first, conspicuous generosity ameliorated the unfairnesses. As the sixteenth-century French judge Étienne de la Boétie wrote: “theatres, games, plays, spectacles, marvellous beasts, medals, tableaux, and other such drugs were for the people of antiquity the allurements of serfdom, the price of their freedom, the tools of tyranny.” (The Story of Work is full of riches of this sort: strip off the narrative, and there’s a cracking miscellany still to enjoy.)

Lucassen diverges from the popular narrative (in which the invention of agriculture is the fount of all our ills) on several points. First, agricultural societies do not inevitably become marketplaces. Bantu-speaking agriculturalists spread across central, eastern and southern Africa between 3500 BCE and 500 CE, while maintaining perfect equality. “Agriculture and egalitarianism are compatible,” says Lucassen.

It’s not the crops, but the livestock, that are to blame for our expulsion from hunter-gatherer Eden. If notions of private property had to arise anywhere, they surely arose, Lucassen argues, among those innocent-looking shepherds and shepherdesses, whose waterholes may have been held in common but whose livestock most certainly were not. Animals were owned by individuals or households, whose success depended on them knowing every single individual in their herd.

Having dispatched the idea that agriculture made markets, Lucassen then demolishes the idea that markets made inequality. Inequality came first. It does not take much specialisation within a group before some acquire more resources than others. Managing this inequality doesn’t need anything so complex as a market. All it needs is an agreement. Lucassen turns to India, and the social ideologies that gave rise, from about 600 BC, to the Upanishads and the later commentaries on the Vedas: the evolving caste system, he says, is a textbook example of how human suffering can be explained to an entire culture’s satisfaction “without victims or perpetrators being able to or needing to change anything about the situation”.

Markets, by this light, become a way of subverting the iniquitous rhetorics cooked up by rulers and their priests. Why, then, have markets not ushered in a post-political Utopia? The problem is not to do with power. It’s to do with knowledge. Jobs used to be *hard*. They used to be intellectually demanding. Never mind the seven-year apprenticeships of Medieval Europe, what about the jobs a few are still alive to remember? Everything, from chipping slate out of a Welsh quarry to unloading a cargo boat while maintaining its trim, took what seem now to be unfeasible amounts of concentration, experience and skill.

Now, though — and even as they are getting fed rather more, and rather more fairly, than at any other time in world history — the global proletariat are being starved, by automation, of the meaning of their labour. The bloodlessness of this future is not a subject Lucassen spends a great many words on, but it informs his central and abiding worry, which is that slavery — a depressing constant in his deep history of labour — remains a constant threat and a strong future possibility. The logics of a slave economy run frighteningly close to the skin in many cultures: witness the wrinkle in the 13th Amendment of the US constitution that legalises the indentured servitude of (largely black) convicts, or the profits generated for the global garment industry by interned Uighurs in China. Automation, and its ugly sister machine surveillance, seem only to encourage such experiments in carceral capitalism.

But if workers of the world are to unite, around what banner should they gather? Lucassen identifies only two forms of social agreement that have ever reconciled us to the unfair distribution of reward. One is redistributive theocracy. “Think of classical Egypt and the pre-Columbian civilizations,” he writes, “but also of an ‘ideal state’ like the Soviet Union.”

The other is the welfare state. But while theocracies have been sustained for centuries or even millennia, the welfare state, thus far, has a shelf life of only a few decades, and is easily threatened.

Exhausted yet enlightened, any reader reaching the end of Lucassen’s marathon will understand that the problem of work runs far deeper than politics, and that the grail of a fair society will only come nearer if we pay attention to real experiences, and resist the lure of utopias.

Sod provenance

Is the digital revolution that Pixar began with Toy Story stifling art – or saving it? An article for the Telegraph, 24 July 2021

In 2011 the Westfield shopping mall in Stratford, East London, acquired a new public artwork: a digital waterfall by the Shoreditch-based Jason Bruges Studio. The liquid-crystal facets of the 12 metre high sculpture form a subtle semi-random flickering display, as though water were pouring down its sides. Depending on the shopper’s mood, this either slakes their visual appetite, or leaves them gasping for a glimpse of real rocks, real water, real life.

Over its ten-year life, Bruges’s piece has gone from being a comment about natural processes (so soothing, so various, so predictable!) to being a comment about digital images, a nagging reminder that underneath the apparent smoothness of our media lurks the jagged line and the stair-stepped edge, the grid, the square: the pixel, in other words.

We suspect that the digital world is grainier than the real, coarser, more constricted, and stubbornly rectilinear. But this is a prejudice, and one that’s neatly punctured by a new book by electrical engineer and Pixar co-founder Alvy Ray Smith, “A Biography of the Pixel”. This eccentric work traces the intellectual genealogy of Toy Story (Pixar’s first feature-length computer animation in 1995) over bump-maps and around occlusions, along traced rays and through endless samples, computations and transformations, back to the mathematics of the eighteenth century.

Smith’s whig history is a little hard to take — as though, say, Joseph Fourier’s efforts in 1822 to visualise how heat passed through solids were merely a way-station on the road to Buzz Lightyear’s calamitous launch from the banister rail — but it’s a superb shorthand in which to explain the science.

We can use Fourier’s mathematics to record an image as a series of waves. (Visual patterns, patterns of light and shade and movement, “can be represented by the voltage patterns in a machine,” Smith explains.) And we can recreate these waves, and the image they represent, with perfect fidelity, so long as we have a record of the points at the crests and troughs of each wave.

The locations of these high- and low-points, recorded as numerical coordinates, are pixels. (The little dots you see if you stare far too closely at your computer screen are not pixels; strictly speaking, they’re “display elements”.)
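
What Smith is describing is, in essence, the sampling theorem. Here is a minimal sketch of the idea in Python (mine, not the book’s; the wave and the numbers are invented for illustration): sample a wave often enough and you can rebuild the continuous signal from nothing but those sample values.

```python
# Illustrative only: a band-limited wave is sampled, then rebuilt from its samples
# by Whittaker-Shannon (sinc) interpolation. All parameters are made up.
import numpy as np

def reconstruct(samples, sample_rate, t):
    """Rebuild a band-limited signal at times t from evenly spaced samples."""
    n = np.arange(len(samples))
    # Each sample contributes one scaled, shifted sinc to the continuous signal.
    return np.sum(samples[:, None] * np.sinc(sample_rate * t[None, :] - n[:, None]), axis=0)

sample_rate = 8.0                                    # samples per second
n = np.arange(80)                                    # ten seconds' worth of samples
samples = np.sin(2 * np.pi * 3.0 * n / sample_rate)  # a 3 Hz wave, below the 4 Hz Nyquist limit

t = np.linspace(4.0, 6.0, 500)                       # evaluate away from the window edges
error = np.abs(reconstruct(samples, sample_rate, t) - np.sin(2 * np.pi * 3.0 * t)).max()
print(error)  # small; only truncating the infinite sinc sum keeps it from being exactly zero
```

The sample values play the role of Smith’s pixels; the smooth curve is what the display rebuilds from them.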

Digital media do not cut up the world into little squares. (Only crappy screens do that.) They don’t paint by numbers. On the contrary, they faithfully mimic patterns in the real world.

This leads Smith to his wonderfully upside-down-sounding catch-line: “Reality,” he says, “is just a convenient measure of complexity.”

Once pixels are converted to images on a screen, they can be used to create any world, rooted in any geometry, and obeying any physics. And yet these possibilities remain largely unexplored. Almost every computer animation is shot through a fictitious “camera lens”, faithfully recording a Euclidean landscape. Why are digital animations so conservative?

I think this is the wrong question: its assumptions are faulty. I think the ability to ape reality at such high fidelity creates compelling and radical possibilities of its own.

I discussed some of these possibilities with Paul Franklin, co-founder of the SFX company DNEG, who won Oscars for his work on Christopher Nolan’s sci-fi blockbusters Interstellar (2014) and Inception (2010). Franklin says the digital technologies appearing on film sets in the past decade — from lighter cameras and cooler lights to 3-D printed props and LED front-projection screens — are positively disrupting the way films are made. They are making film sets creative spaces once again, and giving the director and camera crew more opportunities for on-the-fly creative decision making. “We used a front-projection screen on the film Interstellar, so the actors could see what visual effects they were supposed to be responding to,” he remembers. “The actors loved being able to see the super-massive black hole they were supposed to be hurtling towards. Then we realised that we could capture an image of the rotating black hole’s disc reflecting in Matthew McConaughey’s helmet: now that’s not the sort of shot you plan.”

Now those projection screens are interactive. Franklin explains: “Say I’m looking down a big corridor. As I move the camera across the screen, instead of it flattening off and giving away the fact that it’s actually just a scenic backing, the corridor moves with the correct perspective, creating the illusion of a huge volume of space beyond the screen itself.”

Effects can be added to a shot in real-time, and in full view of cast and crew. More to the point, what the director sees through their viewfinder is what the audience gets. This encourages the sort of disciplined and creative filmmaking Méliès and Chaplin would recognise, and spells an end to the deplorable industry habit of kicking important creative decisions into the long grass of post-production.

What’s taking shape here isn’t a “good enough for TV” reality. This is a “good enough to reveal truths” reality. (Gargantua, the spinning black hole at Interstellar’s climax, was calculated and rendered so meticulously, it ended up in a paper for the journal Classical and Quantum Gravity.) In some settings, digital facsimile is becoming, literally, a replacement reality.

In 2012 the EU High Representative Baroness Ashton gave a physical facsimile of the burial chamber of Tutankhamun to the people of Egypt. The digital studio responsible for its creation, Factum Foundation, has been working in the Valley of the Kings since 2001, creating ever-more faithful copies of places that were never meant to be visited. They also print paintings (by Velázquez, by Murillo, by Raphael…) that are indistinguishable from the originals.

From the perspective of this burgeoning replacement reality, much that is currently considered radical in the art world appears no more than a frantic shoring-up of old ideas and exhausted values. A couple of days ago Damien Hirst launched The Currency, a physical set of dot paintings, the digitally tokenised images of which can be purchased, traded, and exchanged for the real paintings.

Eventually the purchaser has to choose whether to retain the token, or trade it in for the physical picture. They can’t own both. This, says Hirst, is supposed to challenge the concept of value through money and art. Every participant is confronted with their perception of value, and how it influences their decision.

But hang on: doesn’t money already do this? Isn’t this what money actually is?

It can be no accident that non-fungible tokens (NFTs), which make bits of the internet ownable, have emerged even as the same digital technologies are actually erasing the value of provenance in the real world. There is nothing sillier, or more dated-looking, than the Neues Museum’s scan of its iconic bust of Nefertiti, released free to the public after a complex three-year legal battle. It comes complete with a copyright licence in the bottom of the bust itself — a copyright claim to the scan of a 3,000-year-old sculpture created 3,000 miles away.

Digital technologies will not destroy art, but they will erode and ultimately extinguish the value of an artwork’s physical provenance. Once facsimiles become indistinguishable from originals, then originals will be considered mere “first editions”.

Of course literature has thrived for many centuries in such an environment; why should the same environment damage art? That would happen only if art had somehow already been reduced to a mere vehicle for financial speculation. As if!

 

How many holes has a straw?

Reading Jordan Ellenberg’s Shape for the Telegraph, 7 July 2021

“One can’t help feeling that, in those opening years of the 1900s, something was in the air,” writes mathematician Jordan Ellenberg.

It’s page 90, and he’s launching into the second act of his dramatic, complex history of geometry (think “History of the World in 100 Shapes”, some of them very screwy indeed).
For page after reassuring page, we’ve been introduced to symmetry, to topology, and to the kinds of notation that make sense of knotty-sounding questions like “how many holes has a straw?”

Now, though, the gloves are off, as Ellenberg records the fin de siècle’s “painful recognition of some unavoidable bubbling randomness at the very bottom of things.”
Normally when sentiments of this sort are trotted out, they’re there to introduce readers to the wild world of quantum mechanics (and, incidentally, we can expect a lot of that sort of thing in the next few years: there’s a centenary looming). Quantum’s got such a grip on our imagination, we tend to forget that it was the johnny-come-lately icing on an already fairly indigestible cake.

A good twenty years before physical reality was shown to be unreliable at small scales, mathematicians were pretzeling our very ideas of space. They had no choice: at the Louisiana Purchase Exposition in 1904, Henri Poincaré, by then the world’s most famous geometer, described how he was trying to keep reality stuck together in light of Maxwell’s famous equations of electromagnetism (Maxwell’s work absolutely refused to play nicely with space). In that talk, he came startlingly close to gazumping Einstein to a theory of relativity.
Also at the same exposition was Sir Ronald Ross, who had discovered that malaria was carried by the bite of the anopheles mosquito. He baffled and disappointed many with his presentation of an entirely mathematical model of disease transmission — the one we use today to predict, well, just about everything, from pandemics to political elections.
It’s hard to imagine two mathematical talks less alike than those of Poincaré and Ross. And yet they had something vital in common: both shook their audiences out of mere three-dimensional thinking.

And thank goodness for it: Ellenberg takes time to explain just how restrictive Euclidean thinking is. For Euclid, the first geometer, living in the 4th century BC, everything was geometry. When he multiplied two numbers, he thought of the result as the area of a rectangle. When he multiplied three numbers, he called the result a “solid”. Euclid’s geometric imagination gave us number theory; but tying mathematical values to physical experience locked him out of more or less everything else. Multiplying four numbers? Now how are you supposed to imagine that in three-dimensional space?

For the longest time, geometry seemed exhausted: a mental gym; sometimes a branch of rhetoric. (There’s a reason Lincoln’s Gettysburg Address characterises the United States as “dedicated to the proposition that all men are created equal”. A proposition is a Euclidean term, meaning a fact that follows logically from self-evident axioms.)

The more dimensions you add, however, the more capable and surprising geometry becomes. And this, thanks to runaway advances in our calculating ability, is why geometry has become our go-to manner of explanation for, well, everything. For games, for example: and extrapolating from games, for the sorts of algorithmic processes we saddle with that profoundly unhelpful label “artificial intelligence” (“artificial alternatives to intelligence” would be better).

All game-playing machines (from the chess player on my phone to DeepMind’s AlphaGo) share the same ghost, the “Markov chain”, formulated by Andrei Markov to map the probabilistic landscape generated by sequences of likely choices. An atheist before the Russian revolution, and treated with predictable shoddiness after it, Markov used his eponymous chain, rhetorically, to strangle religiose notions of free will in their cradle.
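
For readers who have never met one, here is a toy Markov chain in Python (my own illustration, nothing from Ellenberg’s book, with made-up states and probabilities): the next state is drawn from a table that depends only on the current state, with no memory of how you arrived there.

```python
# A toy Markov chain: the next state depends only on the present one.
# States and transition probabilities are invented for illustration.
import random

transitions = {
    "opening":   {"opening": 0.2, "midgame": 0.8},
    "midgame":   {"midgame": 0.6, "endgame": 0.4},
    "endgame":   {"endgame": 0.7, "checkmate": 0.3},
    "checkmate": {"checkmate": 1.0},
}

def walk(chain, state, steps):
    """Follow the chain, remembering only the current state at each step."""
    path = [state]
    for _ in range(steps):
        options = list(chain[state])
        weights = [chain[state][o] for o in options]
        state = random.choices(options, weights=weights)[0]
        path.append(state)
    return path

print(walk(transitions, "opening", 10))
```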

From isosceles triangles to free will is quite a leap, and by now you will surely have gathered that Shape is anything but a straight story. That’s the thing about mathematics: it does not advance; it proliferates. It’s the intellectual equivalent of Stephen Leacock’s Lord Ronald, who “flung himself upon his horse and rode madly off in all directions”.

Containing multitudes as he must, Ellenberg grows wide-eyed, his prose more and more energetic, as he moves from what geometry means to what geometry does in the modern world.

I mean no complaint (quite the contrary, actually) when I say that, by about two-thirds of the way in, Ellenberg comes to resemble his friend John Horton Conway. Of this game-playing, toy-building celebrity of the maths world, who died from COVID last year, Ellenberg writes, “He wasn’t being wilfully difficult; it was just the way his mind worked, more associative than deductive. You asked him something and he told you what your question reminded him of.”
This is why Ellenberg took the trouble to draw out a mind map at the start of his book. This and the index offer the interested reader (and who could possibly be left indifferent?) a whole new way (“more associative than deductive”) of re-reading the book. And believe me, you will want to. Writing with passion for a nonmathematical audience, Ellenberg is a popular educator at the top of his game.

Heading north

Reading Forecast by Joe Shute for the Telegraph, 28 June 2021

As a child, journalist Joe Shute came upon four Ladybird nature books from the early 1960s called What to Look For. They described “a world in perfect balance: weather, wildlife and people all living harmoniously as the seasons progress.”

Today, he writes, “the crisply defined seasons of my Ladybird series, neatly quartered like an apple, are these days a mush.”

Forecast is a book about phenology: the study of lifecycles, and how they are affected by season, location and other factors. Unlike behemothic “climate science”, phenology doesn’t issue big data sets or barnstorming visualisations. Its subject cannot be so easily metricised. How life responds to changes in the seasons, and changes in those changes, and changes in the rates of those changes, is a multidimensional study whose richness would be entirely lost if abstracted. Instead, phenology depends on countless parochial diaries describing changes on small patches of land.

Shute, who for more than a decade has used his own diary to fuel the “Weather Watch” column in the Daily Telegraph, can look back and see “where the weather is doing strange things and nature veering spectacularly off course.” Watching his garden coming prematurely to life in late winter, Shute is left “with a slightly sickly sensation… I started to sense not a seasonal cycle, but a spiral.” (130)

Take Shute’s diary together with countless others and tabulate the findings, and you will find that all life has started shifting northwards — insects at a rate of five metres a day, some dragonflies at between 17 and 28 metres a day.

How to write about this great migration? Immediately following several affecting and quite horrifying eye-witness scenes from the global refugee crisis, Shute writes: “The same climate crisis that is rendering swathes of the earth increasingly inhospitable and driving so many young people to their deaths, is causing a similar decline in migratory bird populations.”

I’m being unkind to make a point (in context the passage isn’t nearly so wince-making), but Shute’s not the first to discover it’s impossible to speak across all scales of the climate crisis at once.

Amitav Ghosh’s 2016 The Great Derangement is canonical here. Ghosh explained in painful detail why the traditional novel can’t handle global warming. Here, Shute seems to be proving the same point for non-fiction — or at least, for non-fiction of the meditative sort.

Why doesn’t Shute reach for abstractions? Why doesn’t he reach for climate science, and for the latest IPCC report? Why doesn’t he bloviate?

No, Shute’s made of sterner stuff: he would rather go down with his coracle, stitching together a planet on fire (11 wildfires raging in the Arctic circle in July 2018), human catastrophe, bird armageddon, his and his partner’s fertility problems, and the snore of a sleeping dormouse, across just 250 pages.

And the result? Forecast is a triumph of the most unnerving sort. By the end it’s clearly not Shute’s book that’s coming unstuck: it’s us. Shute begins his book asking “what happens to centuries of folklore, identity and memory when the very thing they subsist on is changing, perhaps for good”, and the answer he arrives at is horrific: folklore, identity and memory just vanish. There is no reverse gear to this thing.

I was delighted (if that is quite the word) to see Shute nailing the creeping unease I’ve felt every morning since 2014. That was the year the Met Office decided to give storms code-names. The reduction of our once rich, allusive weather vocabulary to “weather bombs” and “thunder snow”, as though weather events were best captured in “the sort of martial language usually preserved for the defence of the realm” is Shute’s most telling measure of how much, in this emergency, we have lost of ourselves.

Tally-ho!

Reading Sentient by Jackie Higgins for the Times, 19 June 2021

In May 1971 a young man from Portsmouth, Ian Waterman, lost all sense of his body. He wasn’t just numb. A person has a sense of the position of their body in space. In Waterman, that sense fell away, mysteriously and permanently.

Waterman, now in his seventies, has learned to operate his body rather as the rest of us operate a car. He has executive control over his movements, but no very intimate sense of what his flesh is up to.

What must this be like?

In a late chapter of her epic account of how the senses make sense, and exhibiting the kind of left-field thinking that makes for great TV documentaries, writer-director-producer Jackie Higgins goes looking for answers among the octopuses.

The octopus’s brain, you see, has no fine control over its arms. They pretty much do their own thing. They do, though, respond to the occasional high-level executive order. “Tally-ho!” cries the brain, and the arms gallop off, the brain in no more (or less) control of its transport system than a girl on a pony at a gymkhana.

Is being Ian Waterman anything like being an octopus? Attempts to imagine our way into other animals’ experiences — or other people’s experience, for that matter — have for a long time fallen under the shadow of an essay written in 1974 by American philosopher Thomas Nagel.

“What Is It Like to Be a Bat?” wasn’t about bats so much as about consciousness (continuity of). I can (with enough tequila inside me) imagine what it would be like for me to be a bat. But that’s not the same as knowing what it’s like for a bat to be a bat.

Nagel’s lesson in gloomy solipsism is all very well in philosophy. Applied to natural history, though — where even a vague notion of what a bat feels like might help a naturalist towards a moment of insight — it merely sticks the perfect in the way of the good. Every sparky natural history writer cocks a snook at poor Nagel whenever the opportunity arises.

Advances in media technology over the last twenty years (including, for birds, tiny monitor-stuffed backpacks) have deluged us in fine-grained information about how animals behave. We now have a much better idea of what (and how) they feel.

Now, you can take this sort of thing only so far. The mantis shrimp (not a shrimp; a scampi) has up to sixteen kinds of narrow-band photoreceptor, each tuned to a different wavelength of light! Humans only have three. Does this mean that the mantis shrimp enjoys better colour vision than we do?

Nope. The mantis shrimp is blind to colour, in the human sense of the word, perceiving only wavelengths. The human brain, meanwhile, by processing the relative intensities of those three wavelengths of colour vision, distinguishes between millions of colours. (Some women have four colour receptors, which is why you should never argue with a woman about which curtains match the sofa.)

What about the star-nosed mole, whose octopus-like head is a mass of feelers? (Relax: it’s otherwise quite cute, and only about 2cm long.) Its weird nose is sensitive: it gathers the same amount of information about what it touches, as a regular rodent’s eye gathers about what it sees. This makes the star-nosed mole the fastest hunter we know of, identifying and capturing prey (worms) in literally less than an eyeblink.

What can such a creature tell us about our own senses? A fair bit, actually. That nose is so sensitive, the mole’s visual cortex is used to process the information. It literally sees through its nose.

But that turns out not to be so very strange: Braille readers, for example, really do read through their fingertips, harnessing their visual cortex to the task. One veteran researcher, Paul Bach-y-Rita, built prosthetic eyes from the 1970s onwards, using glorified pin-art machines to (literally) impress the visual world upon his volunteers’ backs, chests, even their tongues.

From touch to sound: in the course of learning about bats, I learned here that blind people have been using echolocation for years, especially when it rains (more auditory information, you see); researchers are only now getting a measure of their abilities.

How many senses are there that we might not have noticed? Over thirty, it seems, all served by dedicated receptors, and many of them elude our consciousness entirely. (We may even share the magnetic sense enjoyed by migrating birds! But don’t get too excited. Most mammals seem to have this sense. Your pet dog almost always pees with its head facing magnetic north.)

This embarrassment of riches leaves Higgins having to decide what to include and what to leave out. There’s a cracking chapter here on how animals sense time, and some exciting details about a sense of touch common to social mammals: one that responds specifically to cuddling.

On the other hand there’s very little about our extremely rare ability to smell what we eat while we eat it. This retronasal olfaction gives us a palate unrivalled in the animal kingdom, capable of discriminating between nearly two trillion savours: an ability which has all kinds of implications for memory and behaviour.

Is this a problem? Not at all. For all that it’s stuffed with entertaining oddities, Sentient is not a book about oddities, and Higgins’s argument, though colourful, is rigorous and focused. Over 400 exhilarating pages, she leads us to adopt an entirely unfamiliar way of thinking about the senses.

Because their mechanics are fascinating and to some degree reproducible (the human eye is, mechanically speaking, very much like a camera) we grow up thinking of the senses as mechanical outputs.

Looking at our senses this way, however, is rather like studying fungi but only looking at the pretty fruiting bodies. The real magic of fungi is their networks. And the real magic of our senses is the more than 100 billion nerve cells in each human nervous system — greater, Higgins says, than the number of stars in the Milky Way.

And that vast complexity — adapting to reflect and organise the world, not just over evolutionary time but also over the course of an individual life — gives rise to all kinds of surprises. In some humans, the ability to see with sound. In vampire bats (who can sense the location of individual veins to sink their little fangs into), the ability to detect heat using receptors that in most other mammals are used to detect acute pain.

In De Anima, the ancient philosopher Aristotle really let the side down in listing just five senses. No one expects him to have spotted exotica like cuddlesomeness and where-to-face-when-you-pee. But what about pain? What about balance? What about proprioception?

Aristotle’s restrictive and mechanistic list left him, and generations after him, with little purchase on the subject. Insights have been hard to come by.

Aristotle himself took one look at the octopus and declared it stupid.

Let’s see him driving a car with eight legs.

We may never have a pandemic again

Reading The Code Breaker, Walter Isaacson’s biography of Jennifer Doudna, for the Telegraph, 27 March 2021

In a co-written account of her work published in 2017, biochemist Jennifer Doudna describes creating a system that can cut and paste genetic information as simply as a word processor can manipulate text. Having conceived a technology that promises to predict, correct and even enhance a person’s genetic destiny, she says, not without cause, “I began to feel a bit like Doctor Frankenstein.”

When it comes to breakthroughs in biology, references to Mary Shelley are irresistible. One of Walter Isaacson’s minor triumphs, in a book not short of major triumphs, is that, over 500 pages, he mentions that over-quoted, under-read novel less than half a dozen times. In biotechnology circles, this is probably a record.

We explain science by telling stories of discovery. It’s a way of unpacking complicated ideas in narrative form. It’s not really history, or if it is, it’s whig history, defined by a young Herbert Butterfield in 1931 as “the tendency… to praise revolutions provided they have been successful, to emphasise certain principles of progress in the past and to produce a story which is the ratification if not the glorification of the present.”

To explain the science, you falsify the history.
So all discoverers and inventors are heroes on the Promethean (or Frankensteinian) model, working in isolation, and taking the whole weight of the world on their shoulders!

Alas, the reverse is also true. Telling the true history of discovery makes the science very difficult to unpack. And though Walter Isaacson, whose many achievements include a spell as CEO of the Aspen Institute, clearly knows his science, his account of the most significant biological breakthrough since understanding the structure of DNA is not the very best account of CRISPR out there. His folksy cajoling — inviting us to celebrate “wily bacteria” and the “plucky little molecule” RNA — suggests exasperation. Explaining CRISPR is *hard*.

The Code Breaker excels precisely where, having read Isaacson’s 2011 biography of Steve Jobs, you might expect it to excel. Isaacson understands that all institutions are political. Every institutional activity — be it blue-sky research into the genome, or the design of a consumer product — is a species of political action.

The politics of science is uniquely challenging, because its standards of honesty, precision and rigour stretch the capabilities of language itself. Again and again, Doudna’s relationships with rivals, colleagues, mentors and critics are seen to hang on fine threads of contested interpretation. We see that Doudna’s fiercest rivalry, with Feng Zhang of the Broad Institute of MIT and Harvard, was conducted in an entirely ethical manner — and yet we see both of them stumbling away, bloodied.

Isaacson’s style of biography — already evident in his appreciations of Einstein and Franklin and Leonardo — can be dubbed “qualified hagiography”. He’s trying to hit a balance between the kind of whig history that will make complex materials accessible, and the kind of account that will stand the inspection of academic historians. His heroes’ flaws are explored, but their heroism is upheld. It’s a structural device, and pick at it however you want, it makes for a rattlingly good story.

Jennifer Doudna was born in 1964 and grew up on Big Island, Hawaii. Inspired by an old paperback copy of The Double Helix by DNA pioneer James Watson, she devoted her life to understanding the chemistry of living things. Over her career she championed DNA’s smaller, more active cousin RNA, which brought to her notice a remarkable mechanism, developed by single-celled organisms in their 3.1-billion-year war with viruses. Each of these cells used RNA to build its very own immune system.

Understanding that mechanism was Doudna’s triumph, shared with her colleague Emmanuelle Charpentier; both conspicuously deserved the Nobel prize awarded them last year.

Showing that this mechanism worked in cells like our own, though, would change everything, including our species’ relationship with its own evolution. This technology has the power to eradicate both disease (good) and ordinary human variety (really not so good at all).

In 2012, the year of the great race, Doudna’s Berkeley lab knew nothing like enough about working with human cells. Zhang’s lab knew nothing like enough about the biochemical wrinkles that drove CRISPR. Their rivalrous decision not to pool CRISPR-Cas9 intellectual property would pave the way for an epic patent battle.

COVID-19 has changed all that, ushering in an extraordinary cultural shift. Led by Doudna and Zhang, last year most academic labs declared that their discoveries would be made available to anyone fighting the virus. New on-line forums have blossomed, breaking the stranglehold of expensive paywall-protected journals.

Doudna’s lab and others have developed home testing kits for COVID-19 that have a potential impact beyond this one fight, “bringing biology into the home,” as Isaacson writes, “the way that personal computers in the 1970s brought digital products and services… into people’s daily lives and consciousness.”

Meanwhile genetic vaccines built on the same RNA science — like the mRNA vaccines developed for COVID-19 by Moderna and BioNTech/Pfizer — portend a sudden shift of the evolutionary balance between human beings and viruses. Moderna’s chair Noubar Afeyan is punchy about the prospects: “We may never have a pandemic again,” he says.

The Code Breaker catches us at an extraordinary moment. Isaacson argues with sincerity and conviction that, blooded by this pandemic, we should now grasp the nettle, make a stab at the hard ethical questions, and apply Doudna’s Promethean knowledge, now, and everywhere, to help people. Given the growing likelihood of pandemics, we may not have a choice.

 

The seeds of indisposition

Reading Ageless by Andrew Steele for the Telegraph, 20 December 2020

The first successful blood transfusions were performed in the 1660s, by the English physician Richard Lower, on dogs. The idea, for some while, was not that transfusions would save lives, but that they might extend them.

Turns out they did. The Philosophical Transactions of the Royal Society mentions an experiment in which “an old mongrel curr, all over-run with the mainge” was transfused with about fifteen ounces of blood from a young spaniel and was “perfectly cured.”

Aleksandr Bogdanov, who once vied with Vladimir Lenin for control of the Bolsheviks (before retiring to write science fiction novels) brought blood transfusion to Russia, and hoped to rejuvenate various exhausted colleagues (including Stalin) by the method. On 24 March 1928 he mutually transfused blood with a 21-year-old student, suffered a massive transfusion reaction, and died, two weeks later, at the age of fifty-four.

Bogdanov’s theory was stronger than his practice. His essay on ageing speaks a lot of sense. “Partial methods against it are only palliative,” he wrote, “they merely address individual symptoms, but do not help fight the underlying illness itself.” For Bogdanov, ageing is an illness — unavoidable, universal, but no more “normal” or “natural” than any other illness. By that logic, ageing should be no less invulnerable to human ingenuity and science. It should, in theory, be curable.

Andrew Steele agrees. Steele is an Oxford physicist who switched to computational biology, drawn by the field of biogerontology — or the search for a cure for ageing. “Treating ageing itself rather than individual diseases would be transformative,” he writes, and the data he brings to this argument is quite shocking. It turns out that curing cancer would add less than three years to a person’s typical life expectancy, and curing heart disease, barely two, as there are plenty of other diseases waiting in the wings.

Is ageing, then, simply a statistical inevitability — a case of there always being something out there that’s going to get us?

Well, no. In 1825 Benjamin Gompertz, a British mathematician, explained that there are two distinct drivers of human mortality. There are extrinsic events, such as injuries or diseases. But there’s also an internal deterioration — what he called “the seeds of indisposition”.
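
Gompertz’s split is easy to make concrete. Below is a toy sketch in Python (my own invented parameters, not Steele’s or Gompertz’s), using the textbook Gompertz-Makeham form of the death rate: a constant term for extrinsic mishaps plus an exponentially growing term for the “seeds of indisposition”. It also shows, in passing, why trimming even a fifth off the intrinsic term (a crude stand-in for curing one disease of old age) buys only a couple of extra years.

```python
# Toy illustration of Gompertz's two drivers (all parameters invented):
# hazard(x) = A (extrinsic mishaps) + B * exp(g * x) (intrinsic deterioration).
import numpy as np

def life_expectancy(hazard, ages):
    """Expected lifespan at birth: integrate survival, S(x) = exp(-cumulative hazard)."""
    dx = ages[1] - ages[0]
    survival = np.exp(-np.cumsum(hazard) * dx)
    return survival.sum() * dx

ages = np.arange(0, 120, 0.1)
A, B, g = 0.0005, 0.00003, 0.095                  # made-up Gompertz-Makeham parameters
baseline = A + B * np.exp(g * ages)
print(round(life_expectancy(baseline, ages), 1))  # a plausible lifespan, in the late seventies

# "Cure" one disease of old age by trimming 20% off the intrinsic term:
treated = A + 0.8 * B * np.exp(g * ages)
print(round(life_expectancy(treated, ages), 1))   # only a couple of years longer
```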

It’s Steele’s job here to explain why we should treat those “seeds” as a disease, rather than a divinely determined limit. In the course of that explanation Steele gives us, in effect, a tour of the whole of human biology. It’s an exhilarating journey, but by no means always a pretty one: a tale of senescent cells, misfolded proteins, intracellular waste and reactive metals. Readers of advanced years, wondering why their skin is turning yellow, will learn much more here than they bargained for.

Ageing isn’t evolutionarily useful; but because it comes after our breeding period, evolution just hasn’t got the power to do anything about it. Mutations whose negative effects occur late in our lives accumulate in the gene pool. Worse, if they had a positive effect on our lives early on, then they will be actively selected for. Ageing, in other words, is something we inherit.

It’s all very well conceptualising old age as one disease. But if your disease amounts to “what happens to a human body when 525 million years of evolution stop working”, then you’re reduced to curing everything that can possibly go wrong, with every system, at once. Ageing, it turns out, is just thousands upon thousands of “individual symptoms”, arriving all at once.

Steele believes the more we know about human biology, the more likely it is we’ll find systemic ways to treat these multiple symptoms. The challenge is huge, but the advances, as Steele describes them, are real and rapid. If, for example, we can persuade senescent cells to die, then we can shed the toxic biochemical garbage they accumulate, and enjoy once more all the benefits of (among other things) young blood. This is no fond hope: human trials of senolytics started in 2018.

Steele is a superb guide to the wilder fringes of real medicine. He pretends to nothing else, and nothing more. So whether you find Ageless an incredibly focused account, or just an incredibly narrow one, will come down, in the end, to personal taste.

Steele shows us what happens to us biologically as we get older — which of course leaves a lot of blank canvas for the thoughtful reader to fill. Steele’s forebears in this (frankly, not too edifying) genre have all too often claimed that there are no other issues to tackle. In the 1930s the surgeon Alexis Carrel declared that “Scientific civilization has destroyed the world of the soul… Only the strength of youth gives the power to satisfy physiological appetites and to conquer the outer world”.

Charming.

And he wasn’t the only one. Books like Successful Aging (Rowe & Kahn, 1998) and How and Why We Age (Hayflick, 1996) aspire to a sort of overweening authority, not by answering hard questions about mortality, long life and ageing, but merely by denying a gerontological role for anyone outside their narrow specialism: philosophers, historians, theologians, ethicists, poets — all are shown the door.

Steele is much more sensible. He simply sticks to his subject. To the extent that he expresses a view, I am confident that he understands that ageing is an experience to be lived meaningfully and fully, as well as a fascinating medical problem to be solved.

Steele’s vision is very tightly controlled: he wants us to achieve “negligible senescence”, in which, as we grow older, we suffer no obvious impairments. What he’s after is a risk of death that stays constant no matter how old we get. This sounds fanciful, but it does happen in nature. Giant tortoises succumb to statistical inevitability, not decrepitude.

I have a fairly entrenched problem with books that treat ageing as a merely medical phenomenon. But I heartily recommend this one. It’s modest in scope, and generous in detail. It’s an honest and optimistic contribution to a field that tips very easily indeed into Tony Stark-style boosterism.

Life expectancy in the developed world has doubled from 40 in the 1800s to over 80 today. But it is in our nature to be always craving for more. One colourful outfit called Ambrosia is offering anyone over 35 the opportunity to receive a litre of youthful blood plasma for $8000. Steele has some fun with this: “At the time of writing,” he tells us, “a promotional offer also allows you to get two for $12000 — buy one, get one half-price.”

A fanciful belonging

Reading The Official History of Britain: Our story in numbers as told by the Office for National Statistics by Boris Starling with David Bradbury for The Telegraph, 18 October 2020

Next year’s national census may be our last. Opinions are being sought as to whether it makes sense, any longer, for the nation to keep taking its own temperature every ten years. Discussions will begin in 2023. Our betters may conclude that the whole rigmarole is outdated, and that its findings can be gleaned more cheaply and efficiently by other methods.

How the UK’s national census was established, what it achieved, and what it will mean if it’s abandoned, is the subject of The Official History of Britain — a grand title for what is, to be honest, a rather messy book, its facts and figures slathered in weak and irrelevant humour, most of it to do with football, I suppose as an intellectual sugar lump for the proles.

Such condescension is archetypally British; and so too is the gimcrack team assembled to write this book. There is something irresistibly Dad’s Army about the image of David Bradbury, an old hand at the Office for National Statistics, comparing dad jokes with novelist Boris Starling, creator of Messiah’s DCI Red Metcalfe, who was played on the telly by Ken Stott.

The charm of the whole enterprise is undeniable. Within these pages you will discover, among other tidbits, the difference between critters and spraggers, whitsters and oliver men. Such were the occupations introduced into the Standard Classification of 1881. (Recent additions include YouTuber and dog sitter.) Nostalgia and melancholy come to the fore when the authors say a fond farewell to John and Margaret — names, deeply unfashionable now, that were pretty much compulsory for babies born between 1914 and 1964. But there’s rigour, too; I recommend the authors’ highly illuminating analysis of today’s gender pay gap.

Sometimes the authors show us up for the grumpy tabloid zombies we really are. Apparently a sizeable sample of us, quizzed in 2014, opined that 15 per cent of all our girls under sixteen were pregnant. The lack of mathematical nous here is as disheartening as the misanthropy. The actual figure was a still worryingly high 0.5 per cent, or one in 200 girls. A 10-year Teenage Pregnancy Strategy was created to tackle the problem, and the figure for 2018 — 16.8 conceptions per 1000 women aged between 15 and 17 — is the lowest since records began.

This is why census records are important: they inform enlightened and effective government action. The statistician John Rickman said as much in a paper written in 1796, but his campaign for a national census only really caught on two years later, when the clergyman Thomas Malthus scared the living daylights out of everyone with his “Essay on the Principle of Population”. Three years later, ministers rattled by Malthus’s catalogue of checks on the population of primitive societies — war, pestilence, famine, and the rest — peeked through their fingers at the runaway population numbers for 1801.

The population of England then was the same as the population of Greater London now. The population of Scotland was almost exactly the current population of metropolitan Glasgow.

Better to have called it “The Official History of Britons”. Chapter by chapter, the authors lead us (wisely, if not too well) from Birth, through School, into Work and thence down the maw of its near neighbour, Death, reflecting all the while on what a difference two hundred years have made to the character of each life stage.

The character of government has changed, too. Rickman wanted a census because he and his parliamentary colleagues had almost no useful data on the population they were supposed to serve. The job of the ONS now, the writers point out, “is to try to make sure that policymakers and citizens can know at least as much about their populations and economies as the internet behemoths.”

It’s true: a picture of the state of the nation taken every ten years just doesn’t provide the granularity that could be fetched, more cheaply and more efficiently, from other sources: “smaller surveys, Ordnance Survey data, GP registrations, driving licence details…”

But this too is true: near where I live there is a pedestrian crossing. There is a button I can push, to change the lights, to let me cross the road. I know that in daylight hours, the button is a dummy, that the lights are on a timer, set in some central office, to smooth the traffic flow. Still, I press that button. I like that button. I appreciate having my agency acknowledged, even in a notional, fanciful way.

Next year, 2021, I will tell the census who and what I am. It’s my duty as a citizen, and also my right, to answer how I will. If, in 2031, the state decides it does not need to ask me who I am, then my idea of myself as a citizen, notional as it is, fanciful as it is, will be impoverished.

Thoughts from a Camden bench

Reading The Lonely Century by Noreena Hertz for the Telegraph, 26 September 2020

Economist Noreena Hertz’s new book, about our increasing loneliness in a society geared to mutual exploitation, is explosive stuff. I would guess it was commissioned a while ago, then retrofitted for the Covid-19 pandemic — though I defy you to find any unsightly welding marks. Hertz is too good a writer for that, and her idea too timely, too urgent, and too closely argued to be upstaged by a mere crisis.

Loneliness is bad for our physical and mental health. Lack of social interaction makes us both labile and aggressive, a sucker for any Dionysian movement that offers us even a shred of belonging. These lines of argument precede Covid-19 by years, and have been used to explain everything from changing dating patterns among the young to Donald Trump’s win in 2016. But, goodness, what a highly salted dish they make today, now that we’re consigned to our homes and free association is curbed by law.

Under lockdown, we’re now even more reliant on the internet to maintain our working and personal identities. Here again, our immediate experiences sharpen Hertz’s carefully thought-out arguments. Social media is making us unhappy. It’s eroding our civility. It’s driving up rates of suicide. And so on — the arguments here are well-rehearsed, though Hertz’s synthesis is certainly compelling. “Connecting the world may be their goal,” she writes, thinking of the likes of Instagram and TikTok, “but it seems that if in the process connections become shallower, crueller or increasingly distorted, so be it.”

Now that we spend so much time indoors, we’re becoming ever more aware of how our outdoor civic space has been dismantled. And here is Hertz once again, waiting for us outside the shuttered wreck of an abandoned library. Actually libraries are the least of it: these days we don’t even get to exchange a friendly word with the supermarket staff who used to check out our shopping. And heaven help us if, on our way back to house arrest, we try to take the weight off our feet on one of those notorious municipal “Camden benches” — tiered and slanted blocks of concrete designed, not to allow public rest of any sort, but to keep “loiterers” moving.

Having led us, Virgil-like, through the circles of virtual and IRL Hell, Hertz ushers us into the purgatory she calls The Loneliness Economy (the capital letters are hers), and describes some of the ways the private sector has tried to address our cultural atomisation. Consider, for example, the way WeWork’s designers built the corridors and stairways in their co-working spaces deliberately narrow, so that users would have to make eye contact with each other. Such strategies are a bit “nudgy” for my taste, but sooner them than that bloody Camden bench.

The trouble with commercialising solutions for loneliness should be obvious: the lonelier we are, the more money this sector will make. So the underlying social drivers of loneliness will be ignored, and only the symptoms will be addressed. “They want to sell the benefit of working in close proximity with others, but with none of the social buy-in, the *hard work* that community requires,” Hertz writes, and wonders whether the people joining many of these new, shiny, commercialised communities “have the time or lifestyles that community-building demands.”

Bringing such material to life necessarily means cherry-picking your examples. An impatient or hostile reader might take exception to the graduate student who spent so long curating her “Jen Goes Ziplining” Instagram post that she never actually went ziplining; also to the well-paid executive who lives out of his car and spends all his money on platonic cuddling services. The world is never short of foolish and broken people, and two swallows do not make a summer.

Still, I can’t see how Hertz’s account is harmed by such vividly rendered first-person research. And readers keen to see Hertz’s working have 130 pages of notes to work from.

More seriously, The Lonely Century suffers from the same limitation as Rutger Bregman’s recent (and excellent) Humankind, about how people are basically good. Let’s give each other a break, says Bregman. Let’s give each other the time of day, says Hertz. But neither author (and it’s not for want of trying) can muster much evidence to suggest the world is turning in the direction they would favour. Bregman bangs on about a school that’s had its funding cut; Hertz about a community cafe that’s closed after running out of money. There are other stories, lots of them, but they are (as Hertz herself concedes at one point) a bit “happy-clappy”.

One of Hertz’s many pre-publication champions calls her book “inspiring”. All it inspired in me, I’m afraid, was terror. The world is full of socially deprogrammed zombies. The Technological Singularity is here, the robots are us, and our open plan office-bound Slack messages constitute, at a global level, some great vegetative Overmind.

Hertz doesn’t go this far. But I do. Subtitled “coming together in a world that’s pulling apart”, The Lonely Century confirmed me in my internal exile, and had me bolting the door to the world even more firmly. I don’t even care that I am the problem Hertz is working so desperately hard to fix. I’m not coming out until it’s all over.