Who’s left in the glen?

Watching Emily Munro’s Living Proof: A climate story for New Scientist, 6 October 2021

Most environmental documentaries concentrate on the environment. Most films about the climate crisis focus on people who are addressing the crisis.

Assembled and edited by Emily Munro, a curator of the moving image at the National Library of Scotland, Living Proof is different. It’s a film about demobbed soldiers and gamekeepers, architects and miners and American expats. It’s about working people and their employers, about people whose day-to-day actions have contributed to the industrialisation of Scotland, its export of materials and methods (particularly in the field of offshore oil and gas), and its not coincidental environmental footprint.

Only towards the end of Munro’s film do we meet protesters of any sort. They’re deploring the construction of a nuclear power plant at Torness, 33 miles east of Edinburgh. Even here, Munro is less interested in the protest itself than in one impassioned, closely argued speech which, in the context of the film, completes an argument begun in Munro’s first reel (via a public information film from the mid-1940s) about the country’s political economy.

Assembled from propaganda and public information films, promotional videos and industrial reports, Living Proof is an archival history of what Scotland has told itself about itself, and how those stories, ambitions and visions have shaped the landscape, and affected the global environment.

Munro is in thrall to the changing Scottish industrial landscape, from its herring fisheries to its dams, from its slums and derelict mine-heads to the high modernism of its motorways and strip mills. Her vision is compelling and seductive. Living Proof is also — and this is more important — a film which respects its subjects’ changing aspirations. It tells the story of a poor, relatively undeveloped nation waking up to itself and trying to do right by its people.

It will come as no surprise, as Glasgow prepares to host the COP26 global climate conference, to hear that the consequences of those efforts have been anything but an unalloyed good. Powered by offshore oil and gas, home to Polaris nuclear missiles, and a redundancy-haunted grave for a dozen heavy industries (from coal-mining to ship-building to steel manufacture), Scotland is no-one’s idea of a green nation.

As Munro’s film shows, however, the environment was always a central plank of whatever argument campaigners, governments and developers made at the time. The idea that the Scots (and the rest of us) have only now “woken up to the environment” is a pernicious nonsense.

It’s simply that our idea of the environment has evolved.

In the 1940s, the spread of bog water, as the Highlands depopulated, was considered a looming environmental disaster, taking good land out of use. In the 1950s, automation promised to pull working people out of poverty, disease and pollution. In the 1960s, rapid communications were to serve an industrial culture that would tread ever more lightly over the mine-ravaged earth.

It’s with the advent of nuclear power, and that powerful speech on the beach at Torness, that the chickens come home to roost. That new nuclear plant is only going to employ around 500 people! What will happen to the region then?

This, of course, is where we came in: to a vision of a nation that, if it cannot afford its own people, will go to rack and ruin, with (to quote that 1943 information film) “only the old people and a few children left in the glen”.

Living Proof critiques an economic system that, whatever its promises, cannot help but denude the earth of its resources, and pauperise its people. It’s all the more powerful for being articulated through real things: schools and roads and pharmaceuticals, earth movers and oil rigs, washing machines and gas boilers.

Reasonable aspirations have done unreasonable harm to the planet. That’s the real crisis elucidated by Living Proof. It’s a point too easily lost in all the shouting. And it’s rarely been made so well.

“Grotesque, awkward, and disagreeable”

Reading Stanislaw Lem’s Dialogues for the Times, 5 October 2021

Some writers follow you through life. Some writers follow you beyond the grave. I was seven when Andrei Tarkovsky filmed Lem’s satirical sci-fi novel Solaris, thirty-seven when Steven Soderbergh’s very different (and hugely underrated) Solaris came out, forty when Lem died. Since then, a whole other Stanislaw Lem has arisen, reflected in philosophical work that, while widely available elsewhere, had to wait half a century or more for an English translation. In life I have nursed many regrets: that I didn’t learn Polish is not the least of them.

The point about Lem is that he writes about the future, predicting the way humanity’s inveterate tinkering will enable, pervert and frustrate its ordinary wants and desires. This isn’t “the future of technology” or “the future of the western world” or “the future of the environment”. It’s neither “the future as the author would like it to be”, nor “the future if the present moment outstayed its welcome”. Lem knows a frightening amount of science, and even more about technology, but what really matters is what he knows about people. His writing is not just surprisingly prescient; it’s timeless.

Dialogues is about cybernetics, the science of systems. A system is any material arrangement that responds to environmental feedback. A steam engine is a mere mechanism, until you add the governor that controls its internal pressure. Then it becomes a system. When Lem was writing, systems thinking was meant to transform everything, mediating between the physical sciences and the humanities to usher in a technocratic Utopia.
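(For the code-minded, the distinction is small enough to fit in a few lines. The sketch below is my own illustration, not Lem’s, and its numbers are arbitrary: a boiler heated open-loop is a mere mechanism; add a governor that responds to its own pressure and it becomes a system.)

```python
# A toy illustration of mechanism versus system: the mechanism heats blindly,
# while the governed system vents steam whenever pressure drifts above a
# set point -- i.e. it responds to environmental feedback.

def run_boiler(steps, governor=False):
    """Simulate boiler pressure over a number of heating steps."""
    pressure = 0.0
    for _ in range(steps):
        pressure += 1.0                    # the engine heats the boiler
        if governor and pressure > 10.0:
            pressure -= pressure - 10.0    # feedback: vent the excess
    return pressure

print(run_boiler(100))                 # mechanism: pressure climbs to 100.0
print(run_boiler(100, governor=True))  # system: pressure settles at 10.0
```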

Enthusiastic as 1957-vintage Lem was, there is something deliciously levelling about how he introduces the cybernetic idea. We can bloviate all we like about using data and algorithms to create a better society; what drives Philonous and Hylas’s interest in these eight dialogues (modelled on Berkeley’s Three Dialogues of 1713) is Hylas’s desperate desire to elude Death. This new-fangled science of systems reimagines the world as information, and the thing about information is that it can be transmitted, stored and (best of all) copied. Why then can’t it transmit, store and copy poor Death-haunted Hylas?

Well, of course, that’s certainly do-able, Philonous agrees — though Hylas might find cybernetic immortality “grotesque, awkward, and disagreeable”. Sure enough, Hylas baulks at Philonous’s culminating vision of humanity immortalised in serried ranks of humming metal cabinets.

This image certainly was prescient. Cybernetics was supposed to be a philosophy, one that would profoundly change our understanding of the animate and inanimate world. The philosophy failed to catch on, but its insights created something utterly unexpected: the computer.

Dialogues is important now because it describes (or described, rather, more than half a century ago — you can almost hear Lem’s slow hand-clapping from the Beyond) all the ways we do not comprehend the world we have made.

Cybernetics teaches us that systems are animate. It doesn’t matter what a system is made from. Workers in an office, ones and zeroes clouding a chip, proteins folding and refolding in a living cell, strings and pulleys in a playground: all are good building materials for systems, and once a system is up and running, it is no longer reducible to its parts. It’s a distinct, unified whole, shaped by its past history, actively coexisting with its environment, and exhibiting behaviour that cannot be precisely predicted from its structure. “If you insist on calling this new system a mechanism,” Lem remarks, drily, “then you must apply that term to living beings as well.”

We’ve yet to grasp this nettle: that between the living and non-living worlds sits a world of systems, unalive yet animate. No wonder, lacking this insight, we spend half our lives sneering at the mechanisms we do understand (“Alexa, stop calling my Mum!”) and the other half on our knees, worshipping the mechanisms we don’t. (“It says here on Facebook…”) The very words we use — “artificial intelligence” indeed! — reveal the paucity of our understanding.

Lem understood, as no-one then or since has understood, how undeserving of worship are the systems (be they military, industrial or social) that are already strong enough to determine our fate. A couple of years ago, around the time Hong Kong protesters were destroying facial recognition towers, a London pedestrian was fined £90 for hiding his face from an experimental Met camera. The consumer credit reporting company Experian uses machine learning to decide the financial trustworthiness of over a billion people. China’s Social Credit System (actually the least digitised of China’s surveillance systems) operates under multiple, often contradictory legal codes.

The point about Lem is not that he was terrifyingly smart (though he was that); it’s that he had skin in the game. He was largely self-taught, because he had to quit university after writing satirical pieces about Soviet poster-boy Trofim Lysenko (who denied the existence of genes). Before that, he was dodging Nazis in Lviv (and mending their staff cars so that they would break down). In his essay “Applied Cybernetics: An Example from Sociology”, Lem uses the new-fangled science of systems to anatomise the Soviet thinking of his day, and from there, to explain how totalitarianism is conceived, spread and performed. Worth the price of the book in itself, this little essay is a tour de force of human sympathy and forensic fury, shorter than Solzhenitsyn, and much, much funnier than Hannah Arendt.

Peter Butko’s translations of the Dialogues, and the revisionist essays Lem added to the 1971 second edition, are as witty and playful as Lem’s allusive Polish prose demands. His endnotes are practically a book in themselves (and an entertaining one, too).

Translated so well, Lem needs no explanation, no contextualisation, no excuse-making. Lem’s expertise lay in technology, but his loyalty lay with people, in all their maddening tolerance for bad systems. “There is nothing easier than to create a state in which everyone claims to be completely satisfied,” he wrote; “being stretched on the bed, people would still insist — with sincerity — that their life is perfectly fine, and if there was any discomfort, the fault lay in their own bodies or in their nearest neighbor.”

 

If this is Wednesday then this must be Thai red curry with prawns

Reading Dan Saladino’s Eating to Extinction for the Telegraph, 26 September 2021

Within five minutes of my desk: an Italian delicatessen, a Vietnamese pho house, a pizzeria, two Chinese, a Thai, and an Indian “with a contemporary twist” (don’t knock it till you’ve tried it). Can such bounty be extended over the Earth?

Yes, it can. It’s already happening. And in what amounts to a distillation of a life’s work writing about food, and sporting a few predictable limitations (he’s a journalist; he puts stories in logical order, imagining this makes an argument), Dan Saladino’s Eating to Extinction explains just what price we’ll pay for this extraordinary achievement, which promises not only to end world hunger by 2030 (a much-touted UN goal), but to make California rolls available everywhere from Kamchatka to Karachi.

The problem with my varied diet (if this is Wednesday then this must be Thai red curry with prawns) is that it’s also your varied diet, and your neighbour’s; it’s rapidly becoming the same varied diet across the whole world. You think your experience of world cuisine reflects global diversity? Humanity used to sustain itself (admittedly, not too well) on 6,000 species of plant. Now, for over three quarters of our calories, we gorge on just nine: rice, wheat and maize, potato, barley, palm oil and soy, sugar from beets and sugar from cane. The same narrowing can be found in our consumption of animals and seafood. What looks to us like the world on a plate is in fact the sum total of what’s available world-wide, now that we’ve learned to grow ever greater quantities of ever fewer foods.

Saladino is in the anecdote business; he travels the Earth to meet his pantheon of food heroes, each of whom is seen saving a rare food for our table – a red pea, a goaty cheese, a flat oyster. So far, so very Sunday supplement. Nor is there anything to snipe at in the adventures of, say, Woldemar Mammel who, searching in the attics of old farmhouses and in barns, rescued the apparently extinct Swabian “alb” lentil; nor in former chef Karlos Baca’s dedication to rehabilitating an almost wholly forgotten native American cuisine.
That said, it takes Saladino 450 pages (which is surely a good 100 pages too many) to explain why the Mammels and Bacas of this world are needed so desperately to save a food system that, far from breaking down, is feeding more and more food to more and more people.

The thing is, this system rests on two foundations: nitrogen fertiliser, and monocropping. The technology by which we fix nitrogen from the air by an industrial process is sustainable enough, or can be made so. Monocropping, on the other hand, was a dangerous strategy from the start.

In the 1910s and 1920s the Soviet agronomist Nikolai Vavilov championed the worldwide uptake of productive strains, with every plant a clone of its neighbour. How else, but by monocropping, do you feed the world? By the 1930s though, he was assembling the world’s first seed banks in a desperate effort to save the genetic diversity of our crops — species that monocropping was otherwise driving to extinction.

Preserving heritage strains matters. They were bred over thousands of years to resist all manner of local environmental pressures, from drought to deluge to disease. Letting them die out is the genetic equivalent of burning the library at Alexandria.

But seed banks can’t hold everything (there is, as Saladino remarks, no Svalbard seed vault for chickens) and are anyway a desperate measure. Saladino’s tale of how, come the Allied invasion, the holdings of Iraq’s national seed bank at Abu Ghraib were bundled off to Tel Hadya in Syria, only then to be frantically transferred to Lebanon, itself an increasingly unstable state, sounds a lot more Blade Runner 2049 than Agronomy 101.

Better to create a food system that, while not necessarily promoting rare foods (fancy some Faroese air-fermented sheep meat? — thought not) will at least not drive such foods to extinction.

The argument is a little bit woolly here, as what the Faroe islanders get up to with their sheep is unlikely to have global consequences for the world’s food supply. Letting a crucial drought-resistant strain of wheat go extinct in a forgotten corner of Afghanistan, on the other hand, could have unimaginably dire consequences for us in the future.
Saladino’s grail is a food system with enough diversity in it to adapt to environmental change and withstand the onslaught of disease.

Is such a future attainable? Only to a point. Some wild foods are done for already because the high prices they command incentivise their destruction. If you want some of Baca’s prized and pungent bear root, native to a corner of Colorado, you’d better buy it now (but please, please don’t).

Rare cultivated foods stand a better chance. The British Middle White pig is rarer than the Himalayan snow leopard, says Saladino, but the stocks are sustainable enough that it is now being bred for the table.

Attempting to encompass the Sixth Extinction on the one hand, and the antics of slow-foodies like Mammel and Baca on the other is a recipe for cognitive dissonance. In the end, though, Saladino succeeds in mapping the enormity of what human appetite has done to the planet.

Saladino says we need to preserve rare and forgotten foods, partly because they are part of our cultural heritage, but also, and more hard-headedly, so that we can study and understand them, crossing them with existing lines to shore up and enrich our dangerously over-simplified food system. He’s nostalgic for our lost food past (and who doesn’t miss apples that taste of apples?) but he doesn’t expect us to delete Deliveroo and spend our time grubbing around for roots and berries.

Unless of course it’s all too late. It would not take many wheat blights or avian flu outbreaks before slow food is all that’s left to eat.

 

The tools at our disposal

Reading Index, A History of the, by Dennis Duncan, for New Scientist, 15 September 2021

Every once in a while a book comes along to remind us that the internet isn’t new. Authors like Siegfried Zielinski and Jussi Parikka write handsomely about their adventures in “media archaeology”, revealing all kinds of arcane delights: the eighteenth-century electrical tele-writing machine of Joseph Mazzolari; Melvil Dewey’s Decimal System of book classification of 1873.

It’s a charming business, to discover the past in this way, but it does have its risks. It’s all too easy to fall into complacency, congratulating the thinkers of past ages for having caught a whiff, a trace, a spark, of our oh-so-shiny present perfection. Paul Otlet builds a media-agnostic City of Knowledge in Brussels in 1919? Lewis Fry Richardson conceives a mathematical Weather Forecasting Factory in 1922? Well, I never!

So it’s always welcome when an academic writer — in this case the London-based English lecturer Dennis Duncan — takes the time and trouble to tell this story straight, beginning at the beginning, ending at the end. Index, A History of the is his story of textual search, told through charming portrayals of some of the most sophisticated minds of their era, from monks and scholars shivering among the cloisters of 13th-century Europe to server-farm administrators sweltering behind the glass walls of Silicon Valley.

It’s about the unspoken and always collegiate rivalry between two kinds of search: the subject index (a humanistic exercise, largely un-automatable, requiring close reading, independent knowledge, imagination, and even wit) and the concordance (an eminently automatable listing of words in a text and their locations).

Hugh of St Cher is the father of the concordance: his list of every word in the bible and its location, begun in 1230, was a miracle of miniaturisation, smaller than a modern paperback. It and its successors were useful, too, for clerics who knew their bibles almost by heart.

But the subject index is a superior guide when the content is unfamiliar, and it’s Robert Grosseteste (born in Suffolk around 1175) who we should thank for turning the medieval distinctio (an associative list of concepts, handy for sermon-builders), into something like a modern back-of-book index.

Reaching the present day, we find that with the arrival of digital search, the concordance is once again ascendant (the search function, Ctrl-F, whatever you want to call it, is an automated concordance), while the subject index, and its poorly recompensed makers, are struggling to keep up in an age of reflowable screen text. (Sewing embedded active indexes through a digital text is an excellent idea which, exasperatingly, has yet to catch on.)
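(A concordance is so mechanical that a few lines of code suffice to build one; the sketch below is my own, not anything from Duncan’s book. A subject index, requiring judgement about what a passage is *about*, resists any such shortcut.)

```python
# A minimal concordance: every word in a text, mapped to the positions where
# it occurs -- the eminently automatable kind of search, as against the
# humanistic labour of a subject index.

from collections import defaultdict

def concordance(text):
    """Map each word (lowercased, punctuation stripped) to its word-positions."""
    locations = defaultdict(list)
    for position, word in enumerate(text.lower().split()):
        locations[word.strip(".,;:!?")].append(position)
    return dict(locations)

text = "In the beginning was the word and the word was with God"
print(concordance(text)["word"])  # [5, 8]
print(concordance(text)["the"])   # [1, 4, 7]
```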

Running under this story is a deeper debate, between people who want to access their information quickly, and people (especially authors) who want people to read books from beginning to end.

This argument about how to read has been raging literally for millennia, and with good reason. There is clear sense in Socrates’ argument against reading itself, as recorded in Plato’s Phaedrus (370 BCE): “You have invented an elixir not of memory, but of reminding,” his mythical King Thamus complains. Plato knew a thing or two about the psychology of reading, too: people who just look up what they need “are for the most part ignorant,” says Thamus, “and hard to get along with, since they are not wise, but only appear wise.”

Anyone who spends too many hours a day on social media will recognise that portrait — if they have not already come to resemble it.

Duncan’s arbitration of this argument is a wry one. Scholarship, rather than being timeless and immutable, “is shifting and contingent,” he says, and the questions we ask of our texts “have a lot to do with the tools at our disposal.”

One courageous act

Watching A New World Order for New Scientist, 8 September 2021

“For to him that is joined to all the living there is hope,” runs the verse from Ecclesiastes, “for a living dog is better than a dead lion.”

Stefan Ebel plays Thomasz, the film’s “living dog”, a deserter who, more frightened than callous, has learned to look out solely for himself.

In the near future, military robots have turned against their makers. The war seems almost over. Perhaps Thomasz has wriggled and dodged his way to the least settled part of the planet (Daniel Raboldt’s debut feature is handsomely shot in Arctic Finland by co-writer Thorsten Franzen). Equally likely, this is what the whole planet looks like now: trees sweeping in to fill the spaces left by an exterminated humanity.

You might expect the script to make this point clear, but there is no script; indeed, there is no dialogue at all. The machines (wasp-like drones, elephantine tripods, and one magnificent airborne battleship that would not look out of place in a Marvel movie) target people by listening out for their voices; consequently, not a word can be exchanged between Thomasz and his captor Lilja, played by Siri Nase.

Lilja takes Thomasz prisoner because she needs his brute strength. A day’s walk away from the questionable safety of her log cabin home, there is a burned-out military convoy. Amidst the wreckage and bodies, there is a heavy case — and in the case, there is a tactical nuke. Lilja needs Thomasz’s help in dragging it to where she can detonate it, perhaps bringing down the machines. While Thomasz acts out of fear, Lilja is acting out of despair. She has nothing more to live for. While Thomasz wants to live at any cost, Lilja just wants to die. Both are reduced to using each other. Both will have to learn to trust again.

In 2018, John Krasinski’s A Quiet Place arrived in cinemas — a film in which aliens chase down every sound and slaughter its maker. This cannot have been a happy day for the devoted and mostly unpaid German enthusiasts working on A New World Order. But silent movies are no novelty, and theirs has clearly ploughed its own furrow. The film’s sound design, by Sebastian Tarcan, is especially striking, balancing levels so that even a car’s gear change comes across as an imminent alien threat. (Wonderfully, there’s an acknowledging nod to the BBC’s Tripods series buried in the war machines’ emergency signal.)

Writing good silent film is something of a lost art. It’s much easier for writers to explain their story through dialogue, than to propel it through action. Maybe this is why silent film, done well, is such a powerful experience. There is a scene in this movie where Thomasz realises, not only that he has to do the courageous thing, but that he is at last capable of doing it. Ebel, on his own on a scree-strewn Finnish hillside, plays the moment to perfection.

Somewhere on this independent film’s long and interrupted road to distribution (it began life on Kickstarter in 2016) someone decided “A Living Dog” was too obscure a film title for these godless times — a pity, I think, and not just because “A New World Order”, the title picked for UK distribution, manages to be at once pompous and meaningless.

Ebel’s pitch-perfect performance drips guilt and bad conscience. In order to stay alive, he has learned to crawl about the earth. But Lilja’s example, and his own conscience, will turn dog to lion at last, and in a genre that never tires of presenting us with hyper-capable heroes, it’s refreshing, on this occasion, to follow the forging of one courageous act.

“This stretch-induced feeling of awe activates our brain’s spiritual zones”

Reading Angus Fletcher’s Wonderworks: Literary invention and the science of stories for New Scientist, 1 September 2021

Can science explain art?

Certainly: in 1999 the British neurobiologist Semir Zeki published Inner Vision, an illuminating account of how, through trial and error and intuition, different schools of art have succeeded in mapping the neurological architectures of human vision. (Put crudely, Rembrandt tickles one corner of the brain, Piet Mondrian another.)

Twelve years later, Oliver Sacks contributed to an already crowded music psychology shelf with Musicophilia, a collection of true tales in which neurological injuries and diseases are successfully treated with music.

Angus Fletcher believes the time has come for drama, fiction and literature generally to succumb to neurological explanation. Over the past decade, neuroscientists have been using pulse monitors, eye-trackers, brain scanners “and other gadgets” to look inside our heads as we consume novels, poems, films, and comic books. They must have come up with some insights by now.

Fletcher’s hypothesis is that story is a technology, which he defines as “any human-made thing that helps to solve a problem”.

This technology has evolved, over at least the last 4000 years, to help us negotiate the human condition, by which Fletcher means our awareness of our own mortality, and the creeping sense of futility it engenders. Story is “an invention for overcoming the doubt and the pain of just being us”.

Wonderworks is a scientific history of literature; each of its 25 chapters identifies a narrative “tool” which triggers a different, traceable, evidenced neurological outcome. Each tool comes with a goofy label: here you will encounter Butterfly Immersers and Stress Transformers, Humanity Connectors and Gratitude Multipliers.

Don’t sneer: these tools have been proven “to alleviate depression, reduce anxiety, sharpen intelligence, increase mental energy, kindle creativity, inspire confidence, and enrich our days with myriad other psychological benefits.”

Now, you may well object that, just as area V1 of the visual cortex did not evolve so we could appreciate the paintings of Piet Mondrian, so our capacity for horror and pity didn’t arise just so we could appreciate Shakespeare. So if story is merely “holding a mirror up to nature”, then Fletcher’s long, engrossing book wouldn’t really be saying anything.

As any writer will tell you, of course, a story isn’t merely a mirror. The problem comes when you try and make this perfectly legitimate point using neuroscience.

Too often for comfort, and as the demands of concision exceed all human bounds, the reader will encounter passages like: “This stretch-induced feeling of awe activates our brain’s spiritual zones, enriching our consciousness with the sensation of meanings beyond.”

Hitting sentences like this, I normally shut the book, with some force. I stayed my hand on this occasion because, by the time this horror came to light, two things were apparent. First, Fletcher — a neuroscientist turned story analyst — actually does know his neurobiology. Second, he really does know his literature, making Wonderworks a profound and useful guide to reading for pleasure.

Wonderworks fails as popular science because of the extreme parsimony of Fletcher’s explanations; fixing this problem would, however, have involved composing a multi-part work, and lost him his general audience.

The first person through the door is the one who invariably gets shot. Wonderworks is in many respects a pug-ugly book. But it’s also the first of its kind: an intelligent, engaged, erudite attempt to tackle, neurologically, not just some abstract and simplified “story”, but some of the world’s greatest literature, from the Iliad to The Dream of the Red Chamber, from Disney’s Up to the novels of Elena Ferrante.

It is easy to get annoyed with this book. But those who stay calm will reap a rich harvest.

A cherry is a cherry is a cherry

Life is Simple: How Occam’s Razor Sets Science Free and Shapes the Universe
by Johnjoe McFadden, reviewed for the Spectator, 28 August 2021

Astonishing, where an idea can lead you. You start with something that, 800 years hence, will sound like it’s being taught at kindergarten: Fathers are fathers, not because they are filled with some “essence of fatherhood”, but because they have children.

Fast forward a few years, and the Pope is trying to have you killed.

Not only have you run roughshod over his beloved eucharist (justified, till then, by some very dodgy Aristotelian logic-chopping); you’re also saying there’s no “essence of kinghood”, neither. If kings are only kings because they have subjects, then, said William of Occam, “power should not be entrusted to anyone without the consent of all”. Heady stuff for 1334.

How this progression of thought birthed the very idea of modern science is the subject of what may be the most sheerly enjoyable history of science of recent years.

William was born around 1288 in the little town of Ockham in Surrey. He was probably an orphan; at any rate he was given to the Franciscan order around the age of eleven. He shone at Greyfriars in London, and around 1310 was dispatched to Oxford’s newfangled university.

All manner of intellectual, theological and political shenanigans followed, mostly to do with William’s efforts to demolish almost the entire edifice of medieval philosophy.

It needed demolishing, and that’s because it still held to Aristotle’s ideas about what an object is. Aristotle wondered how single objects and multiples can co-exist. His solution: categorise everything. A cherry is a cherry is a cherry, and all cherries have cherryness in common. Cherryness is a “universal”; the properties that might distinguish one cherry from another are “accidental”.

The trouble with Aristotle’s universals, though, is that they assume a one-to-one correspondence between word and thing, and posit a universe made up of a terrifying number of unique things — at least one for each noun or verb in the language.

And the problem with that is that it’s an engine for making mistakes.

Medieval philosophy relied largely on syllogistic reasoning, juggling things into logical-looking relations. “Socrates is a man, all men are mortal, so Socrates is mortal.”

So he is, but — and this is crucial — this conclusion is arrived at more by luck than good judgement. The statement isn’t “true” in any sense; it’s merely internally consistent.

Imagine we make a mistake. Imagine we spring from a society where beards are pretty much de rigueur (classical Athens, say, or Farringdon Road). Imagine we said: “Socrates is a man, all men have beards, therefore Socrates has a beard.”

Though one of its premises is wrong, the statement barrels ahead regardless; it’s internally consistent, and so, if you’re not paying attention, it creates the appearance of truth.

But there’s worse: the argument that gives Socrates a beard might actually be true. Some men do have beards. Socrates may be one of them. And if he is, that beard seems — again, if you’re not paying attention — to confirm a false assertion.

William of Occam understood that our relationship with the world is a lot looser, cloudier, and more indeterminate than syllogistic logic allows. That’s why, when a tavern owner hangs a barrel hoop outside his house, passing travellers know they can stop there for a drink. The moment words are decoupled from things, then they act as signs, negotiating flexibly with a world of blooming, buzzing confusion.

Once we take this idea to heart, then very quickly — and as a matter of taste more than anything — we discover how much more powerful straightforward explanations are than complicated ones. Occam came up with a number of versions of what even then was not an entirely new idea: “It is futile to do with more what can be done with less,” he once remarked. Subsequent formulations do little but gild this lily.

His idea proved so powerful, three centuries later the French theologian Libert Froidmont coined the term “Occam’s razor”, to describe how we arrive at good explanations by shaving away excess complexity. As McFadden shows, that razor’s still doing useful work.

Life is Simple is primarily a history of science, tracing William’s dangerous idea through astronomy, cosmology, physics and biology, from Copernicus to Brahe, Kepler to Newton, Darwin to Mendel, Einstein to Noether to Weyl. But McFadden never loses sight of William’s staggering, in some ways deplorable influence over the human psyche as a whole. For if words are independent of things, how do we know what’s true?

Thanks to William of Occam, we don’t. The universe, after Occam, is unknowable. Yes, we can come up with explanations of things, and test them against observation and experience; but from here on in, our only test of truth will be utility. Ptolemy’s 2nd-century Almagest, a truly florid description of the motions of the stars and planetary paths, is not and never will be *wrong*; the worst we can say is that it’s overcomplicated.

In the Coen brothers’ movie The Big Lebowski, an exasperated Dude turns on his friend: “You’re not *wrong*, Walter,” he cries, “you’re just an asshole.” William of Occam is our universal Walter, and the first prophet of our disenchantment. He’s the friend we wish we’d never listened to, when he told us Father Christmas was not real.

The Art of Conjecturing

Reading Katy Börner’s Atlas of Forecasts: Modeling and mapping desirable futures for New Scientist, 18 August 2021

My leafy, fairly affluent corner of south London has a traffic congestion problem, and to solve it, there’s a plan to close certain roads. You can imagine the furore: the trunk of every kerbside tree sports a protest sign. How can shutting off roads improve traffic flows?

The German mathematician Dietrich Braess answered this one back in 1968, with a graph that kept track of travel times and densities for each road link, and distinguished between flows that are optimal for all cars, and flows optimised for each individual car.

On a Paradox of Traffic Planning is a fine example of how a mathematical model predicts and resolves a real-world problem.
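Braess’s point can be seen in a toy calculation. The sketch below uses the textbook version of his network (my numbers, not the review’s or Braess’s own): 4,000 drivers, two routes each made of one congestion-sensitive link and one fixed 45-minute link, and then a free shortcut that, once opened, makes every selfish driver’s journey *longer*.

```python
# Illustrative sketch of Braess's paradox, using the standard textbook
# numbers (an assumption for illustration, not figures from the review).

DRIVERS = 4000

def congested(cars):
    """Travel time (minutes) on a congestion-sensitive link."""
    return cars / 100

FIXED = 45  # travel time (minutes) on a congestion-free link

# Without the shortcut: two symmetric routes, so selfish drivers
# split evenly, 2000 per route.
time_without = congested(DRIVERS / 2) + FIXED  # 20 + 45 = 65 minutes

# Open a zero-cost shortcut joining the two congestion-sensitive links.
# Each driver's individual best reply is now congested -> shortcut ->
# congested, so all 4000 cars pile onto both congested links.
time_with = congested(DRIVERS) + 0 + congested(DRIVERS)  # 40 + 40 = 80

print(time_without)  # 65.0
print(time_with)     # 80.0
```

Closing the shortcut reverses the effect, which is why shutting roads can genuinely speed traffic up: the flow that is optimal for each individual car is not the flow that is optimal for all cars.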

This and over 1,300 other models, maps and forecasts feature in the references to Katy Börner’s latest atlas, which is the third to be derived from Indiana University’s traveling exhibit Places & Spaces: Mapping Science.

Atlas of Science: Visualizing What We Know (2010) revealed the power of maps in science; Atlas of Knowledge: Anyone Can Map (2015) focused on visualisation. In her third and final foray, Börner is out to show how models, maps and forecasts inform decision-making in education, science, technology, and policymaking. It’s a well-structured, heavyweight argument, supported by descriptions of over 300 model applications.

Some entries, like Bernard H. Porter’s Map of Physics of 1939, earn their place purely for their beauty and the insights they offer. Mostly, though, Börner chooses models that were applied in practice and made a positive difference.

Her historical range is impressive. We begin at equations (did you know Newton’s law of universal gravitation has been applied to human migration patterns and international trade?) and move through the centuries, tipping a wink to Jacob Bernoulli’s “The Art of Conjecturing” of 1713 (which introduced probability theory) and James Clerk Maxwell’s 1868 paper “On Governors” (an early gesture at cybernetics) until we arrive at our current era of massive computation and ever-more complex model building.

It’s here that interesting questions start to surface. To forecast the behaviour of complex systems, especially those which contain a human component, many current researchers reach for something called “agent-based modeling” (ABM) in which discrete autonomous agents interact with each other and with their common (digitally modelled) environment.

Heady stuff, no doubt. But, says Börner, “ABMs in general have very few analytical tools by which they can be studied, and often no backward sensitivity analysis can be performed because of the large number of parameters and dynamical rules involved.”

In other words, an ABM model offers the researcher an exquisitely detailed forecast, but no clear way of knowing why the model has drawn the conclusions it has — a risky state of affairs, given that all its data is ultimately provided by eccentric, foible-ridden human beings.
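The bare mechanics of an ABM fit in a few lines. The sketch below is my own minimal illustration (Börner’s examples are far richer): agents carrying a binary opinion each copy a randomly chosen other agent at every step, a “voter model” dynamic that drifts the population toward consensus — yet nothing in the code explains *why* any particular run ends up where it does, which is exactly the opacity Börner flags.

```python
# A minimal agent-based model (an illustrative sketch, not a model
# from the Atlas): discrete agents interacting in a shared population.
import random

random.seed(42)  # fixed seed so the run is repeatable

class Agent:
    def __init__(self):
        self.opinion = random.choice([0, 1])

def step(agents):
    # Each agent interacts with one randomly chosen other agent
    # and adopts its opinion.
    for agent in agents:
        other = random.choice(agents)
        agent.opinion = other.opinion

agents = [Agent() for _ in range(100)]
for _ in range(200):
    step(agents)

# The share of agents holding opinion 1 after 200 rounds.
share = sum(a.opinion for a in agents) / len(agents)
print(share)
```

Even in this toy, the final consensus depends on the seed and the order of interactions; scale the rule set up to hundreds of parameters and the “backward sensitivity analysis” Börner mentions becomes intractable.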

Börner’s sumptuous, detailed book tackles issues of error and bias head-on, but she left me tugging at a still bigger problem, represented by those irate protest signs smothering my neighbourhood.

If, over 50 years since the maths was published, reasonably wealthy, mostly well-educated people in comfortable surroundings have remained ignorant of how traffic flows work, what are the chances that the rest of us, industrious and preoccupied as we are, will ever really understand, or trust, all the many other models which increasingly dictate our civic life?

Börner argues that modelling data can counteract misinformation, tribalism, authoritarianism, demonization, and magical thinking.

I can’t for the life of me see how. Albert Einstein said, “Everything should be made as simple as possible, but no simpler.” What happens when a model reaches such complexity that only an expert can really understand it, or when even the expert can’t be entirely sure why the forecast is saying what it’s saying?

We have enough difficulty understanding climate forecasts, let alone explaining them. To apply these technologies to the civic realm raises a host of problems that are nothing to do with the technology, and everything to do with whether anyone will be listening.

The old heave-ho

The Story of Work: A New History of Humankind by Jan Lucassen, reviewed for the Telegraph 14 August 2021

“How,” asks Dutch social historian Jan Lucassen, “could people accept that the work of one person was rewarded less than that of another, that one might even be able to force the other to do certain work?”

The Story of Work is just that: a history of work (paid or otherwise, ritual or for a wage, in the home or out of it) from peasant farming in the first agrarian societies to gig-work in the post-Covid ruins of the high street, and spanning the historical experiences of working people on all five inhabited continents. The writing is, on the whole, much better than the sentence you just read, but no less exhausting. At worst, it put me in mind of the work of English social historian David Kynaston: super-precise prose stitched together to create an unreadably compacted narrative.

For all its abstractions, contractions and signposting, however, The Story of Work is full of colour, surprise and human warmth. What other social history do you know that writes off the Industrial Revolution as a net loss to music? “Just think of the noise from rattling machines that made it impossible to talk,” Lucassen writes, “in contrast to small workplaces or among larger troupes of workers who mollified work in the open air by singing shanties and other work songs.”

For 98 per cent of our species’ history we lived lives of reciprocal altruism in hunting-and-gathering clan groups. With the advent of farming and the formation of the first towns came surpluses and, for the first time, the feasibility of distributing resources unequally.

At first, conspicuous generosity ameliorated the unfairnesses. As the sixteenth-century French judge Étienne de la Boétie wrote: “theatres, games, plays, spectacles, marvellous beasts, medals, tableaux, and other such drugs were for the people of antiquity the allurements of serfdom, the price of their freedom, the tools of tyranny.” (The Story of Work is full of riches of this sort: strip off the narrative, and there’s a cracking miscellany still to enjoy.)

Lucassen diverges from the popular narrative (in which the invention of agriculture is the fount of all our ills) on several points. First, agricultural societies do not inevitably become marketplaces. Bantu-speaking agriculturalists spread across central, eastern and southern Africa between 3500 BCE and 500 CE, while maintaining perfect equality. “Agriculture and egalitarianism are compatible,” says Lucassen.

It’s not the crops, but the livestock, that are to blame for our expulsion from hunter-gatherer Eden. If notions of private property had to arise anywhere, they surely arose, Lucassen argues, among those innocent-looking shepherds and shepherdesses, whose waterholes may have been held in common but whose livestock most certainly were not. Animals were owned by individuals or households, whose success depended on them knowing every single individual in their herd.

Having dispatched the idea that agriculture made markets, Lucassen then demolishes the idea that markets made inequality. Inequality came first. It does not take much specialism to arise within a group before some acquire more resources than others. Managing this inequality doesn’t need anything so complex as a market. All it needs is an agreement. Lucassen turns to India, and the social ideologies that gave rise, from about 600 BC, to the Upanishads and the later commentaries on the Vedas: the evolving caste system, he says, is a textbook example of how human suffering can be explained to an entire culture’s satisfaction “without victims or perpetrators being able to or needing to change anything about the situation”.

Markets, by this light, become a way of subverting the iniquitous rhetorics cooked up by rulers and their priests. Why, then, have markets not ushered in a post-political Utopia? The problem is not to do with power. It’s to do with knowledge. Jobs used to be *hard*. They used to be intellectually demanding. Never mind the seven-year apprenticeships of Medieval Europe, what about the jobs a few are still alive to remember? Everything, from chipping slate out of a Welsh quarry to unloading a cargo boat while maintaining its trim, took what seem now to be unfeasible amounts of concentration, experience and skill.

Now, though — and even as they are getting fed rather more, and rather more fairly, than at any other time in world history — the global proletariat are being starved, by automation, of the meaning of their labour. The bloodlessness of this future is not a subject Lucassen spends a great many words on, but it informs his central and abiding worry, which is that slavery — a depressing constant in his deep history of labour — remains a constant threat and a strong future possibility. The logics of a slave economy run frighteningly close to the skin in many cultures: witness the wrinkle in the 13th Amendment of the US constitution that legalises the indentured servitude of (largely black) convicts, or the profits generated for the global garment industry by interned Uighurs in China. Automation, and its ugly sister machine surveillance, seem only to encourage such experiments in carceral capitalism.

But if workers of the world are to unite, around what banner should they gather? Lucassen identifies only two forms of social agreement that have ever reconciled us to the unfair distribution of reward. One is redistributive theocracy. “Think of classical Egypt and the pre-Columbian civilizations,” he writes, “but also of an ‘ideal state’ like the Soviet Union.”

The other is the welfare state. But while theocracies have been sustained for centuries or even millennia, the welfare state, thus far, has a shelf life of only a few decades, and is easily threatened.

Exhausted yet enlightened, any reader reaching the end of Lucassen’s marathon will understand that the problem of work runs far deeper than politics, and that the grail of a fair society will only come nearer if we pay attention to real experiences, and resist the lure of utopias.

“It’s wonderful what a kid can do with an Erector Set”

Reading Across the Airless Wilds by Earl Swift for the Times, 7 August 2021

There’s something about the moon that encourages, not just romance, not just fancy, but also a certain silliness. It was there in spades at the conference organised by the American Rocket Society in Manhattan in 1961. Time Magazine delighted in this “astonishing exhibition of the phony and the competent, the trivial and the magnificent.” (“It’s wonderful what a kid can do with an Erector Set”, one visiting engineer remarked.)

But the designs on show there were hardly any more bizarre than those put forward by the great minds of the era. The German rocket pioneer Hermann Oberth wrote an entire book advocating a moon car that could, if necessary, pogo-stick about the satellite. When Howard Seifert, the American Rocket Society’s president, advocated abandoning the car and preserving the pogo stick — well, Seifert’s “platform” might not have made it to the top of NASA’s favoured designs for a moon vehicle, but it was taken seriously.

Earl Swift is not above a bit of fun and wonder, but the main job of Across the Airless Wilds (a forbiddingly po-faced title for such an enjoyable book) is to explain how the oddness of the place — barren, airless, and boasting just one-sixth Earth’s gravity — tended to favour some very odd design solutions. True, NASA’s lunar rover, which actually flew on the last three Apollo missions, looks relatively normal, like a car (or at any rate, a go-kart). But this was really to do with weight constraints, budgets and historical accidents; a future in which the moon is explored by pogo-stick is still not quite out of the running.

For all its many rabbit-holes, this is a clear and compelling story about three men: Sam Romano, boss of General Motors’s lunar program, his visionary off-road specialist Mieczyslaw Gregory Bekker (Greg to his American friends) and Greg’s invaluable engineer Ferenc (Frank) Pavlics. These three were toying with the possibility of moon vehicles a full two years before the US boasted any astronauts, and the problems they confronted were not trivial. Until Bekker came along, tyres, wheels and tracks for different surfaces were developed more or less through informed trial and error. It was Bekker who treated off-roading as an intellectual puzzle as rigorous as the effort to establish the relationship between a ship’s hull and water, or a plane’s wing and the air it rides.

Not that rigour could gain much toe-hold in the early days of lunar design, since no-one could be sure what the consistency of the moon’s surface actually was. It was probably no dustier than an Earthbound desert, but there was always the nagging possibility that a spacecraft and its crew, landing on a convenient lunar plain, might vanish into some ghastly talcum quicksand.

On 3 February 1966 the Soviet probe Luna 9 put paid to that idea, settling, firmly and without incident, onto the Ocean of Storms. Though their plans for a manned mission had been abandoned, the Soviets were no bit player. Four years later it was an eight-wheel Soviet robot, Lunokhod 1, delivered by the Luna 17 spacecraft, that first drove across the moon’s surface. Seven feet long and four feet tall, it upstaged NASA’s rovers nicely, with its months and miles of journey time, 25 soil samples and literally thousands of photographs.

Meanwhile NASA was having to re-imagine its Lunar Roving Vehicle any number of times, as it sought to wring every possible ounce of value from a programme that was being slashed by Congress a good year before Neil Armstrong even set foot on the Moon.

Conceived when it was assumed Apollo would be the first chapter in a long campaign of exploration and settlement, the LRV was being shrunk and squeezed and simplified to fit through an ever-tightening window of opportunity. This is the historical meat of Swift’s book, and he handles the technical, institutional and commercial complexities of the effort with a dramatist’s eye.

Apollo was supposed to pave the way for two-rocket missions. When they vanished from the schedule, the rover’s future hung in doubt. Without a second Saturn to carry cargo, any rover bound for the moon would have to be carried on the same lunar module that carried the crew. No-one knew if this was even possible.

There was, however, one wedge-shaped cavity still free between the descent stage’s legs: an awkward triangle “about the size and shape of a pup tent standing on its end.” So it was that the LRV, which once boasted six wheels and a pressurised cabin, ended up the machine a Brompton folding bike wants to be when it grows up.

Ironically, it was NASA’s dwindling prospects post-Apollo that convinced its managers to origami something into that tiny space, just a shade over seventeen months prior to launch. Why not wring as much value out of Apollo’s last missions as possible?

The result was a triumph, though it maybe didn’t look like one. Its seats were basically deckchairs. It had neither roof, nor body. There was no steering wheel, just a T-bar the astronaut leant on. It weighed no more than one fully kitted-out astronaut, and its electric motors ground out just one horsepower. On the flat, it reached barely ten miles an hour.

But it was superbly designed for the moon, where a turn at six miles an hour had it fishtailing like a speedboat, even as it bore more than twice its weight around an area the size of Manhattan.

In a market already oversaturated with books celebrating the 50th anniversary of Apollo in 2019 (many of them very good indeed) Swift finds his niche. He’s not narrow: there’s plenty of familiar context here, including a powerful sketch of the former Nazi rocket scientist Wernher von Braun. He’s not especially folksy, or wilfully eccentric: the lunar rover was a key element in the Apollo program, and he wants it taken seriously. Swift finds his place by much more ingenious means — by up-ending the Apollo narrative entirely (he would say he was turning it right-side up) so that every earlier American venture into space was preparation for the last three trips to the moon.

He sets out his stall early, drawing a striking contrast between the travails of Apollo 14 astronauts Alan Shepard Jr and Edgar Mitchell — slogging half a mile up the wall of the wrong crater, dragging a cart — with the vehicular hijinks of Apollo 15’s Dave Scott and Jim Irwin, crossing a mile of hummocky, cratered terrain rimmed on two sides by mountains the size of Everest, to a spectacular gorge, then following its edge to the foot of a huge mountain, then driving up its side.

Detailed, thrilling accounts of the two subsequent Rover-equipped Apollo missions, Apollo 16 in the Descartes highlands and Apollo 17 in the Taurus-Littrow Valley, carry the pointed message that the viewing public began to tune out of Apollo just as the science, the tech, and the adventure had gotten started.

Swift conveys the baffling, unreadable lunar landscape very well, but Across the Airless Wilds is above all a human story, and a triumphant one at that, about NASA’s most-loved machine. “Everybody you meet will tell you he worked on the rover,” remarks Eugene Cowart, Boeing’s chief engineer on the project. “You can’t find anybody who didn’t work on this thing.”