“Look at me, top of the leader board!”

Reading Paul Mullen’s Running Amok for the Telegraph, 27 September 2025

Paul Mullen has spent years trying to understand the internal world of the lone mass killer: the sort of person who draws their weapon in a school, on a factory floor, or at a supermarket. In this pursuit, says Mullen – a forensic psychiatrist – we should remember, and admit, that everyone has the odd unpleasant impulse from time to time: it’s part of being human. So, he writes, when discussing the most sickening criminals, we mustn’t “endow perfectly normal mental mechanisms with a pathological, sinister significance”.

For example, many of us feel undervalued. Many of us feel in possession of skills and attributes that, in a better world, would surely bring us recognition. Who among us has not looked in the mirror and met a creature consumed by resentment or depression? Life can be crushing, and as Mullen says, “disappointed, egotistical, isolated and rigid men are ten-a-penny.” Pushed to the edge, they’re much more likely to put an end to themselves than go out in a blaze of vicious “glory”. (The male suicide rate in the UK last year was 17.4 deaths per 100,000 people, vastly larger than the rate of male deaths by homicide.)

Mullen is best known for his research into the link between common-or-garden jealousy and the obsessional, sometimes homicidal, behaviour of stalkers. He takes a similar tack in Running Amok, a devastating compendium of mass killings, arranged by locale and severity. Many lone mass killers, we learn, are persistent whiners – “querulants” is Mullen’s term of art – which led me to wonder what our burgeoning culture of complaint is doing to stoke their fires. From those who self-righteously pursue their grievances to others who seem to live out fantasies of battling persecution, you wonder how thin the cognitive dividing-line can safely grow. And yet: whether or not the world is filling up with narcissistic whiners, most of them don’t turn to slaughter. So what leads a handful to make that turn? Or, to put it another way, what actuates them, over and above the perfectly common means (guns, vehicles, knives) and motive (the desire for “a semblance of power and significance”)?

Mullen, who has met a wide range of criminals across his professional career, and was the first non-military defence expert to gain access to the detention centre at Guantanamo Bay, points the finger at the availability of an incident the would-be killer can emulate: what he calls a “social script”. In his experience, mass killers are invariably fixated on reports of previous massacres, and on their fictional depictions. Rambo is a fine movie, intelligently written, but there’s a reason the DVD keeps turning up on the shelves of such people.

As societies change, so do the scripts they make available to the despondent, the despairing, the rejected and the humiliated. In the 1970s and early 1980s, homicidal losers used to fixate on a belief; now they’re more likely to kill in the name of a group. The 2016 Orlando killer Omar Mateen claimed allegiance to both Isis and Hizbullah: a neat trick, given how violently these groups are opposed to each other. In this shift from ideology to tribe, Mullen detects the influence of the internet, with its pseudo-communities of extremists desperate to represent some persecuted minority.

The other essential characteristic of these scripts is that they are self-perpetuating. Killers inspire killers. It’s why Mullen won’t mention the killers he’s writing about by name, a tactic that gives the reader the initial impression – quickly dispelled – that the author is only marginally acquainted with his subject matter. On the contrary, Mullen anatomises, with skill and a certain amount of garrulousness, what seems a desperately intractable problem, noting in particular the inflammatory influence of a predominantly on-line incel culture, the depredations of the attention economy, and the addictiveness of certain videogames. The violence or otherwise of these games is not at issue: much more important is their ability to offer the pathologically lonely a semblance of social validation: “Look at me, top of the leader board!” Internet tribalisms of all sorts service the lone killer’s need to belong — and not just to belong, but to crawl to the top of some specious hierarchy. “I’ve got the record, haven’t I?” was practically the first question Martin Bryant asked after shooting and killing 35 people and injuring 23 others in the Tasmanian tourist town of Port Arthur in 1996.

So much for sociology. Mullen would sooner engage with the extreme inner worlds of lone mass killers than explain them away with platitudes. Whatever maddened these people in the first place (and let’s face it, some people are just born miserable), by the time mass homicide seems like a solution to their problems, they are, by any common definition of the term, mad, and should be treated as such.

This is where Mullen turns to discuss, of all people, Queen Victoria. Across her long reign, she was the victim of eight assassination attempts. By the time she died, entirely peacefully, the Metropolitan Police had learned that the most effective strategy for avoiding or mitigating attacks on a permanently public target was, as far as possible, to dampen down publicity. Ever since, would-be regicides have been arrested without fanfare, and often ushered into psychiatric treatment. Thus, within the bounds of law, a security issue has been turned into a public-health one.

Mullen would like to see potential lone mass killers spotted and treated in much the same way. He proposes a Threat Assessment and Response Centre (targeting random killings), modelled on the Met’s Fixated Threat Assessment Centre, which handles the security of known targets. Faced with a credible threat, the Centre should be given access to the suspect’s police and medical records and their internet history. Why? Because identification is ninety per cent of the battle. Treatment, by comparison, is startlingly simple: obsessives on the path to atrocity are, in Mullen’s experience, remarkably cooperative and frank with those who’ve managed to stop them.

At the time of writing, there have been just over 300 mass shootings in America this year, and while gun-control laws may have preserved Britain and other Western countries from that particular plague, a spate of vehicle-ramming attacks in Nice, Berlin, London, Barcelona, Stockholm and other European cities has left us, and our security services, in a state of hypervigilance.

So can we do anything? Mullen wants us to overcome our reticence and take seriously the threats made by miserable obsessives. False alarms will be raised, but psychologists aren’t witch-finders, Mullen assures us — in fact much of his time is spent avoiding the false attribution of madness in the people he meets.

I fear public awareness won’t do much good, however. Now that civic society has declared war on nuance and arrests people (well, Graham Linehan, anyway) for jokes, how can any of us be expected to hear the signal over the noise?

Dig another hole

Reading The Secret History of Gold by Dominic Frisby for the Telegraph, 22 August 2025

We used to sift this soft, off-colour metal out of the beds of streams. Then, once the streams ran out of the stuff, we dug it out of the ground. These days, lest the smallest grain elude us, we gather the stuff by leaching gold out of the ore and into a solution of cyanide. Then we precipitate it back into a solid, melt it down and, says American investor Warren Buffett rather wryly, “dig another hole, bury it again and pay people to stand around guarding it. It has no utility. Anyone watching from Mars would be scratching their head.”

Well, not quite. Gold is useful. It’s a gift to artists: soft enough to work without fire (which is why it was the first metal we ever harnessed), and chemically so stable that it never tarnishes. No matter what we make of the stuff, though, sooner or later, as Renaissance goldsmith Benvenuto Cellini and countless anonymous Inca artists would attest, someone else may come along and melt it down. “Gold may last,” says Dominic Frisby – a curious chap, part financial journalist, part stand-up comedian – “but art made from gold rarely does.”

This is Frisby’s real subject in his affable, opinionated new book, The Secret History of Gold: gold is fungible. An ounce of gold is equal in value to every other ounce of gold. Since it doesn’t corrode, rust or tarnish, and since it resists most common acids, we can melt it, recast it and melt it again, and still end up with an ounce of gold. This makes the element about as honest a medium of exchange as the physical world has to offer – a point that besotted the Spanish conquistadors, who melted down enough South American artwork into bullion to destroy their own empire’s balance of payments, and has been lost on few serious politicians since. “We have gold,” said Herbert Hoover in 1933, “because we cannot trust governments.”

Frisby himself is no fan of the State. His 2013 book Life After the State was subtitled “Why We Don’t Need Government”. His 2019 book about tax was called Daylight Robbery. Frisby’s is a sentimental conservatism, weaponised on stage in ditties such as the Brexit victory song “Seventeen Million F—-Offs” (a treasurable joy, whatever your politics), and reasoned out in books that offer up cogent entertainment, even if they don’t always convince.

The Secret History of Gold is another book in that vein. Its tales – whether topical, historical or mythological – are well-turned, comprehensive and occasionally surprising. King Midas’s “touch”, we learn, was a just-so story cooked up to explain the unreal amounts of gold discovered in the bed of the river Pactolus. Alexander the Great created the world’s first standardised currency by adopting consistent weights for gold and silver coins across his territories. A latter-day alchemist called Heinz Kurschildgen was prosecuted for fraud in 1922, after convincing several investors that he could turn base metal into gold; later, he convinced Heinrich Himmler that he could make petrol out of water too.
Intellectual property is now so frictionless that the business of “gathering tales” is something any fool can do by pressing a key. Frisby is perfectly entitled to rehash many of the tales from Timothy Green’s 2007 The Ages of Gold, since he recognises that debt in his end-notes. This isn’t inherently a problem – how else but by reading do books get made? – but the internet has made us all potentially that erudite. What matters, then, with books such as Frisby’s, is less how much you know than how much fun you have with what you’ve assimilated. Thankfully, Frisby entertains here, impressively and convincingly so. It’s just that it seems a bit silly for Frisby and his publishers to call a book such as this a “secret history”, when it’s simply combining accessible materials into a compelling new weave.
Each story in that weave, at least, does inform Frisby’s argument and obsession – that the world (or, failing that, Britain) might return to a gold standard. This is the business of tying a country’s currency to a fixed quantity of gold, so that its paper money can in theory be exchanged for the metal. Pegging currencies to the value of gold certainly makes life simple, or at least, it seems to, if you don’t pay too much attention to the prospecting and the mining, the pillaging, the counterfeiting and the fraud. Frisby wouldn’t dream of skirting such a rich source of entertainment, and his tales of German and Japanese gold-hunting during the Second World War are eye-popping. In the Merkers salt mine, U.S. troops discovered a Nazi stash including over 100 tons of gold bullion, but General Tomoyuki Yamashita’s treasure hoard, meant for Japan’s post-war rebuilding, remains untraced and untraceable.
It’s true that the gold standard stops governments from recklessly printing money and inflating the economy. And such reckless printing, Frisby argues, is exactly what has happened, pretty much everywhere, again and again, once the standard was abandoned. Crippled by the costs of the First World War and the Great Depression, Britain was first to abandon the gold standard, in 1931. But 1971 was when the rot really set in. Saddled with rising inflation, increasing trade deficits and the cost of the Vietnam War, Richard Nixon’s US abandoned the standard and took the rest of the world with it down the path of perdition; government after government has since repeatedly devalued its currency on the world’s markets. Why else would houses cost 70 times more now than when I was born in 1965?
Frisby’s proposed cure is for the world to adopt cryptocurrency. Despite not being a material entity, like gold, a bitcoin is pure money – a bearer asset. When I hand you a bitcoin, its value is, as Frisby explains, immediately transferred from me to you. What’s not to like about digital gold? Well, for starters, manufacturing these magic numbers – mining these bitcoins – requires a lot of IT infrastructure and no small amount of energy, so it puts production in the hands of just a few powerful nations, rather as the gold standard put production in the hands of just a few gold-rich territories.
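For the curious, here is a minimal sketch of the proof-of-work puzzle that makes mining expensive (a toy Python illustration with a tiny difficulty setting, nothing like Bitcoin's real parameters): a coin is "manufactured" by brute-force guessing, and every failed guess costs computation, and therefore electricity.

```python
# Toy proof-of-work "mining" (illustrative only): find a nonce whose
# SHA-256 hash falls below a target. Each failed guess is pure spent
# computation, which is why mining at scale demands serious hardware
# and energy. Real Bitcoin difficulty is astronomically higher.
import hashlib

def mine(block_data: str, difficulty_bits: int = 20) -> int:
    """Return a nonce whose SHA-256 digest has `difficulty_bits` leading zero bits."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1  # every failed guess is wasted work, by design

print(mine("example block"))  # ~a million attempts on average at 20 bits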
Frisby’s arguments for pegging currencies to a digital standard might also carry more weight if he were a little more realistic with himself, and with us, about why we left the gold one. By abandoning gold for a currency backed by empty promises – a fiat currency system – governments no longer have to manage the amount of gold they have. This means they can concentrate on stabilising prices, by controlling interest rates. They might not do a brilliant job of it, but it’s what made the difference between how we experienced the Wall Street Crash of 1929 and the much bigger but infinitely less ruinous crash of 2008.
Until cryptocurrency has caught up, Frisby is inclined to pin all our current economic woes on Nixon’s 1971 decision to abandon the gold standard. As an economic thesis, that’s not even wrong, just hopelessly insufficient. It fails to acknowledge the benefits of free trade that the fiat system has enabled, despite its difficulties, and leaves us wondering just how it is that since 1971, extreme poverty and infant mortality have dropped by more than two-thirds worldwide, while the number of children in primary school has grown from 2.3 million to over 700 million.
But Frisby is an entertainer, and the more he entertains, the more he’s likely to convince. He didn’t really need to lumber his book with the whole “secret history” shtick, and his yarns, ripping though they are, sometimes just get in the way. At its best, this book sets Frisby up as a colourful and sly adversary to contemporary financial and political pieties and sometimes – I would humbly suggest – to common sense. But even at his most eccentric, in his enthusiasm and wit, he’s a worthy adversary. I’m not sure, despite this book’s flaws, that one could really ask for more.

 

This isn’t High Noon

Reading Sheepdogs by Elliot Ackerman for The Telegraph, 18 August 2025

Hey! It looks like you are trying to shoot someone at point-blank range with a small 9mm pistol. Would you like help?

If you are going to kill someone with a 9mm pistol, it is very important that you stare at the ground as you make your approach. Next, raise your head until you are focused on your target’s centre mass. Think heart and organs. Avoid their eyes and — no, don’t draw, this isn’t High Noon — have the gun in your hand in your pocket, and shoot through the fabric of your suit. Now go and rehearse, and remember: practice makes perfect!

Elliot Ackerman knows something about skill acquisition, task analysis and work breakdown structure. He also knows about the mechanics of a SIG Sauer P938, a micro-compact, single-action concealed-carry pistol. In a crisis, though, hardware will play second fiddle to the hours of practice. Sheepdogs is a bright and breezy thriller about prepared paramilitary types who know what they’re doing.

Ackerman’s background is such that even a confection like Sheepdogs begs to be read autobiographically. He served five tours of duty in Iraq and Afghanistan. He received the Silver Star, the Bronze Star for Valor, and the Purple Heart. He was also, for a little while, attached to the Ground Branch of the CIA’s Special Activities Division, and he has a whole lot of fun with that institution here, as “Uncle Tony”, a Division spook obsessed with Hyatt reward points, scrabbles about the globe looking for ways to pay the wages of off-the-books armies everywhere from Iraq to Somalia, Yemen to Ukraine.

Uncle Tony looks to have inspired the mess in which our heroes are here embroiled. Cheese (as in “the big cheese”, the most versatile pilot Afghanistan’s military ever produced, now working in a filling station) and former Marine Raider Skwerl (think “squirrel” — Marines can’t spell for shit — financially and reputationally ruined for whistleblowing on an intelligence FUBAR) are being paid to steal — sorry, repossess — that most reliable of thriller MacGuffins, a private jet.

But the handover in Marseille goes badly wrong, the jet’s owners seem to be stealing it from themselves, and Skwerl and Cheese soon find themselves out of the loop, out of pocket and decidedly out of luck, pursued back and forth across the Atlantic by a remarkably well-connected former Afghan security guard who’s out to avenge, well, something…

Ackerman has a lot of fun with that private plane, a Bombardier Challenger 600 that loses an aileron (a control flap) in a collision with a golf cart, and not long after has its leather and mahogany interior torn out by a famished grizzly bear. The business of hiding and fixing the plane brings in a couple of well-drawn side characters: the survivalist Just Shane, and Ephraim, an excommunicated Amish handyman who whittles a replacement aileron out of wood (not as daft as it sounds). Cheese’s better half Fareeda (four months pregnant) and Skwerl’s much more frightening half Sinead (a professional dominatrix) round out a cast just kooky and diverse enough — and small enough — to tick every box at Apple TV, who’ve paid seven figures to develop Sheepdogs as a series.

Announcements of the novel’s bright televisual future make it slightly tricky to review, since what makes perfect sense for the IP doesn’t necessarily play well on the page. Ackerman is determined not to create any monsters here; he’s much more interested in telling — in the gentlest manner imaginable — broader truths about modern warfare, its commercial imperatives and human toll.

After dozing through tosh like Citadel and The Night Agent, we’ll surely lap up a TV thriller created by someone who knows guns, and better still, understands the men who wield them. That said, I can’t but deplore a literary thriller that leaves all my favourite characters standing, and not just standing, chatting, and not just chatting, understanding each other.

Well, you don’t make an omelette without cracking eggs, I suppose. I can remember when, in 1987, a fine literary writer called James Lee Burke wrote a detective novel about Dave Robicheaux. I adored Burke’s early books, but nearly forty years and over two dozen outings later, I’m hardly going to sit here and say that palling up with Dave was a backward move, now, am I?

Besides, Ackerman’s literary career has been sliding about all over the place, from brilliant memoirs of combat in Afghanistan (and don’t get him started about that catastrophic US withdrawal in August 2021) to best-selling geopolitical thrillers with James Stavridis, a retired US admiral, to clotted oddities like 2023’s Halcyon, a family drama set in an alternate Gore-led America that has cured death. The thriller genre has its limitations, but one of the very best things it can do is give a point of focus to writers who would otherwise go off like a box of firecrackers.

The trouble with Sheepdogs — a thriller that lacks excitement, a comedy without much in the way of humour, and a story about the wages of war that eludes depth — is that it shows its writer still shuffling up to the starting line and sucking on the water bottle. I know I shouldn’t second-guess Ackerman’s intentions. But I hope there will be sequels, and that Sheepdogs becomes a long project for him. Keeping up with the small screen will do him good. Remember: practice makes perfect!

How not to save an elephant

Reading Saabira Chaudhuri’s Consumed for the Telegraph, 16 May 2025 

What happens when you use a material that lasts hundreds of years to make products designed to be used for just a few seconds?

Saabira Chaudhuri is out to tell the story of plastic packaging. The brief sounds narrow enough, but still, Chaudhuri must touch upon a dizzying number of related topics, from the science of polymers to the sociology of marketing.

Consumed is an engaging read, fuelled by countless interviews and field trips. Chaudhuri, a reporter for the Wall Street Journal, brings a critical but never censorious light to bear on the values, ambitions and machinations of (mostly US) businesspeople, regulators, campaigners and occasional oddballs. Medical and environmental concerns are important elements, but hers is primarily a story about people: propelled to unexpected successes, and stymied by unforeseen problems, even as they unwittingly steep the world in chemicals causing untold damage to us all.

Plastic was once seen as the environmental option. With the advent of celluloid snooker balls and combs in the 1860s, who knows how many elephant and tortoise lives were saved? Plastic packaging, too, was lighter and stronger than paper (which is far more polluting and resource-intensive than we generally realise): what was not to like? When McDonald’s rolled out its polystyrene clamshell container across the US in 1975, the company reckoned four billion of the things sitting in landfill was a good thing, since they’d “help aerate the soil”.

The idea was fanciful and self-serving, but not ridiculous, the prevailing assumption then being that landfills worked as giant composters. But landfill waste is not decomposed so much as mummified, so mass and volume count: around half of all landfill is paper, while plastic takes up only a few per cent of the space. This mattered when US landfill seemed in short supply (a crisis that proved fictitious — waste-management companies were claiming a shortage so they could jack up their fees).

The historical (and wrong) assumption that plastics were environmentally inert bolstered a growing post-war enthusiasm for the use-and-discard lifestyle. In November 1963, Lloyd Stouffer, the editor of Modern Plastics magazine, addressed hundreds at a conference in Chicago: ‘The happy day has arrived when nobody any longer considers the plastics package too good to throw away.’

A 2015 YouTube video of a turtle found off the coast of Costa Rica with a plastic straw lodged in its nose highlighted how plastic, so casually disposed of, destroys the living world: entangling animals, blocking their guts, and rupturing their internal organs.

Rather than abandon disposability, however, manufacturers went looking for ways to make plastics recyclable, or at least compostable, and this is where the trouble really began, and commercial imperatives took a truly Machiavellian turn. Recycling plastic is hard to do, and Chaudhuri traces the ineluctable steps in business logic that reduced much of that effort to nothing more than a giant marketing campaign: what she dubs “a get-out-of-jail-free card in a situation otherwise riddled with reputational risk.”

Recycling cannot, in any event, address the latest crisis to hit the industry, and the world. In 2024 the New England Journal of Medicine published an article linking the presence of microplastics to an increased risk of stroke or heart attack, substantiating the suspicion that plastic particles ranging in size from 5,000 micrometres down to just 0.001 micrometres are harmful to health and the environment.

First, they turned up in salt, honey, teabags and beer, but in time, says Chaudhuri, “microplastics were found in human blood, breast milk, placentas, lungs, testes and the brain.”

Then there are the additives that lend a plastic its particular qualities (flexibility, strength, colour and so on); these disrupt endocrine function, contributing to declining fertility in humans — and not just humans — all around the world. The additives transmigrate and degrade in unpredictable ways (not least when being recycled), to the point where no one has any idea which chemicals, or how many, we’re exposing ourselves to. How can we protect ourselves against substances we’ve never even seen, never mind studied?

But it’s the humble plastic sachet — developed in India to serve a market underserved by refuse collectors and low on running water — that provides Chaudhuri with her most striking chapter. The sachets are so cheap, they undercut bulk purchases; so tiny, no recycler can ever make a penny gathering them; and so smeared with product, no recycling process could ever handle them.

Chaudhuri’s general conclusions are solid, but it’s engaging business anecdotes like this one that truly terrify.

Machine men with machine minds

Reading The Ideological Brain by Leor Zmigrod for the Telegraph, 9 March 2025

At the end of his 1940 film The Great Dictator, Charlie Chaplin, dressed in Adolf Hitler’s motley, looked directly at the camera and declared war on the “machine men with machine minds” who were, even then, marching roughshod across his world. Nor have they vanished. In fact, if you believe Leor Zmigrod, you may, without knowing it, be an inflexible, dogmatic robot yourself.

Zmigrod, a young Cambridge neuroscientist, studies “cognitive rigidity”. What does that look like in the real world? Jimmy McGill, hero of the TV series Better Call Saul, can give us a quick handle on one of its manifestations: “The fallacy of sunk costs. It’s what gamblers do. They throw good money after bad thinking they can turn their luck around. It’s like, ‘I’ve already spent this much money, or time, whatever. I got to keep going!’” And as Jimmy adds: “There’s no reward at the end of this game.”

Zmigrod’s ambition has been to revive the project of 18th-century French nobleman Antoine Destutt de Tracy, whose science of “ideologie” sought a rigorous method for discovering when ideas are faulty and unreliable. (This was my first warning signal. If a man in a white coat told me that my ideas were “objectively unreliable”, I would snatch up the biggest stick I could find. But let’s run with the idea.)

A happier lodestar for Zmigrod’s effort is Else Frenkel-Brunswik, an Austrian refugee in America who spent the 1950s testing children to see whether or not she could predict and treat “machine minds”. Frenkel-Brunswik found it easy to distinguish between the prejudiced and unprejudiced, the xenophobic and the liberal child. She found that an upbringing steeped in obedience and conformity not only fostered authoritarianism; it made children cognitively less flexible, so that they had a hard time dealing with ordinary, everyday ambiguities. Even sorting colours left them vexed and unhappy.

Zmigrod argues that the dogmatic mind’s information processing style isn’t restricted to dealing with ideological information. It’s “a more generalised cognitive impairment that manifests when the dogmatic individual evaluates any information, even at the speed of under a second.” Cognitive rigidity thus goes hand-in-hand with ideological rigidity. “This may seem obvious to some,” Zmigrod writes. “A rigid person is a rigid person. But in fact, these patterns are not obvious.”

But they are, and that’s the problem. Nearly 200 pages into The Ideological Brain, they’re even more obvious than they were when Zmigrod first said they weren’t. She reveals, with no little flourish, that in one of her studies, “obedient actions evoked neural activity patterns that were markedly different from [those produced by] free choices.” Well, of course. If the mind can differentiate between free action and obedience, and a mind is a property of a brain at work, then a good enough scanner is bound to be able to spot “differences in neural activity” at such times. Again and again, Zmigrod repackages news about improved brain-scanning techniques as revelations about the mind. It’s the oldest trick in the neuroscientific writer’s book.

Zmigrod began work on The Ideological Brain as Donald Trump became US president for the first time and Britain voted to leave the EU. Her prose is accomplished, and she takes us on an entertaining ramble past everything from demagoguery to dopamine uptake, but the fresh alarm of those days bleeds too consistently through her views. She’s unremittingly hostile towards belief systems of all kinds; when she writes of a 2016 study she conducted that it showed “the extreme Right and the extreme Left were cognitively similar to each other”, I wondered whether proving this truism in the lab really contributed to an understanding of the actual merits, or otherwise, of such ideologies. Zmigrod seems, sometimes, to mistrust belief per se. In her last chapter, we get her idea of the good life: “No pressure, no predestination, no ancestors on your shoulders or rituals to obey, no expectations weighing you down or obstructing your movement.” I had to read this several times before I could believe it. So her alternative to dogmatism is, what, Forrest Gump?

Zmigrod’s arguments assume that people are a unity. They’re not. The most thoughtful and open-minded people I’ve ever argued with were members of MAGA militias. She assumes the workings of mind can be read off a scanner. I’m not at all against hard materialism as an idea, but imagine explaining the workings of a computer chip by describing the icons on a computer screen: that’s how Zmigrod describes thought.

Besides, the evident fact that brains age, and minds with them, is never a factor in Zmigrod’s argument. Variations in cognitive flexibility can be easily explained by the aging process, without any recourse to machines that go “bleep”. Is it so unreasonable to expect a young mind to be more liberal and open to exploration, and an old mind to be more conservative, more dedicated to wringing value from what it already knows? Few of us want to be old before our time; few want to be a 90-year-old adolescent.

“Repeating rules and rituals, rules and rituals, has stultifying effects on our minds,” Zmigrod insists. “With every reiteration and rote performance, the neural pathways underpinning our habits strengthen, whereas alternative mental associations – more original yet less frequently rehearsed – tend to decay.” I see this as a picture of learning; Zmigrod sees a slippery slope ending in extremism. You can look at the world that way if you want, but I can’t see that it’ll get you far.

Where millipedes grow more than six feet long

Reading Riley Black’s When the Earth Was Green for New Scientist, 26 February 2025

Plants are boring. Their behaviours are invisible to the naked eye, they operate on timescales our imaginations cannot entertain (however much we strain them), and they run roughshod over familiar categories of self, other and community.

Wandering among (or is it through?) a 14,000-year-old aspen clone (or should that be “a stand of aspen trees”?), palaeontologist Riley Black wonders, “how many living things have alighted on, chewed up, dwelled within, pushed over, and otherwise had a brush with a tree so enduring it probably understands the nature of time better than I ever will?”

When the Earth Was Green is a paean to plants. It’s a series of vignettes, each separated from its neighbours by gaps of millions, tens of millions, sometimes hundreds of millions of years. It’s an account of how vegetable and animal life co-evolved. It’s not as immediately startling as Black’s last book, 2022’s The Last Days of the Dinosaurs, but it’s a worthy successor: as I wrote of Last Days, “this is palaeontology written with the immediacy of natural history”.

If you winced just now at the twee idea of a tree “understanding time”, you may want to hurry past Black’s last chapter — a virtue-signalling hymn to the queerness of trees. This crabbed reviewer comes across many such passages, and reckons they’re getting increasingly formulaic. Black, seemingly unaware of the irony, pokes gentle fun at an earlier rhetoric that imagined, say, tideline plants “colonising” and “invading” the land. Maybe all writers who attempt to engage with plants suffer this fate: the rhetorical tools they stretch for will date far faster than their science.

Black excels at conveying life’s precarity. Life does not “recover” or “regenerate” after extinction events. It reinvents itself. Early on — 425 million years ago, to be exact — we find life flourishing in strange lands, under skies so short of oxygen that fires can only smoulder and dead plants cannot decompose. When oxygen levels rise, existing insect species grow gigantic in a desperate (and, ultimately, losing) battle to elude its toxic effects. When an asteroid brings the Cretaceous Period to a fiery end, 66 million years ago, we find surviving plant species innovating unexpected relationships with their surviving pollinators. Around 15,000 years ago, the planet grew so verdant that some plant species could afford to abandon photosynthesis entirely, and simply parasitise their neighbours.

Adaptation is a two-edged sword in such a changeable world. It allows you to take full advantage of today’s ecosystems, but how will you cope with tomorrow’s? Remaining unspecialised has allowed the Ginkgo tree to survive the world’s worst catastrophe and persist for millions and millions of years.

Black allows her imagination full rein. Wandering through a dense, warm, humid, 300-million-year-old forest in Ohio, where “millipedes grow more than six feet long and alligator-size amphibians silently watch the shoreline for unwary insects,” the reader may wonder where the science stops and the speculation begins. Black’s extensive endnotes explain the limits of our current knowledge and the logic behind her rare fancies. These passages are integral to the text and include some of her most insightful writing.

Above all, this is a book about how animals and plants shape each other. When animals large enough to knock over trees disappeared, forests grew more dense, with a continuous overstory that gave even large animals a third dimension to explore. Thick forests forced surviving mammals and surviving dinosaurs into novel shapes and, even more important, novel behaviours. Both classes learned to spend more time with their young. And, if we’re prepared to cherry-pick our mammalian examples, we can just about say that both learned to fly.

When the Earth Was Green may be too cutesy for some. The sight of a couple of sabercats rolling about in a patch of catnip will either enchant you or, well, it won’t. And I still think plants are boring. I’d happily pulp the lot of them to make books as fascinating as this one.

“This is the story of Donald Trump’s life”

Reading The Sirens’ Call by Chris Hayes for the Telegraph, 15 February 2025

It seems to me, and might seem to you, as though headlines have always ticked across the bottom of our TV screens during news broadcasts. Strange, how quickly technological innovations lose their novelty. In fact, this one is only 23-and-a-half years old: the “ticker” was reserved for sports scores until the day in 2001 when two hijacked passenger jets were flown into New York’s World Trade Center. Fox News gave its ticker over to the news service that day, and MSNBC and CNN quickly followed. Cable channels, you might say, quickly and seamlessly went from addressing their viewers’ anxieties to stoking them.

That’s Chris Hayes’s view, and he should know: the political commentator and TV news anchor hosts a weekday current affairs show on MSNBC. The Sirens’ Call, his new book, is first of all an insider’s take on the persuasion game. Hayes is a hard worker, and a bit of a showman. When he started, he imagined his regular TV appearances would bring him some acclaim. “And so,” he writes, “the Star seeks recognition and gets, instead, attention.” This experience is now common. Thanks to the black mirrors in our pockets, we’re now all stars of our own reality TV show.

To explain how he and the rest of smartphone-wielding humanity ended up in this peculiar pickle – “akin to life in a failed state, a society that had some governing regime that has disintegrated and fallen into a kind of attentional warlordism” – Hayes sketches out three kinds of attention. There’s the conscious attention we bring to something: to a book, say, or a film, or a painting. Then there’s the involuntary attention we pay to environmental novelties (a passing wasp, a sudden breeze, an unexpected puddle). The more vigilant we are, the more easily even minor stimuli can snare our attention.

This second kind is the governing principle of advertising, an industry that over the last two decades has metastasised into something vast and insidious: call it “the attention economy”. Everything is an advertisement now, especially the news. The ticker and its evolved cousins, the infinitely scrolling feed (think X) and the autoplaying video-stream (think TikTok), exist to maintain your hypervigilance. You can, like Hayes, write a book so engaging that it earns the reader’s conscious focus over several hours. If you want to make money, though – with due respect to Scribe’s sales department – you’re better off snaring the user’s involuntary attention over and over again with a procession of conspiracy theories and cat videos.

The third form of attention in Hayes’s typology is social attention: that capacity for involuntary attention that we reserve for events relating specifically to ourselves. Psychologists dub this the “cocktail-party effect”, from our unerring ability to catch the sound of our own name uttered from across a crowded and noisy room. Social attention is extraordinarily pregnant with meaning. Indeed, without a steady diet of social attention, we suffer both mentally and physically. Why do we post anything on social media? Because we want others to see us. “But,” says Hayes, “there’s a catch… we want to be recognised as human by another human, as a subject by another subject, in order for it to truly be recognition. But We Who Post can never quite achieve that.”

In feeding ourselves with the social attention of strangers, we have been creating synthetic versions of our most fundamental desire, and perfecting machines for the manufacture of empty calories. “This is the story of Donald Trump’s life,” Hayes explains, by way of example: “wanting recognition, instead getting attention, and then becoming addicted to attention itself, because he can’t quite understand the difference, even though deep in his psyche there’s a howling vortex that fame can never fill.” Elon Musk gets even harsher treatment. “What does the world’s richest man want that he cannot have?” Hayes wonders. “What will he pay the biggest premium for? He can buy whatever he desires. There is no luxury past his grasp.” The answer, as Musk’s financially disastrous purchase of Twitter demonstrates all too clearly, and “to a pathological degree, with an unsteady obsessiveness that’s thrown his fortune into question”, is recognition. “He wants to be recognised, to be seen in a deep and human sense,” Hayes writes. “Musk spent $44 billion to buy himself what poor pathetic Willy Loman [in Arthur Miller’s play Death of a Salesman] couldn’t have. Yet it can’t be purchased at any sum.”

We’re not short of books about how our digital helpmates are ushering in the End of Days. German psychologist Gerd Gigerenzer’s How to Stay Smart in a Smart World (2022) gets under the hood of systems that ape human wisdom just well enough to disarm us, but not nearly well enough to deliver happiness or social justice. The US social psychologist Jonathan Haidt took some flak for over-egging his arguments in The Anxious Generation (2024), but the studies he cites are solid enough, and their statistics amount to a litany of depression, self-harm and suicide among young (and predominantly female) users of social media. In Unwired (2023), Gaia Bernstein, a law professor at Seton Hall University in New Jersey, explains how we can (and should) sue GAMA (Google, Amazon, Meta, Apple) for our children’s lost childhood.

Among a crowded field, Hayes singles out Johann Hari’s 2022 book Stolen Focus for praise, though this doesn’t reflect well on Hayes himself, whose solutions to our digital predicament are weak beer compared to Hari’s. Hari, like Gigerenzer and Bernstein, had bold ideas about civil resistance. He used his final pages to construct a bare-bones social protest movement.

Hayes, by contrast, “fervently hopes” that the markets will somehow self-correct, so that newspapers, in particular, will win back their market share, ushering in an analogue, pre-attention-age means of directing attention in place of the current attention-age version. “I think (and fervently hope) we will see increasing growth in businesses, technologies, and models of consumption that seek to evade or upend the punishing and exhausting reality of the endless attention commodification we’re living through,” Hayes says. But what evidence has he that such a surprising reversal in our cultural fortunes is imminent? The spread of farmers’ markets in US cities and the resurgence of vinyl in record stores. I’d love to believe him, but if I were an investor I’d show him the door.

With so many other writers making analogous points with near-revolutionary force, The Sirens’ Call says more about Hayes than it does about our crisis. He’s the very picture of an intelligent, engaged liberal, and I came away admiring him. I also worried that history will be no kinder to his type than it was to the Russian liberals of 1917.

 

Anything but a safe bet

Reading The Gambling Animal: Humanity’s evolutionary winning streak—and how we risk it all by Glenn Harrison and Don Ross. For New Scientist, 29 January 2025

Insights into animal evolution used to come from studying a creature’s evolutionary relationships to its closest relatives. To lampoon the idea slightly: we once saw human beings as a kind of chimp.

Our perspectives have widened: looking across entire ecosystems, we begin to see what drives animals who share the same environment toward similar survival solutions. This is convergent evolution — the process by which, say, if you’re a vertebrate living in an aqueous medium, you’re almost certainly going to end up looking like a fish.

Economists Glenn Harrison and Don Ross look at this process from an even further remove: they study evolution in terms of risks to a species’ survival, and trace the ways animals evolve to mitigate those risks. From this distance, it makes more sense to talk about communities and societies than about individuals.

We used to understand social behaviour as the expression of intelligence, and that intelligence was rather simplistically conceived. Social animals thought at least a little bit “like us”. Of course this was never more than hand-waving in the absence of good data. Now Harrison and Ross arrive with good news from their research station amid the grasslands of South Africa: they’ve worked out how elephants think, why they never forget (the old saw is true), and why Pleistocene elephants and humans both acquired such huge and peculiar brains. Their encephalisation suggests they co-evolved a neurological solution to the climate’s growing unpredictability. Faced with a landscape that was rapidly drying out, they both learned how to gamble on the likely location of future resources.

But while humans developed an overgrown frontal cortex, and learned to imagine, elephants overgrew their cerebellum, and learned to remember. For most of evolutionary history, the elephants were more successful than the hominins. Only recently has our borderline-delusional thinking allowed us to outcompete the once-ubiquitous elephant.

Harrison and Ross are out to write a dense, complex, closely argued exposition of their risk-and-reward experiments with humans and elephants, and to discuss the evolutionary implications of this work. They are not writing a work of literature. It may take a chapter or two for the casual reader to settle into their meticulous style. Treats lie in store for those who stay patient. Not the least of them is a mischievously conceived “science fiction”, laying out exactly what elephant scientists in some wildly alternate Earth might make of those desperately challenged and almost-extinct humans, struggling out there in the veldt. The point is not merely to have fun (although the authors’ intellectual exuberance is clear); it is to describe the workings of a complex but fundamentally non-human intelligence: a mind that weighs probabilities far more easily than it dreams up might-bes and nice-to-haves.

How does a mind that can’t remember more than seven numbers for more than five minutes still arrive at a decent scientific understanding of the world? The authors cheerfully admit that, having worked for so long with elephants, they find humans ever more baffling.

Tracing the way human societies evolved to manage risk, from the savannah to Wall Street, the authors note that while human individuals are mildly risk-averse, they innovate behavioural norms — and from those norms, institutions — that collectivise risk with astonishing effectiveness. The (possibly terminal) flowering of this ability may be the concept of limited liability, pioneered in New York State in 1811, which has turbocharged the species’ runaway growth “across multiple dimensions, particularly of population and per-capita wealth.” However much you and I might fear the future, the institutions we have built are free to take the most horrible chances — not least, in recent decades, with the climate.

Human-style thinking is an unbelievably high-risk strategy that has paid off only because humans have enjoyed a quite incredible evolutionary winning streak. But past performance is no guarantee of future returns, and the authors are far from optimistic about our prospects: “The history of humans,” they suggest, “is not a record of safe bets.”

“Starvation… starvation… starvation… died at front…”

Reading The Forbidden Garden: The Botanists of Besieged Leningrad and Their Impossible Choice by Simon Parkin. For Nature, 14 January 2025

Past Simon Parkin’s account of the siege of Leningrad, and the fate there of the world’s first proper seed bank, past his postscript and his afterword, there are eight pages which — for people who know this story already — will be worth the rest of the book combined.

It’s the staff roll-call, meticulously assembled from Institute records and other sources, of what Parkin calls simply the Plant Institute. That is, more fully, the Leningrad hub of the Bureau of Applied Botany, the Vsesoyuzny Institut Rastenievodstva, founded in the nineteenth century by German horticulturalist and botanist Eduard August von Regel and vastly expanded by Russian Soviet agronomist Nikolai Vavilov.

It does not make for easy reading.

“Starvation… starvation… starvation… died at front…” Between 8 September 1941 and 27 January 1944, while German forces besieged the city, the staff of the Institute in St Isaac’s Square sacrificed themselves, one by one, to protect a collection whose whole raison d’être was to one day save humanity from starvation.

While, just around the corner, the two million artefacts of Leningrad’s Hermitage art museum were squirrelled away for safety, the Plant Institute faced problems of a different order. Its 2,500 species — hundreds of thousands of seeds, rhizomes and tubers — were alive and needed to be kept a degree or two above freezing. And among those, 380,000 examples of potato, rye and other crops would only survive if planted annually. This in a city that was being shelled for up to eighteen hours at a time and where the temperature could — and in February 1942, did — fall to around -40°C.

Iogan Eikhfeld, the institute’s director following Vavilov’s disappearance (his arrest and secret imprisonment, in fact), was evacuated to the town of Krasnoufimsk in the Ural mountains. A train containing a large part of the collection was to follow, but never made it. Eikhfeld eventually got word to the Institute, begging his staff to eat the collection and save themselves. But they had lost the collection to hunger once before, in the dreadful winter of 1921-1922; they weren’t going to lose it again.

January and February 1942 were the worst months. In the dark, freezing building of the Institute, workers prepared seeds for long-term preservation. They divided the collection into several duplicate parts, while bombs burst around them.

The Germans never did succeed in overrunning Leningrad. The rats did. That first winter, hordes of vermin swarmed the building. No effort to protect the collection proved rat-proof: they’d break into the ventilated metal boxes to devour the seeds. Still, of the Institute’s quarter of a million accessions, only 40,000 were consumed by vermin or failed to germinate.

The collection survived, after a fashion. The plantsman and Stalinist poster-child Trofim Lysenko — Vavilov’s inveterate opponent — maintained that the whole enterprise was disordered, and for a long time, until the 1970s, it was allowed to deteriorate.

Contributions from abroad helped sustain it. It once received potatoes from the University of Tucumán in Argentina, thanks to a chance meeting between its director Peter Zhukovsky and a German plant collector, Heinz Brücher. In the 1990s it emerged that during the war, Brücher had been an officer in the SS Nazi paramilitary group, leading a special commando unit charged with raiding Soviet agricultural experimental stations. So Brücher hadn’t really been donating valuable varieties of potato after all: he had been returning them.

The fortunes of war
The Forbidden Garden of Leningrad is a moving and desperately sad account of human generosity and sacrifice. If it falls short anywhere, it’s at exactly the place Parkin himself identifies. In this city laid to waste, among the bodies of the fallen, the frozen — in some hideous cases, the half-eaten — starving people make for rotten witnesses to their own condition. The author only had scraps to go on.

And you can’t research and produce at the pace Parkin does without some loss of finesse; his last book, The Island of Extraordinary Captives, about the plight of foreign nationals interned by the British on the Isle of Man, only came out in 2022. Parkin tends to turn incidental details into emblems of things he hasn’t got time to discuss. The passing mention that Vavilov’s calloused hands are “an intimate sign of his deep and enduring connection to the earth”, for example, leaves the reader wanting more.

Sensation will carry an account only so far, and Parkin’s horrors are few and carefully chosen. “Some ate joiner’s glue,” he writes, “made from the bones and hooves of slaughtered animals, just about edible when boiled with bay leaves and mixed with vinegar and mustard.” A nurse is arrested “on suspicion of scavenging amputated limbs from the operating room”. At the Institute, biochemist Nikolai Rodionovich Ivanov prepares some raw-hide harnesses, “cut into tagliatelle-like strips and boiled for eight hours”, for a dinner party.

But hunger hollows out more than the belly. Soon enough, it hollows out the personality. In the relatively few interviews Parkin was able to source, he tells us, survivors from the Institute “spoke in broadly emotionless terms of how the moral, mortal dilemma they faced was, in fact, no dilemma at all”. Their argument was that, in the end, purpose sustained them better than a few extra calories. Vadim Stepanovich Lekhnovich, curator of the tuber collection, can speak for all here: “It was impossible to eat up [the collection], for what was involved was the cause of your life, the cause of your comrades’ lives.”

Parkin applies skill and intelligence to the (rather thankless) business of recasting familiar stories in a fresh light and has a reputation for winkling out obscure but important episodes of wartime history. It is reasonable, then, that he should cut to the chase and condense the science. Two 2008 books on Vavilov’s arrest, amid his scientific disagreements with Lysenko, do a better job on that front: Peter Pringle’s The Murder of Nikolai Vavilov and Gary Paul Nabhan’s brilliant though boringly titled Where Our Food Comes From. For example, Parkin dubs Lysenko’s theory of developmental plasticity an “outlier theory”, even though it wasn’t: Vavilov himself had wanted an Institute report containing a surprisingly positive chapter about Lysenko’s ideas translated into English.

Parkin does get the complicated relationship between the two agronomists, though. What perhaps caused the most friction between them was Lysenko’s ineptitude as an experimentalist. Parkin, to his credit, nails the human and political context with a few adept and well-timed asides.

And he broadens his account to depict what, to a modern audience, is a very strange world indeed — a pre-‘green revolution’ world in which even the richest nations lived under the threat of starvation, even in times of peace; and a world which, when it went to war, wielded famine as a weapon.

The Forbidden Garden of Leningrad is a hugely enjoyable book. Parkin’s chief accomplishment, though, has been to unshackle an important story from its many and complex ties to botany, genetics and developmental science, and lend it a real edge of human desperation.

I’d sooner gamble

Speculation around the 2024 US election prompted this article for the Telegraph, about the dark arts of prediction

On July 21, the day Joe Biden stood down, I bet £50 that Gretchen Whitmer, the governor of Michigan, would end up winning the 2024 US presidential election. My wife remembers Whitmer from their student days, and reckons she’s a star in the making. My £50 would have earned me £2500 had she actually stood for president and won. But history makes fools of us all, and my bet bought me barely a day of that warm, Walter-Mittyish feeling that comes when you stake a claim in other people’s business.

The polls this election cycle indicated a tight race – underestimating Trump’s reach. But cast your mind back to 2016, when the election forecaster Nate Silver said Donald Trump stood a 29 per cent chance of winning the US presidency. The betting market, on the eve of that election, put Trump on an even lower 18 per cent chance. Gamblers eyed up the difference, took a punt, and did very well. And everyone else called Silver an idiot for not spotting Trump’s eventual win.
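The gamblers' logic was simple arithmetic. A quick sketch (hypothetical stake, and assuming, as the gamblers did, that Silver's 29 per cent was nearer the truth than the market's 18 per cent):

```python
# Expected value of backing Trump in 2016, assuming Silver's 29% was a
# better estimate than the 18% implied by betting-market prices.
# The stake is hypothetical; the probabilities are as quoted above.
stake = 100.0      # pounds wagered
p_market = 0.18    # probability implied by the betting market
p_true = 0.29      # Silver's model estimate

# A market pricing the event at 18% pays out stake / 0.18 in total on a
# win, i.e. a profit of stake * (1/p_market - 1).
profit_if_win = stake * (1 / p_market - 1)   # ~£455.56

# Average result if the true chance of winning is 29%.
expected_value = p_true * profit_if_win - (1 - p_true) * stake

print(f"Expected value per £{stake:.0f} staked: £{expected_value:.2f}")  # ~£61
```

On those assumptions, every £100 staked returned about £61 on average. The gamblers weren't seeing the future; they were buying probability the market had mispriced.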

Their mistake was to think that Silver was a fortune-teller.

Divination is a 6,000-year-old practice that promises to sate our human hunger for certainty. On the other hand, gambling on future events – as the commercial operation we know today – began only a few hundred years ago in the casinos of Italy. Gambling promises nothing, and it only really works if you understand the mathematics.

The assumption that the world is inherently unpredictable – so that every action has an upside and a downside – got its first formal expression in Jacob Bernoulli’s 1713 treatise Ars Conjectandi (“The Art of Conjecturing”), and many of us still can’t wrap our heads around it. We’d sooner embrace certainties, however specious, than take risks, however measurable.
We’re risk-averse by nature, because the answer to the question “Well, what’s the worst that could happen?” has, over the course of evolution, been very bad indeed. You could fall. You could be bitten. You could have your arm ripped off. (Surprise a cat with a cucumber and it’ll jump out of its skin, because it’s still afraid of the snakes that stalked its ancestors.)

Thousands of years ago, you might have thrown dice to see who buys the next round, but you’d head to the Oracle to learn about events that could really change your life. A forthcoming exhibition at the Bodleian Library in Oxford, Oracles, Omens and Answers, takes a historical look at our attempts to divine the future. You might assume those Chinese oracle bones are curios from a distant and more innocent time – except that, turning a corner, you come across a book by Joan Quigley, who was in-house astrologer to US president Ronald Reagan. Our relationship to the future hasn’t changed very much, after all. (Nancy Reagan reached out to Quigley after a would-be assassin’s bullet tore through her husband’s lung. What crutch would I reach for, I wonder, at a moment like that?)

The problem with divination is that it doesn’t work – and nowadays its failures are plain for anyone to see. It wasn’t always so. In a world radically simpler than our own, fewer things can happen, and there is a better chance of one of them happening in accordance with a prediction. This turned omens into powerful political weapons. No wonder, then, that in 11 AD Augustus banned predictions pertaining to the date of someone’s death, while at the same time making his own horoscope public. At a stroke, he turned astrology from an existential threat into a branch of his own PR machine.

The Bamoun state of western Cameroon had an even surer method of governing divination, one in effect until the early 20th century. If you asked a diviner whether someone important would live or die, and the diviner said they’d live, but they died anyway, the state put you, rather than the diviner, to death.

It used to be that you could throw a sheep’s shoulder blade on the flames and tell the future from the cracks that the fire made in the bone. Now that life is more complicated, anything but the most complicated forms of divination seems fatuous.

The daddy of them all is astrology: “the ancient world’s most ambitious applied mathematics problem”, according to the science historian Alexander Boxer. There’s a passage in Boxer’s book A Scheme of Heaven describing how a particularly fine observation, made by Hipparchus in 130 BC, depended on his going back over records that must have been many hundreds of years old. The Babylonian astronomical diaries stretch from 652 BC to 61 BC, making them (as far as we know) the longest continuous research project ever undertaken.

You don’t go to that amount of effort pursuing claims that are clearly false. You do it in pursuit of cosmological regularities that, if you could only isolate them, would bring order and peace to your world. Today’s evangelists for artificial intelligence should take note of Boxer, who writes: “Those of us who are enthusiastic about the promise of numerical data to unlock the secrets of ourselves and our world would do well simply to acknowledge that others have come this way before.”

Astrology has proved adaptable. Classical astrology assumed that we lived in a deterministic world – one in which every event is causally determined by preceding events. You can trace the first cracks in this fixed view of the world all the way back to the medieval Christian church and its pesky insistence on free will (without which one cannot sin).

In spite of powerful Church opposition, astrology clung on in its old form until the Black Death, when its conspicuous failure to predict the death of a third of Europe called time on (nearly) everyone’s credulity. All of a sudden, and with what fleet footwork one can only imagine, horoscopists decided that your fate depended, not just upon your birth date, but also upon when you visited the horoscopist. This muddied the waters wonderfully, and made today’s playful, me-friendly astrologers – particularly popular on TikTok – possible.

***

The problem with trying to relate events to the movement of the planets is not that you won’t find any correlations. The problem is that there are correlations everywhere you look.
And these days, of course, we don’t even have to look: modern machine-learning algorithms are correlation monsters; they can make pretty much any signal correlate with any other. In their recent book AI Snake Oil, computer scientists Arvind Narayanan and Sayash Kapoor spend a good many pages dissecting the promise of predictive artificial intelligence (for instance, statistical software that claims to predict crimes before they happen). Where it fails, it fails for exactly the same reason astrology fails – because it’s churning through an ultimately meaningless data set. The authors conclude that the immediate dangers from AI “largely stem from… our desperate and untutored keenness for prediction.”
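
It’s easy to demonstrate how cheaply such correlations come. Set a computer loose on a few dozen random walks – series that, by construction, contain no signal at all – and it will find pairs that track each other beautifully. A minimal sketch (my own toy demonstration, not the authors’ code; the sizes and threshold are arbitrary choices):

```python
# Generate 40 random walks (pure noise) and count how many of the 780
# possible pairs nonetheless correlate strongly - spurious correlation
# conjured out of nothing.
import random

def random_walk(n, seed):
    rng = random.Random(seed)
    x, series = 0.0, []
    for _ in range(n):
        x += rng.gauss(0, 1)
        series.append(x)
    return series

def correlation(a, b):
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    var_a = sum((x - mean_a) ** 2 for x in a)
    var_b = sum((y - mean_b) ** 2 for y in b)
    return cov / (var_a * var_b) ** 0.5

walks = [random_walk(200, seed) for seed in range(40)]
strong = 0
for i in range(len(walks)):
    for j in range(i + 1, len(walks)):
        if abs(correlation(walks[i], walks[j])) > 0.8:
            strong += 1
print(f"{strong} of 780 pairs of pure noise correlate beyond 0.8")
```

Run it and a handful of pairs will usually clear the bar: noise masquerading, very convincingly, as signal.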

The promise of such mechanical prediction is essentially astrological. We absolutely can use it to predict the future, but only if the world turns out, underneath all that roiling complexity, to be deterministic.

There are some areas in which our predictive powers have genuinely improved. When the European Centre for Medium-Range Weather Forecasts opened in Reading in 1979, it could see three days into the future. Six years later, it could see five days ahead. In 2012 it could see eight days ahead – far enough to predict Hurricane Sandy. By next year it expects to be able to predict high-impact events a fortnight before they happen.

Drunk on achievements in understanding atmospheric physics, some enthusiasts expect to predict human weather using much the same methods. They’re encouraged by numerical analyses that throw up glancing insights into corners of human behaviour. Purchasing trends can predict the ebb and flow of conflict because everyone rushes out to buy supplies in advance of the bombings. Trading algorithms predicted the post-Covid recovery of financial markets weeks before it happened.

Nonetheless, it is a classic error to mistake reality for the analogy you just used to describe it. Political weather is not remotely the same as weather. Still, the dream persists among statistics-savvy, self-styled “superforecasters”, who regularly peddle ideas such as “mirror worlds” and “policy flight simulators” to help us navigate the future of complex economic and social systems.

The danger with such prophecies is not that they are wrong; it’s that they have the power to make themselves come true. Take election polling. Calling the election before it happens heartens front-runners, disheartens laggards, and encourages everyone to alter their campaigns to address the anxieties and fears of the moment. Indeed, the easiest, most sure-fire way of predicting the future is to get an iron grip on the present – something the Soviets knew all too well. Then the future becomes, quite literally, what you make it.

There are other dangers, as we increasingly trust predictive technology with our lives. For instance, GPS uses a predictive algorithm in combination with satellite signals to plot our trajectory. And in December last year, a driver followed his satnav through Essex, down a little lane in Great Dunmow called Flitch Way, and straight into the River Chelmer.
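
The predictive algorithm in question is usually some variant of the Kalman filter: at every tick, the receiver predicts your next position from its motion model, then corrects that guess against the latest satellite fix, weighting each according to how much it trusts them. A one-dimensional toy version (my own sketch, with invented noise values – not any real receiver’s code) shows what happens when trust in the fix collapses:

```python
# A toy 1-D position tracker in the spirit of the Kalman filters GPS
# receivers use. Each step: PREDICT from the motion model, then UPDATE
# toward the noisy satellite fix. (Illustrative values throughout.)

def kalman_step(x, p, velocity, dt, fix, q=0.1, r=25.0):
    """x, p: position estimate and its variance; fix: measured position;
    q, r: process and measurement noise variances (assumed values)."""
    # Predict: coast forward on the motion model alone.
    x_pred = x + velocity * dt
    p_pred = p + q
    # Update: blend in the fix, weighted by relative confidence.
    gain = p_pred / (p_pred + r)   # 0 = trust the model, 1 = trust the fix
    x_new = x_pred + gain * (fix - x_pred)
    p_new = (1 - gain) * p_pred
    return x_new, p_new

x, p = 0.0, 1.0
for t, fix in enumerate([1.1, 1.9, 3.2, 4.0], start=1):
    x, p = kalman_step(x, p, velocity=1.0, dt=1.0, fix=fix)
    print(f"t={t}: estimate {x:.2f} (variance {p:.2f})")
# With r large (poor reception), the gain shrinks toward zero and the
# output is almost pure prediction: the unit keeps extrapolating,
# confidently, all the way to the riverbank.
```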

We should not assume, just because the oracle is mechanical, that it’s infallible. There’s a story Isaac Asimov wrote in 1955 called Franchise, about a computer that, by chugging through the buzzing confusion of the world, can pinpoint the one individual whose galvanic skin response to random questions reveals which political candidate would be (and therefore is) the winner in any given election.

Because he wants to talk about correlation, computation, and big data, Asimov skates over the obvious point here – that a system like that can never know if it’s broken. And if that’s what certainty looks like, well, I’d sooner gamble.