Engineers of Human Souls

11 January 2024, from Bridge Street Press…

 

ENGINEERS OF HUMAN SOULS is an intimate and shocking group portrait of four novelists whose political ambitions shaped a century.

Maurice Barrès, who first wielded the politics of identity. Gabriele D’Annunzio, whose poetry became a blueprint for fascism. Maxim Gorky, dramatist of the working class and Stalin’s cheerleader. The Maoist Ding Ling, whose stories exculpated the regime that kept her imprisoned.

Each writer nursed an extravagant vision of the future. All four were lured to the centre of political action, where they created the blueprints and practices that sustained notorious regimes.

These stories – of courage and compromise, vanity and malevolence – speak urgently to the uncontrollable power of words.

“A failure by the British state”

Reading The Poison Line by Cara McGoogan for the Telegraph, 17 September 2023

Lord Mayor Treloar College, founded in 1907 for the education and care of physically disabled children, was more than just a school for Ade Goodyear. The teaching and medical staff were more like an extended family. Dr Anthony Aronstam, director of Treloar’s haemophilia centre, used to invite Ade and his schoolfriends over to his house where they drank lemonade and swam in the pool.

One afternoon in the summer of 1984 Ade found Aronstam bent over his desk, trembling. “‘We’ve fucked up,’ Aronstam said. ‘We’ve messed up, boys. I’ve messed up. It’s all gone wrong.’”

In the face of a gathering global calamity, Aronstam had been assessing Ade, without his knowledge, for signs of AIDS. Two of Ade’s schoolfriends were already diagnosed. One, Richard Campbell, had already died. By 1986 Aronstam had forty-three patients who were HIV positive. He wrote in a report, “There are gloomier predictions about, which suggest that up to 100 per cent of the infected haemophiliac population will eventually succumb to the virus.”

The Poison Line is the first book by journalist Cara McGoogan. It began life as a couple of features written for this paper in the opening week of the Infected Blood Inquiry in 2019.
It may seem thin praise to single out the way McGoogan has arranged her material here, but truly the effort has been superhuman. This is the story of a global medical scandal, implicating health services, pharmaceutical companies and whole governments, and unfolding slowly enough, and meeting obstacles enough, that many of its victims died before they ever saw justice, never mind compensation. It is told, for the most part, through the recollections of the victims, their families, their doctors, their legal and political representatives. That so many individual stories here burn their way into the reader’s skull is testament to the strength of the source material, of course, but there were so many plates McGoogan could have dropped here and didn’t, so many stories to leave hanging and implications to leave unexplored, that there ought to be some sort of award for literary juggling established in her name.

Treloar College is just the most familiar domestic emblem of a crisis that played out across the US, UK, mainland Europe, and south-east Asia. It began when a new, much quicker, more convenient and more comfortable way was found of administering blood clotting factors to haemophiliacs. Factor VIII, a freeze-dried powder derived from blood, was infected with hepatitis B, but since this infection was common among haemophiliacs anyway, and went away in time, the issue was ignored. Consequently, other agents infecting Factor VIII went undetected, including HIV and hepatitis C.

Institution after institution doubled down on their original error in allowing and promoting a tainted product. In the UK, ministers themselves come out of this account surprisingly well, as McGoogan traces their appalled investigations into decades of deliberate cover-up. It was left to Jeremy Hunt, “the epitome of the establishment politician”, to sum up the disaster as “a failure by the British state. I don’t think there’s any other way to describe it.”

The second half of Poison Line, about the victims’ courtroom battles, reveals the economic drivers of the scandal. By the 1990s plasma was more valuable than gold and oil. Most Factor VIII was produced in the US, and American blood bankers, who are allowed to pay donors for plasma, were gathering blood from wherever they could: outside nightclubs, from inside prisons, and from a centre in Nicaragua nicknamed the ‘House of Vampires’, which collected plasma from up to a thousand people a day. Then there was the way Factor VIII was made: any one injection could contain the blood of twenty-five thousand people.

As McGoogan’s account gathers pace and scale, the more existential the issues become. At what point does a corporation countenance the death of its customers? In what institutional setting will a doctor think it reasonable to tell an AIDS-infected mother that “Women like you should be sterilized”? What level of conformism does it take for the mother of a seventeen-year-old, infected with HIV from a haemophilia treatment, to tell him that he’s brought shame on her, and throw him out of the house?

By the closing pages, we seem to have left the news pages behind entirely, and to be wrestling with something that looks very like the tragedy of the human condition.

New flavours of intelligence

Reading about AI and governance for New Scientist, 13 September 2023

A sorcerer’s apprentice decides to use magic to help clean his master’s castle. The broom he enchants works well, dousing the floors with pails full of water. When the work is finished, the apprentice tries to stop the broom. Then, he tries to smash the broom. But the broom simply splits and regrows, working twice as hard as before, four times as hard, eight times as hard… until the rooms are awash and the apprentice all but drowns.

I wonder if Johann Wolfgang von Goethe’s 1797 poem sprang to mind as Mustafa Suleyman (co-founder of AI pioneers DeepMind, now CEO of Inflection AI) composed his new book, The Coming Wave? Or perhaps the shade of Robert Oppenheimer darkened Suleyman’s descriptions of artificial intelligence, and his own not insignificant role in its rise? “Decades after their invention,” he muses, “the architects of the atomic bomb could no more stop a nuclear war than Henry Ford could stop a car accident.”

Suleyman and his peers, having launched artificially intelligent systems upon the world, are right to tremble. At one point Suleyman compares AI to “an evolutionary burst like the Cambrian explosion, the most intense eruption of new species in the Earth’s history.”

The Coming Wave is mostly about the destabilising effects of new technologies. It describes a wildly asymmetric world where a single quantum computer can render the world’s entire encryption infrastructure redundant, and an AI mapping new drugs can be repurposed to look for toxins at the press of a return key.

Extreme futures beckon: would you prefer subjection under an authoritarian surveillance state, or radical self-reliance in a world where “an array of assistants… when asked to create a school, a hospital, or an army, can make it happen in a realistic timeframe”?

The predatory city states dominating this latter, neo-Renaissance future may seem attractive to some. Suleyman is not so sure: “Renaissance would be great,” he writes; “unceasing war with tomorrow’s military technology, not so much.”

A third future possibility is infocalypse, “where the information ecosystem grounding knowledge, trust, and social cohesion… falls apart.”

We’ll come back to this.

As we navigate between these futures, we should stay focused on current challenges. “I’ve gone to countless meetings trying to raise questions about synthetic media and misinformation, or privacy, or lethal autonomous weapons,” Suleyman complains, “and instead spent the time answering esoteric questions from otherwise intelligent people about consciousness, the Singularity, and other matters irrelevant to our world right now.”

Historian David Runciman makes an analogous point in The Handover, an impressive (and impressively concise) history of the limited liability company and the modern nation state. The emergence of both “artificial agents” at the end of the 18th century was, Runciman argues, “the first Singularity”, when we tied our individual fates to two distinct but compatible autonomous computational systems.

“These bodies and institutions have a lot more in common with robots than we might think,” argues Runciman. Our political systems are already radically artificial and autonomous, and if we fail to appreciate this, we won’t understand what to do, or what to fear, when they acquire new flavours of intelligence.

Long-lived, sustainable, dynamic states — ones with a healthy balance between political power and civil society — won’t keel over under the onslaught of helpful AI, Runciman predicts. They’ll embrace it, and grow increasingly automated and disconnected from human affairs. How will we ever escape this burgeoning machine utopia?

Well, human freedom may still be a force to reckon with, according to Igor Tulchinsky. Writing with Christopher Mason in The Age of Prediction, Tulchinsky explores why the more predictable world ushered in by AI may not necessarily turn out to be a safer one. Humans evolved to take risks, and weird incentives emerge whenever predictability increases and risk appears to decline.

Tulchinsky, a quant who analyzes the data flows in financial markets, and Mason, a geneticist who maps dynamics across human and microbial genomes, make odd bedfellows. Mason, reasonably enough, welcomes any advance that makes medicine more reliable. Tulchinsky fears lest perfect prediction in the markets renders humans as docile and demoralised as cattle. The authors’ spirited dialogue illuminates their detailed survey of what predictive technologies actually do, in theatres from warfare to recruitment, policing to politics.

Let’s say Tulchinsky and Mason are right, and that individual free will survives governance by all-seeing machines. It does not follow at all that human societies will survive their paternalistic attentions.

This was the unexpected sting in the tail delivered by Edward Geist in Deterrence under Uncertainty, a heavyweight but unexpectedly gripping examination of AI’s role in nuclear warfare.

Geist, steeped in the history and tradecraft of deception, reckons the smartest agent — be it meat or machine — can be rendered self-destructively stupid by an elegant bit of subterfuge. Fakery is so cheap, easy and effective that Geist envisions a future in which artificially intelligent “fog-of-war machines” create a world that favours neither belligerents nor conciliators, but deceivers: “those who seek to confound and mislead their rivals.”

In Geist’s hands, Suleyman’s “infocalypse” becomes a weapon, far cleaner and cheaper than any mere bomb. Imagine future wars fought entirely through mind games. In this world of shifting appearances, littered with bloody accidents and mutual misconstruals, people are persuaded that their adversary does not want to hurt them. Rather than living in fear of retaliation, they come to realise the adversary’s values are, and always have been, better than their own.
Depending on your interests, your politics, and your sensitivity to disinformation, you may well suspect that this particular infocalyptic future is already upon us.

And, says Geist, at his most Machiavellian (he is the most difficult of the writers here; also the most enjoyable): “would it not be much more preferable for one’s adversaries to decide one had been right all along, and welcome one’s triumph?”

 

“Trees and bushes and hills. Or houses and streets. Or rooms and furniture.”

Reading The Wolves of Eternity by Karl Ove Knausgaard for the Telegraph, 12 September 2023

1986: nineteen-year-old Syvert Løyning returns home from navy service to find his widowed mother chain-smoking. If she’s not dying of lung cancer, she’s making a good stab at it. Her declining health leaves Syvert spending most of his evenings looking after his younger brother Joar. His efforts in this direction are sincere but often ineffective.

This is Knausgaard’s long, compelling prequel to 2021’s The Morning Star, a novel which saw time collapse and brought the realms of the living and the dead into collision. Next to that metaphysical malarkey, The Wolves… offers the unprepared reader the coldest of cold openers. What should we make of Syvert’s slow, ineluctable decline amid the edgelands of southern Norway (woods and heaths and lakes, football fields and filling stations)? What of his equally slow recovery, as he acquires a girlfriend (Lisa), a job (at an undertaker’s) and a purpose — tracking down his dead father’s secret second family in the former Soviet Union?

Half way through, The Wolves… breaks off and moves to present-day Russia to follow Syvert’s lost relation Alevtina on her way to her step-dad’s 80th birthday party. Alevtina describes herself self-deprecatingly (and accurately) as “a kind of hippy biologist who talked to trees”. Knausgaard makes a show of giving her the same close attention he devoted to Syvert, but there’s a sight more handwaving going on now: a sophomoric piling-up of cultural capital about Death, Hell and various species of Russian messianism and biophilia. Never trust a protagonist who’s a college lecturer.

Both halves of The Wolves… have their strengths, of course. Alevtina’s half is flashier by far; consider this time-reversed vision of some woods: “If you filmed that, an injured or sick animal that crept away to hide, died and rotted, and then ran that film backwards, the soil would pull apart to become an animal that rose up and slunk away.”

But there’s a greater power, I reckon, to be gleaned in the ordinariness of things. The pity we feel as Syvert sorts out his father’s old boxes, “not to get closer to him, more the opposite, to remove him from me, put him back in his boxes, back with his things”. And terror, when it occurs to Syvert that the World (which The Wolves… sees to its end) is only “what the eyes could see… A sort of duct in front of and behind us, with various things in it. Trees and bushes and hills. Or houses and streets. Or rooms and furniture.”

Knausgaard should resist the siren call of his library card, and go on writing very big books about nothing. The less The Wolves… is about, the more it has to say.

An all-out cyberwar is coming

Watching Billion Dollar Heist by Daniel Gordon for New Scientist, 6 September 2023

On Thursday, 4 February 2016, after a year of meticulous malware-enabled close observation of its computer systems, an international criminal group called Lazarus tried to steal a billion dollars from Bangladesh Bank. The country’s central bank was a soft target, with no firewall, and simple $10 electronic switches connecting it to the SWIFT global payment system — used by over 11,000 financial institutions around the world.

The Federal Reserve in New York, meanwhile, is the largest bank in the world, housed in one of its most secure buildings, with its own power plant, water supply and communications system. One problem: back in 2016, it hadn’t thought to put in an emergency hotline for its customers. This, in an institution responsible for providing financial services to foreign central banks and international organisations as well as to the US government, has since proved to be, shall we say, a source of embarrassment.

An interview with British investigative journalist Misha Glenny provides the narrative for Billion Dollar Heist, a documentary that makes up, with its talking heads and comic-book graphics, what it lacks in expensive location shots. Reuters journalist Krishna Das guides us through the heist itself. Of the 35 financial transactions Lazarus attempted, the Federal Reserve Bank of New York cleared five, sending 101 million dollars in two directions: $20 million to Sri Lanka (where a spelling error raised a red flag and stopped the transaction) and $81 million to the Philippines where, under Philippine banking laws, the stolen funds could not be frozen until a criminal case was lodged. Most of the $81 million disappeared into the country’s casino industry, which is exempted from anti-money laundering laws, and was lost, presumably forever.

Requests for payment continued to pour in, totalling around a billion dollars. By then, though, and frankly more by luck than good management, the fraud had been detected. (The fraud: not the hack. That took months to unpick.)

Finnish computer security expert Mikko Hyppönen and Eric Chien, technical director of Symantec’s Security Technology and Response division, lead the film’s discussion of the implications.

The Lazarus Group, bankrolled by the North Korean government, was responsible for the heist. In 2017, a year after the events recounted here, it attacked five Asian crypto exchanges and made off with $571 million.

If they worked purely to line their own pockets, this would be bad enough, but such organisations — and there are about a dozen of them, including APT 10 (backed by China) and Sandworm (backed by Russia) — are very much thieves for hire, riding the boom in state-sponsored cybercrime that’s been triggered, we’re told here, by the growing effectiveness of the global sanctions regime.

If the daylight world of international diplomacy stops your bank accounts, you know who to call.

Billion Dollar Heist is directed by Daniel Gordon, a sports documentary maker whose 2002 film, about the 1966 North Korea national football team, drew him into more politically charged territory. True to his pedigree, he spins a logistically complex story in terms that are easy to follow. No ponderous political generalisations cloud his narrative. This is a caper movie, albeit one with a vicious sting in the tale, as Misha Glenny spends the last few minutes of screentime preparing us for the world this heist and others are ushering in. The world hasn’t had an all-out cyberwar yet, but it’s coming, care of Lazarus and other groups the US State Department has designated “Advanced Persistent Threats”.

Health services, transport networks, communications, finance and the apparatus of government: all are a single human error away from compromise, and then annihilation.

Remember that, next time you forget your keys.

Taking in the garbage

Reading Interstellar by Avi Loeb for New Scientist, 30 August 2023

On 8 January 2014, a meteor exploded above the Pacific just north of Papua New Guinea’s Manus Island.

Five years later Amir Siraj, then a research assistant for Harvard astronomer Avi Loeb, spotted it in an online catalogue at the Center for Near-Earth Object Studies, part of NASA’s Jet Propulsion Laboratory.

Partway through Interstellar, Loeb explains why he thinks the meteor comes from outside the solar system. This would make it one of only three objects so identified. The first was ‘Oumuamua, detected in 2017: a football-field-sized, pancake-shaped anomaly and the subject of Loeb’s book Extraterrestrial, to which Interstellar is a repetitive, frenetic, grandiose extension.

Since Interstellar was sent to press, Loeb’s team have gathered particles from the crash site and packed them off to labs at Harvard University, the University of California, Berkeley, and the Bruker Corporation in Germany for further analysis. Metallic spherules from outside our solar system would be a considerable find in itself.

Meanwhile Loeb is publicly airing a hypothesis which, thanks to an opinion piece on 10 February 2023, is already familiar to readers of New Scientist. He reckons this meteor might turn out to have been manufactured by extraterrestrials.

Already there has been some bad-tempered push-back, but Loeb does not care. He’s inoculated against other people’s opinions, he says in Interstellar, not least because “my first mentor in astrophysics… had a professional rival, and when my mentor died it was his rival that was asked to write his obituary in a prestigious journal.”

Loeb, who has spent a career writing about black holes, dark matter and the deep time of the universe, does not waste time arguing for the existence of spacefaring extraterrestrials. Rather, he argues that we should be looking for spacefaring extraterrestrials, or at any rate for their gear. Among the possible scenarios for First Contact, “a human-alien handshake in front of the White House” is the least likely. It’s far more likely we’ll run into some garbage or a probe of some sort, and only then, says Loeb, because we’ve taken the trouble to seek it out.

Until very recently, no astronomical instrument was built for such a purpose. But this is changing, says Loeb, who cites NASA’s Unidentified Aerial Phenomena study, launched in December 2022, and the Legacy Survey of Space and Time — a 10-year-long high-resolution record of the entire southern sky, to be conducted at the brand-new Vera C. Rubin Observatory in Chile. Then there’s Loeb’s own brainchild, The Galileo Project, meant to bring the search for extraterrestrial technological signatures “from accidental or anecdotal observations and legends to the mainstream of transparent, validated and systematic scientific research.” The roof of the Harvard College Observatory boasts the project’s first sky-scanning apparatus.

There’s more than a whiff of Quixote about this project, but Loeb’s well within his rights to say that unless we go looking for extraterrestrials, we’re never going to find them. Loeb’s dating metaphor felt painfully hokey at first, but it grew on me: are we to be cosmic wallflowers, standing around on the off-chance that some stranger comes along? Or are we going to go looking for things we’ll never spot without a bit of effort?

Readers of grand speculations by the likes of Freeman Dyson and Stanislaw Lem will find nothing in Interstellar to make them blink, aside maybe from a rather cantankerous prose style. Can we be reassured by Loeb’s promise that he and his team work only with scientific data openly available for peer review, share their findings freely, and release results only through accepted channels of scientific publication?

I’m inclined to say yes, we can. Arguments from incredulity are always a bad idea, and sneering is never a good look.

A truth told backwards

Reading Disputed Inheritance by Gregory Radick for New Scientist, 23 August 2023

In 1865 Gregor Mendel, working to understand the mechanisms of hybridisation, discovered exquisitely simple and reliable patterns of inheritance in varieties of garden pea. Rediscovered in 1900, the patterns of inheritance described in his work revealed the existence of hereditary “particles”. Today, we call these particles “genes”.

Well, of course, there’s more to the story than this. In his ambitious and spirited history of the genetic idea, Leeds-based historian of science Gregory Radick accounts for our much more nuanced, sophisticated ideas of what genetics actually is, and in so doing he asks a deceptively simple question: why, knowing what we know now, do we still bother with Mendel? Why, when we explain genetics to people, do we reach for experiments conducted by a man who had no interest in heritability, never mind evolution, and whose conclusions about heritability (in as much as he ever made any) were quite spectacularly contradicted in experiments by Darwin’s favourite plantsman Thomas Laxton in 1866? (“Where Mendel’s pea hybrids always showed just the one parental character in colour and shape, Laxton’s,” says Radick, “were sometimes blended, sometimes wholly like the one parent, sometimes wholly like the other, and sometimes mosaically like both.”)

The evidence against Mendelian genetics began accumulating almost immediately after its 20th-century rediscovery. The “genetics” we talk about today isn’t Mendelian, it’s molecular, and it arose out of other sciences: microbiology, biochemistry, X-ray crystallography, and later a whole host of data-rich sequencing technologies. “Today’s genome,” Radick explains, “the post-genomic genome, looks more like… a device for regulating the production of specific proteins in response to the constantly changing signals [the cell] receives from its environment.”

The point is not that Mendel was “wrong”. (That would be silly, like saying Newton was “wrong” for not coming up with special relativity.) The point is that we have no real need to be thinking in Mendelian terms at all any more. Couching almost the whole of modern genetics as exceptions to Mendel’s specious “rules” means constantly having to explain everything backwards.

Radick explains why this has happened, and what we can do about it. The seed of trouble was first sown in the battle (at first collegiate, then increasingly cantankerous) between the Cambridge-based William Bateson, who made it his mission to reshape biology in the image of Mendel’s experiments, and Oxford-based Walter Frank Raphael Weldon, who saw that Mendel (whose interest was hybrids, not heredity) had removed from his experiments as many ordinary sources of variability as he could. Real pea seeds are not always just yellow or green, or just round or wrinkled, and Weldon argued that actual variability should not be just idealised away.

“It seems to me that every character is at once inherited and acquired,” Weldon wrote, and of course he was right. The difficulty was what to do with that insight. “It is easy to say Mendelism does not happen,” he remarked to his friend Karl Pearson in March 1903, “but what the deuce does happen is harder every day!”

What Weldon needed, and what he pursued, was an alternative theory of heredity, but the book manuscript setting out his alternative vision was left unfinished at his death, from pneumonia, in April 1906.

Radick’s book champions the underdog, Weldon, over the victorious Bateson. Whether his account smacks of special pleading will depend on the reader’s education and interests. (Less than a century ago, geneticists in the Soviet Union faced ruin and even persecution as they defended the Mendelian idea, insufficient as that idea may seem to us now; temperatures in this field run high.)

This is not the first attempt to lay history’s ghosts to rest, and reset our ideas about genetics. That said, I can’t think of one that’s better argued, more fair-minded or more sheerly enjoyable.

Radiant with triumphant calamity

Reading Fear by Robert Peckham for the Telegraph, 25 August 2023 

Remember the UK intelligence claim that Saddam Hussein could strike the UK with a ballistic missile within 45 minutes? The story goes that this was spun out of a two-year-old conversation with a taxi driver on the Iraq-Jordan border. One thing’s for sure: fear breeds rumour breeds more fear.

Robert Peckham lives in fear, and claims we’re all of us entering “an era of insidious, mediatised fear”. This may be a case of misery seeking company. And you can see why: in 1988 this British historian of science (author of several well-received books about epidemics) narrowly missed getting blown up in a terrorist attack on the funeral of Abdul Ghaffar Khan in Jalalabad. More recently, in the summer of 2021, he quit his job at the University of Hong Kong where, he writes, “fear was palpable… friends were being hounded by the authorities, news agencies shut down and opposition leaders jailed.”

With the spread of Covid-19, Peckham’s political and medical interests dovetailed in Hong Kong in grim fashion. “A pandemic turned out to be the ultimate anti-protest weapon,” he writes, “one that the city’s chief executive, Carrie Lam, deployed ruthlessly to stifle opposition.”

Fear is the story of how, over the last 700 years or so, power has managed and manipulated its subjects through dread: of natural disasters, pandemics, revolutions, technologies, financial crashes, wars and of course, through fear of the government itself.

We see how the Catholic Church tried and failed to canalise the horrors of the Black Death into sacral terror and obedience; how instead that fear powered the Reformation. There’s a revealing section about Shakespeare’s Hamlet, a play both steeped in fear and about it, in which fear is shown sometimes engendering violence, sometimes acting as a moral brake on it. Through the bloody medium of the French revolution, we enter the modern era painfully aware that reason has proved more than capable of buttressing terror, and that the post-Enlightenment period is “radiant with triumphant calamity”.

Peckham’s history is as encyclopaedic as it is mirthless. After a striking and distressing chapter about the slave trade (every book should have one), Peckham even wonders whether “perhaps slavery has been so thoroughly embedded in free market capitalism that it can’t be dislodged, at least not without the collapse of the entire system”.

At this point, the reader is entitled to tug on the reins and double check some figures on the United Nations website. And sure enough: in the 21st century alone global life expectancy has risen seven years, literacy has risen by nine per cent (to 91 per cent) and extreme poverty is about a third of what it was at the beginning of this century.

Allow Peckham’s argument that the Machiavellian weaponisation of fear had a hand in all this: dare one suggest this was a price worth paying?

Of course this is far from the whole of Peckham’s argument. He says at the outset he wants to explore the role fear plays in promoting reform, as well as its use in repressing dissent. “What,” he asks, “would happen to all the public-spirited interventions that rely on the strategic use of fear to influence our behaviour? Don’t we need fear to take our problems seriously?”

It’s an interesting project. Too often, though, the focus on fear acts to dampen our responses, rather than enrich them. For instance, Peckham depicts Versailles as “a policed society” where “prescriptions on how to eat, talk, walk and dance kept courtiers in line, with the ever-present threat that they might be stripped of their privileges if rules of comportment were infringed”. This is at once self-evident and woefully incomplete, excluding as it does any talk of political aspiration, personal vanity, love of play, the temptations of gossip and the lure of luxe. This isn’t an insight into Versailles; it’s a gloomy version of Versailles.

There is a difference, it is true, between the trenches of Verdun, and the fear felt in those trenches, just as there is a difference between the NKVD knocking on your door, and your fear of the knock. But — and here’s the nub of the matter — is it a useful difference? Or is it merely a restatement of the obvious?

In the end, having failed to glean the riches he had hoped for, Peckham is left floundering: “Fear is always intersectional,” he writes, “an unnerving confluence of past, present and future, a convergence of the here and there.”

To which this reader replied, with some exasperation, “Oh, pull the other one!”

“Democratic leaders are more difficult to decipher”

Reading The Madman in the White House: Sigmund Freud, Ambassador Bullitt and the Lost Psychobiography of Woodrow Wilson by Patrick Weil, for the Spectator, 29 July 2023

It was a vision US president Woodrow Wilson could not resist, and it was instrumental in bringing the US into the Great War.

The treaty at Versailles and the League of Nations founded during the negotiations were meant, not just to end the current war, but to end all future wars, by ensuring that a country taking up arms against one signatory would be treated as a belligerent by all the others.

Wilson took his advisor Edward “Colonel” House’s vision of a new world order and careered off with it.

Against advice, he attended Versailles in person, let none of his staff in with him during negotiations, was quickly overwhelmed, saw his principled “fourteen points” deluged by special provisions and horse-trading, and returned home, convinced, first, that his dearest close colleagues had betrayed him (which they hadn’t); second, that the League of Nations alone could mend what the Treaty of Versailles had left broken or made worse (which it didn’t); third, that he was the vessel of divine will, and that what the world needed from him at this crucial hour was a show of principle and Christ-like sacrifice. “The stage is set,” he declared to a dumbfounded and sceptical Senate, “the destiny is disclosed. It has come about by no plan of our conceiving, but by the hand of God who has led us into this way. We cannot turn back.”

Winston Churchill had Wilson’s number: “Peace and Goodwill among all Nations abroad, but no truck with the Republican party at home. That was his ticket and that was his ruin and the ruin of much else as well.”

When it was clear he would not get everything he wanted, Wilson destroyed the bill utterly, needlessly ending US involvement in the League before it had even begun.

Several developments followed ineluctably from this. Another world war, of course; also a small but vibrant cottage industry in books explaining just what in hell Wilson had thought he was playing at. Patrick Weil’s new book captures the anger and confusion of the period. His evenness of manner sets the hysteria of his subjects into high relief; take, for example, Sigmund Freud, who wrote of Wilson: “As far as a single person can be responsible for the misery of this part of the world, he surely is.”

Anger and a certain literary curiosity drove Freud to collaborate with William C. Bullitt, Wilson’s erstwhile advisor, on a psychobiography of the man. Freud’s daughter Anna hated the final book, but one has to assume she was dealing with her own daddy issues.

Delayed by decades so as not to derail Bullitt’s political career, Thomas Woodrow Wilson: a psychological study was published in bowdlerised form, and to no very positive fanfare, in 1967. Bullitt, by then a veteran anti-communist, was chary of handing ammunition to the enemy, and suppressed his book’s most sensational interpretations, involving Wilson’s suppressed homosexuality, his overbearing father, and his Christ complex.

In 2014, Weil, a political scientist based in Paris, happened upon the original 1932 manuscript.

Are the revelations contained there enough on their own to sustain a new book? Weil is circumspect, enriching his account with a quite detailed and acute psychological biography of his own — of William Bullitt. Bullitt was a democratic idealist and political insider who found himself pushed into increasingly hawkish postures by his all-too-clear appreciation of the threat posed by Stalin’s Soviet Union. He made strange bedfellows over the years: on his deathbed he received a friendly note from Richard Nixon, “Congratulations incidentally on driving the liberal establishment out of their minds with your Wilson.”

Those readers who’ve come for bread and circuses will find that John Maynard Keynes’s 1919 book “The Economic Consequences of the Peace” chopped up Wilson nicely long ago, tracing “the disintegration of the president’s moral position and the clouding of his mind.” Then there’s the 1922 Wilson psychobiography that Freud did not endorse, though he wrote privately to its author, William Bayard Hale, that his linguistic analysis of Wilson’s prolix pronouncements had “a true spirit of psychoanalysis in it.” Then there’s Alexander and Juliette George’s Woodrow Wilson and Colonel House, a psychological portrait from 1956 that argues that Wilson’s father represented a superego whom Wilson could never satisfy. And there are several others, if you go looking.

So, is Madman a conscientious but unnecessary book about Wilson? Or an insightful but rather oddly proportioned book about Bullitt? The answer is both, and that is not necessarily a drawback. Bullitt’s growing disillusion with Wilson, his hurt, and ultimately his contempt for the man, shaped him as surely as his curiously unhappy childhood and a formative political debacle at Princeton shaped Woodrow Wilson.

“Dictators are easy to read,” Weil writes. “Democratic leaders are more difficult to decipher. However, they can be just as unbalanced as dictators and can play a truly destructive role in our history.”

This is well put, but I think Weil’s portrait of Bullitt demonstrates something broader and rather more hopeful: that politics — even realpolitik — is best understood as an affair of the heart.

The press of a single red button

Watching Christopher Nolan’s Oppenheimer for New Scientist, 19 July 2023

At 05.29 and 45 seconds on 16 July 1945, an electrical circuit clicks shut and thirty-two detonators fire, compressing a core of plutonium to criticality. The plutonium fissions, each atom splitting into lighter elements, a blast of gamma radiation and two or three more neutrons, which hurtle forth, triggering further reactions. A new world order is born: one in which the human species has the capacity to all but wipe itself from the face of the planet; a world in which the terror of annihilation helps avert global conflict, unevenly, at great cost, and by no means necessarily for ever.

J Robert Oppenheimer directed atomic bomb development at Los Alamos in New Mexico, and spent many subsequent years arguing for international arms control, and against US development of the even more powerful fusion bomb. Not only did he midwife this new Cold War world into being; he gave us the vocabulary with which to talk about it, agonise over it, and fear it.

It is possible to miss the point of Christopher Nolan’s superb biopic of Oppenheimer. One and a half hours of screen time follow the successful Trinity test of an atomic device. If all that interests you is how Nolan, a filmmaker famously wedded to analogue production and real (70mm IMAX) film, conveys the scale of an atomic explosion, you’re in for a long haul.

Oppenheimer is about the war in its hero’s head. It reflects the world in which Oppenheimer actually operated. It’s a film set in lecture rooms and laboratories, in living rooms and kitchens, shacks and bunkers. (The horror of Hiroshima is conveyed quite simply: Oppenheimer, sat in front of footage of the aftermath, cannot stand to watch, and looks away.)

Following America’s use of two atomic bombs on Japan at the end of the second world war, walls shake, exposures wobble, continuity stutters and different film stocks are muddled together to convey Oppenheimer’s increasingly nightmarish experience of the new reality. Were Nolan’s story (drawn from Kai Bird and Martin Sherwin’s biography American Prometheus) not so grippingly told, the final film, with its unvarying pace, portentous, minimalist musical score and abiding humourlessness would, I suspect, prove unwatchable: like 2020’s Tenet, a film easier to read than to watch, a three-hour-long promo video.

What transforms Oppenheimer — and makes it, for my money at any rate, Nolan’s best film since 2006’s The Prestige — is the sheer craft evident in the script.

The film orbits around two official hearings, both of which took place in the 1950s: Oppenheimer’s appeal against the revocation of his security clearance with the Atomic Energy Commission; and former AEC commissioner Lewis Strauss’s cabinet confirmation hearing as he tilted at appointment as US Commerce Secretary. Those who know Strauss’s fraught attitudes towards Oppenheimer will relish Robert Downey Jr’s screen-chewing performance as the multifaceted Strauss. Those coming to the material fresh have a cracking twist in store, as the pair’s relationship comes to vivid life in the final act of the film.

Fragments of Oppenheimer’s odyssey — from theoretical astrophysicist to father of the atomic bomb — orbit these two centres of gravity. The narrative surface that results is as complex as anything Nolan has achieved before, but less confusing. Oppenheimer covers a staggering amount of intellectual, historical and biographical ground, with nary a trace of galumphing exposition. The script finds room to give Russian physicists their due, and conveys very sensitively the internationalist sentiment that dominated research at Los Alamos.

Of course, the physicists and engineers at Los Alamos could think what they liked. There was a war on, and a Cold War to follow. Oppenheimer’s largely fruitless tilts at geopolitical realities after the war was over became emblematic of the plight of the conscience-stricken government scientist. His damaging run-ins with officialdom during the anti-communist scares of the 1950s only confirmed his status as a modern Prometheus, punished for handing atomic fire to humanity.

Strauss had little time for the idea of Oppenheimer-the-tragic-overreacher, and Nolan, funnily enough, seems to agree. At any rate, he finds no use for Oppenheimer’s own self-dramatising. (Oppenheimer, quoting the Bhagavad-Gita, notoriously used to bang on about becoming Death, Destroyer of Worlds; this dark flourish is got rid of early on.) Nolan is much more interested in Oppenheimer’s impossible bind: an intelligent man, by no means naive or “unpolitical”, whose background in academia and theory un-fits him for the world he helps create. Emily Blunt’s performance as Kitty, Oppenheimer’s increasingly embittered and partisan wife, is crucial, if almost wordless. Other big names flourish in supporting roles that allow them unusual freedom. Matt Damon is positively gruff as Leslie Groves, the general in charge of the Los Alamos project; Dane DeHaan relishes a gratingly unsympathetic portrait of Kenneth Nichols, director of US Army R&D; Benny Safdie makes even the peaceniks among us fall in love with Edward Teller, hawkish father of the fusion bomb, a straight-shooting adversary Oppenheimer can’t help but shake by the hand, to Kitty’s lip-curling disgust.

Even before he starts acting, Cillian Murphy’s resting demeanour drips a sort of divine cluelessness that makes him a shoo-in for the role of Robert Oppenheimer. He goes on to deliver a shuddering performance that, more than any finely wrought dialogue, conveys the impossible moral bind of scientists recruited into government service.

To know the world is to change it. On 16 July 1945, knowledge and deed were separated by the press of a single red button. Oppenheimer takes three hours to explain why this moment matters, and there’s not a second of screentime wasted. It’s a rich, strange, compelling film. A tragedy, yes — and a triumph.