Apocalypse Now Lite

Watching Gareth Edwards’s The Creator for New Scientist, 4 October 2023

A man loses his wife in the war with the robots. The machines didn’t kill her; human military ineptitude did. She was pregnant with his child. The man (played by John David Washington, whose heart-on-sleeve performance can’t quite pull this film out of the fire) has nothing to live for, until it turns out that his wife is alive and working with the robots to build a weapon. The weapon turns out to be a robot child (an irresistible performance by 7-year-old Madeleine Yuna Voyles) who possesses the ability to control machines at a distance. Man and weapon go in search of the man’s wife; they’re a family in wartime, trying to reconnect, and their reconnection will end the war and change everything.

The Creator’s great strength is its futuristic south-east Asian setting. (You know a film has problems when the reviewer launches straight in with the set design.) Police drones like mosquitos rumble overhead. Mantis-headed robots in red robes ring temple bells to warn of American air attack.

The Creator is Apocalypse Now Lite: the American aggressors have been traumatised by the nuking of Los Angeles — an atrocity they blame on their own AI. They’ve hurled their own robots into the garbage compactor (literally — a chilling up-scaled retread of that Star Wars scene). But south-east Asia has had the temerity to fall in love with AI technology. They’re happy to be out-evolved! The way a unified, Blade-Runner-esque “New Asia” sees it, LA was an accident a long way away; people replace people all the time; and a robot is a person.

Hence: war. Hence: rural villages annihilated under blue laser light. Hence: missiles launched from space against temple complexes in mountain fastnesses. Hence: river towns reduced to matchwood under withering small-arms fire.

If nothing else, it’s spectacular.

The Creator is not so much a stand-alone sf blockbuster as a game of science fiction cinema bingo. Enormous battle tanks, as large as the villages they crush? Think Avatar. A very-low-orbit space station, large enough to be visible in the daytime? Think Oblivion. A child with special powers? Think Stranger Things. The Creator is a science fiction movie assembled from the tropes of other science fiction movies. If it is not as bankrupt as Ridley Scott’s Alien prequels Prometheus and Covenant (now those were bad movies), it’s because we’ve not seen south-east Asia cyborgised before (though readers of sf have been inhabiting such futures for over thirty years), and also because director Gareth Edwards once again proves that he can pull warm human performances from actors lumbered with any amount of gear, sweating away on the busiest, most cluttered and complex set.

This is not nothing. Nor, alas, is it enough.

As a film-school graduate, Gareth Edwards won a short sci-fi film contest in London and got a once-in-a-lifetime chance to make a low-budget feature. Monsters (2010) managed to be a character piece, a love story and a monster movie all in one. On the back of it he got a shot at a Star Wars spin-off, which hijacked the entire franchise (everyone loved 2016’s Rogue One, and its TV spin-off Andor is much admired; Disney’s own efforts at canon have mostly flopped).

The Creator should have been Edwards’s Star Wars. Instead, something horrible has happened in the editing. Vital lines are being delivered in scenes so truncated, it’s as though the actors are explaining the film directly to the audience. Every few minutes, tears run down Washington’s face, Voyles’s chin trembles, and we have no idea, none, what brought them to their latest crescendo — and ooh look, that goofy running bomb! That reminds me of Sky Captain and the World of Tomorrow…

The Creator is a fine spectacle. What we needed was a film that had something to say.

We’ve learned a valuable lesson today

Watching M3gan, directed by Gerard Johnstone, for New Scientist, 25 January 2023

Having done something unspeakable to a school bully’s ear, chased him through the forest like a wolf, and driven him under the wheels of a passing car, M3gan, the world’s first “Model 3 Generative Android”, returns to comfort Cady, its inventor’s niece. “We’ve learned a valuable lesson today,” she whispers.

So has the audience, between all their squealing and cheering. Before you ask a learning machine to do something for you, it helps if you know what that thing actually is.

M3gan has been tasked by its inventor Gemma (Allison Williams, in her second movie for Blumhouse, after the company’s 2017 smash Get Out) with looking after her niece Cady (Violet McGraw), recently orphaned when her parents — arguing over who should police her screen time — drove them all under a snow truck.

M3gan is told to protect Cady from physical and emotional harm. What could possibly go wrong with that?

Quite a lot, it turns out. Gemma works for toy company Funki, whose CEO David (comedian Ronny Chieng) is looking for a way — any way — to “kick Hasbro right in the d—.” In a rush to succeed, Gemma ends up creating a care robot that (to paraphrase Terminator) absolutely will not stop caring. M3gan takes very personally indeed the ordinary knocks that life dishes out to a kid.

The robot — a low-budget concoction of masks and CGI, performed by Amie Donald and voiced by Jenna Davis — is an uncanny glory. But the signature quality of Blumhouse’s films is not so much their skill with low budgets as the company’s willingness to invest time and money in scripts. In developing M3gan, James Wan (who directed the 2004 horror film Saw) and Akela Cooper (whose first-draft screenplay was, by her own admission, “way gorier”) discovered in the end that there was more currency in mischief than in mayhem. This is the most sheerly gleeful horror movie since The Lost Boys.

Caring for a child involves more than distracting them. Alas, M3gan, which evolved from Funki’s “Purrfect Petz” (fuzzballs that quote Wikipedia while evacuating plastic pellets from their bowels), cannot possibly understand the distinction.

The point of parenting is to manage your own failure, leaving behind a child capable of handling the world on their own. M3gan, on the contrary, has absolutely no intention of letting Cady grow up. As far as M3gan is concerned, experience is the enemy.

In this war against the world M3gan transforms, naturally enough, into a hyperarticulated killing machine (and the audience cheers: this is a film built on anticipation, not surprise).

M3gan’s charge, poor orphaned Cady, is a far more frightening creation: a bundle of hurt and horror afforded no real guidance, adrift without explanations in a world where (let’s face it) everything will eventually die and everything will eventually go wrong. The sight of a screaming nine-year-old Cady slapping her well-intentioned but workaholic aunt across the face is infinitely more disturbing than any scene involving M3gan.

“Robotic companionship may seem a sweet deal,” wrote the social scientist Sherry Turkle back in 2011, “but it consigns us to a closed world — the loveable as safe and made to measure.”

Cady, born into a world of fatuous care robots, eventually learns that the only way to get through life is to grow up.

But the real lesson here is for parents. The robot exists to do what we can imagine doing, but would rather not do. And that’s fine, except that it assumes that we always know what’s in our own best interests.

In 2014, at a conference on human-machine interaction, I watched a video starring Nao, a charming “educational robot”. It took a while before someone in the audience (not me, to my shame) spotted the film’s obvious flaw: how come it shows a mother sweating away in the kitchen while a robot is enjoying quality time with her child?

“Does it all stop at the tree?”

Watching Brian and Charles, directed by Jim Archer, for New Scientist, 6 July 2022

Amateur inventor Brian Gittins has been having a bad time. He’s painfully shy, living alone, and has become a favourite target of the town bully Eddie Tomington (Jamie Michie).

He finds some consolation in his “inventions pantry” (“a cowshed, really”), from which emerges one ludicrously misconceived invention after another. His heart is in the right place; his tricycle-powered “flying cuckoo clock”, for instance, is meant as a service to the whole village. People would simply have to look up to tell the time.

Unfortunately, Brian’s invention is already on fire.

Picking through the leavings of fly-tippers one day, the ever-manic loner finds the head of a shop mannequin — and grows still. The next day he sets about building something just for himself: a robot to keep him company as he grows ever more graceless, ever more brittle, ever more alone.

Brian Gittins sprang to life on the stand-up and vlogging circuit trodden by his creator, comedian and actor David Earl. Earl’s best known for playing Kevin Twine in Ricky Gervais’s sit-com Derek, and for smaller roles in other Gervais projects including Extras and After Life. And never mind the eight-foot-tall robot: Earl’s Brian Gittins dominates this gentle, fantastical film. His every grin to camera, whenever an invention fails or misbehaves or underwhelms, is a suppressed cry of pain. His every command to his miraculous robot (“Charles Petrescu” — the robot has named himself) drips with underconfidence and a conviction of future failure. Brian is a painfully, almost unwatchably weak man. But his fortunes are about to turn.

The robot Charles (mannequin head; washing machine torso; tweeds from a Kenneth Clark documentary) also saw first light on the comedy circuit. Around 2016 Rupert Majendie, a producer who likes to play around with voice-generating software, phoned up Earl’s internet radio show (best forgotten, according to Earl; “just awful”) and the pair started riffing in character: Brian, meet Charles.

Then there were three: Earl’s fellow stand-up Chris Hayward inhabited Charles’s cardboard body; Earl played Brian, Charles’s foil and straight-man; meanwhile Majendie sat at the back of the venue (pubs and music venues; also London’s Soho Theatre) with his laptop, providing Charles’s voice. This is Brian and Charles’s first full-length film outing, and it was a hit with the audience at this year’s Sundance Film Festival.

In this low-budget mockumentary, directed by Jim Archer, a thunderstorm brings Brian’s robot to life. Brian wants to keep his creation all to himself. In the end, though, his irrepressible robot attracts the attention of the Tomington family, Brian’s brutish and malign neighbours, who seem to have the entire valley under their thumb. Charles passes at lightning speed through all the stages of childhood (“Does it all stop at the tree?” he wonders, staring over Brian’s wall at the rainswept valleys of north Wales) and is now determined to make his own way to Honolulu — a place he’s glimpsed on a travel programme, but can never pronounce. It’s a decision that draws Charles out from under Brian’s protection and, ineluctably, into servitude on the Tomingtons’ farm.

But the experience of bringing up Charles has changed Brian, too. He no longer feels alone. He has a stake in something now. He has, quite unwittingly, become a father. The confrontation and crisis that follow are as satisfying and tear-jerking as they are predictable.

Any robot adaptable enough to offer a human worthwhile companionship must, by definition, be considered a person, and be treated as such, or we would be no better than slave-owners. Brian is a graceless and bullying creator at first, but the more his robot proves a worthy companion, the more Brian’s behaviour matures in response. This is Margery Williams’s 1922 children’s story The Velveteen Rabbit in reverse: here, it’s not the toy that needs to become real; it’s Brian, the toy’s human owner.

And this, I think, is the exciting thing about personal robots: not that they could make our lives easier, or more convenient, but that their existence would challenge us to become better people.

Don’t stick your butter-knife in the toaster

Reading The End of Astronauts by Donald Goldsmith and Martin Rees for the Times, 26 March 2022

NASA’s Space Launch System, the most powerful rocket ever built, is now sitting on the launch pad. It’s the super-heavy-lift vehicle for Artemis, NASA’s international programme to establish a settlement on the Moon. The Artemis consortium includes everyone with an interest in space, from the UK to the UAE to Ukraine, but there are a few significant exceptions: India, Russia, and China. Russia and China already run a joint project to place their own base on the Moon.

Any fool can see where this is going. The conflict, when it comes, will arise over control of the moon’s south pole, where permanently sunlit pinnacles provide ideal locations for solar collectors. These will power the extraction of ice from permanently night-filled craters nearby. And the ice? That will be used for rocket fuel.

The closer we get to putting humans in space, the more familiar the picture of our future becomes. You can get depressed about that hard-scrabble, piratical future, or exhilarated by it, but you surely can’t be surprised by it.

What makes this part of the human story different is not the exotic locations. It’s the fact that wherever we want to go, our machines will have to go there first. (In this sense, it’s the lack of strangeness and glamour that will distinguish our space-borne future — our lives spent inside a chain of radiation-hardened Amazon fulfilment centres.)

So why go at all? The argument for “boots on the ground” is more strategic than scientific. Consider the achievements of NASA’s still-young Perseverance rover, lowered to the surface of Mars in February 2021, and with it a lightweight proof-of-concept helicopter called Ingenuity. Through these machines, researchers around the world are already combing our neighbour planet for signs of past and present life.

What more can we do? Specifically, what (beyond dying, and most likely in horrible, drawn-out ways) can astronauts do that space robots cannot? And if robots do need time to develop valuable “human” skills — the ability to spot geological anomalies, for instance (though this is a bad example, because machines are getting good at this already) — doesn’t it make sense to hold off on that human mission, and give the robots a chance to catch up?

The argument to put humans into space is as old as NASA’s missions to the moon, and to this day it is driven by many of that era’s assumptions.

One was the belief (or at any rate the hope) that we might make the whole business cheap and easy by using nuclear-powered launch vehicles within the Earth’s atmosphere. Alas, radiological studies nipped that brave scheme in the bud.

Other Apollo-era assumptions have a longer shelf-life but are, at heart, more stupid. Dumbest of all is the notion — first dreamt up by Nikolai Fyodorov, a late-nineteenth century Russian librarian — that exploring outer space is the next stage in our species’ evolution. This stirring blandishment isn’t challenged nearly as often as it ought to be, and it collapses under the most cursory anthropological or historical interrogation.

That the authors of this minatory little volume — the UK’s Astronomer Royal and an award-winning space sciences communicator — beat Fyodorov’s ideas to death with sticks is welcome, to a degree. “The desire to explore is not our destiny,” they point out, “nor in our DNA, nor innate in human cultures.”

The trouble begins when the poor disenchanted reader asks, somewhat querulously, Then why bother with outer space at all?

Their blood lust yet unslaked, our heroes take a firmer grip on their cudgels. No, the moon is not “rich” in helium-3, harvesting it would be a nightmare, and the technology we’d need to use it for nuclear fusion remains hypothetical. No, we are never going to be able to flit from planet to planet at will. Journey times to the outer planets are always going to be measured in years. Very few asteroids are going to be worth mining, and the risks of doing so probably outweigh the benefits. And no, we are not going to terraform Mars, the strongest argument against it being “the fact that we are doing a poor job of terraforming Earth.” In all these cases it’s not the technology that’s against us, so much as the mathematics — the sheer scale.

For anyone seriously interested in space exploration, this slaughter of the impractical innocents is actually quite welcome. Actual space sciences have for years been struggling to breathe in an atmosphere saturated with hype and science fiction. The superannuated blarney spouted by Messrs Musk and Bezos (who basically just want to get into the mining business) isn’t helping.

But for the rest of us, who just want to see some cool shit — will no crumb of romantic comfort be left to us?

In the long run, our destiny may very well lie in outer space — but not until and unless our machines overtake us. Given the harshness and scale of the world beyond Earth, there is very little that humans can do there for themselves. More likely, we will one day be carried to the stars as pets by vast, sentimental machine intelligences. This was the vision behind the Culture novels of the late great Iain Banks. And there — so long as they got over the idea they were the most important things in the universe — humans did rather well for themselves.

Rees and Goldsmith, not being science fiction writers, can only tip their hat to such notions. But spacefaring futures that do not involve other powers and intelligences are beginning to look decidedly gimcrack. Take, for example, the vast rotating space colonies dreamt up by physicist Gerard O’Neill in the 1970s. They’re designed so 20th-century vintage humans can survive among the stars. And this, as the authors show, makes such environments impossibly expensive, not to mention absurdly elaborate and unstable.

The conditions of outer space are not, after all, something to be got around with technology. To survive in any numbers, for any length of time, humans will have to adapt, biologically and psychologically, beyond their current form.

The authors concede that for now, this is a truth best explored in science fiction. Here, they write about immediate realities, and the likely role of humans in space up to about 2040.

The big problem with outer space is time. Space exploration is a species of pot-watching. Find a launch window. Plot your course. Wait. The journey to Mars is a seven-month curve covering more than ten times the distance between Mars and Earth at their closest approach — and the journey can only be made once every twenty-six months.

Gadding about the solar system isn’t an option, because it would require fuel your spacecraft hasn’t got. Fuel is great for hauling things and people out of Earth’s gravity well. In space, though, it becomes bulky, heavy and expensive.

This is why mission planners organise their flights so meticulously, years in advance, and rely on geometry, gravity, time and patience to see their plans fulfilled. “The energy required to send a laboratory toward Mars,” the authors explain, “is almost enough to carry it to an asteroid more than twice as far away. While the trip to the asteroid may well take more than twice as long, this hardly matters for… inanimate matter.”

This last point is the clincher. Machines are much less sensitive to time than we are. They do not age as we do. They do not need feeding and watering in the same way. And they are much more difficult to fry. Though capable of limited self-repair, humans are ill-suited to the rigours of space exploration, and perform poorly when asked to sit on their hands for years on end.

No wonder, then, that automated missions to explore the solar system have been NASA’s staple since the 1970s, while astronauts have been restricted to maintenance roles in low Earth orbit. Even here they’re arguably more trouble than they’re worth. The Hubble Space Telescope was repaired and refitted by astronauts five times during its 30-year lifetime — but at a total cost that would have paid for seven replacement telescopes.

Reading The End of Astronauts is like being told by an elderly parent, again and again, not to stick your butter-knife in the toaster. You had no intention of sticking your knife in the toaster. You know perfectly well not to stick your knife in the toaster. They only have to open their mouths, though, and you’re stabbing the toaster to death.

An inanimate object worshipped for its supposed magical powers

Watching iHuman, directed by Tonje Hessen Schei, for New Scientist, 6 January 2021

Tonje Hessen Schei is a Norwegian documentary maker who has won numerous awards for her explorations of humans, machines and the environment. In 2010 she made Play Again, exploring digital media addiction among children. In 2014 she won awards for Drone, about the CIA’s secret role in drone warfare.

Now, with iHuman, Schei tackles — well, what, exactly? iHuman is a weird, portmanteau diatribe against computation: specifically, the branch of it that allows machines to learn about learning. Artificial general intelligence, in other words.

Incisive in parts, often overzealous, and wholly lacking in scepticism, iHuman is an apocalyptic vision of humanity already in thrall to the thinking machine, put together from intellectual celebrity soundbites, and illustrated with a lot of upside-down drone footage and digital mirror effects, so that the whole film resembles nothing so much as a particularly lengthy and drug-fuelled opening credits sequence to the crime drama Bosch.

That’s not to say that Schei is necessarily wrong, or that our Faustian tinkering hasn’t doomed us to a regimented future as a kind of especially sentient cattle. The film opens with that quotation from Stephen Hawking, about how “Success in creating AI might be the biggest success in human history. Unfortunately, it might also be the last.” If that statement seems rather heated to you, go visit Xinjiang, China, where a population of 13 million Turkic Muslims (Uyghurs and others) are living under AI surveillance and predictive policing.

Nor are the film’s speculations particularly wrong-headed. It’s hard, for example, to fault the line of reasoning that leads Robert Work, former US under-secretary of defense, to fear autonomous killing machines, since “an authoritarian regime will have less problem delegating authority to a machine to make lethal decisions.”

iHuman’s great strength is its commitment to the bleak idea that it only takes one bad actor to weaponise artificial general intelligence before everyone else has to follow suit in their own defence, killing, spying and brainwashing whole populations as they go.

The great weakness of iHuman lies in its attempt to throw everything into the argument: social media addiction, prejudice bubbles, election manipulation, deep fakes, automation of cognitive tasks, facial recognition, social credit scores, autonomous killing machines…

Of all the threats Schei identifies, the one conspicuously missing is hype. For instance, we still await convincing evidence that Cambridge Analytica’s social media snake oil can influence the outcome of elections. And researchers still cannot replicate psychologist Michal Kosinski’s claim that his algorithms can determine a person’s sexuality and even their political leanings from their physiognomy.

Much of the current furore around AI looks jolly small and silly once you remember that the major funding model for AI development is advertising. Most every millennial claim about how our feelings and opinions can be shaped by social media is a retread of claims made in the 1910s for the billboard and the radio. All new media are terrifyingly powerful. And all new media age very quickly indeed.

So there I was, hiding behind the sofa and watching iHuman between slitted fingers (the score is terrifying, and artist Theodor Groeneboom’s animations of what the internet sees when it looks in the mirror are the stuff of nightmares), when it occurred to me to look up the word “fetish”. To refresh your memory, a fetish is an inanimate object worshipped for its supposed magical powers or because it is considered to be inhabited by a spirit.

iHuman is a profoundly fetishistic film, worshipping at the altar of a god it has itself manufactured, and never more unctuously than when it lingers on the athletic form of AI guru Jürgen Schmidhuber (never trust a man in white Levi’s) as he complacently imagines a post-human future. Nowhere is there mention of the work being done to normalise, domesticate and defang our latest creations.

How can we possibly stand up to our new robot overlords?

Try politics, would be my humble suggestion.

We, Robots

‘A glorious delve into the many guises of robots and artificial intelligences. This book is a joy and a triumph.’

SFF World

Published on 19 December 2020 by Head of Zeus, We, Robots presents 100 of the best SF short stories on artificial intelligence from around the world. From 1837 through to the present day, from Charles Dickens to Cory Doctorow, these stories demonstrate humanity’s enduring fascination with artificial creation. Crafted in our image, androids mirror our greatest hopes and darkest fears: we want our children to do better and be better than us, but we also place ourselves in jeopardy by creating beings that may eventually out-think us.

A man plans to kill a simulacrum of his wife, except his shrink is sleeping with her in Robert Bloch’s ‘Comfort Me, My Robot’. In Ken Liu’s ‘The Caretaker’, an elderly man’s android careworker is much more than it first appears. We, Robots collects the finest android short stories the genre has to offer, from the biggest names in the field to exciting rising stars.

An embarrassment, a blowhard, a triumph

Watching Star Trek: Picard for New Scientist, 24 January 2020

Star Trek first appeared on television on 8 September 1966. It has been fighting the gravitational pull of its own nostalgia ever since – or at least since the launch of the painfully careful spin-off Star Trek: The Next Generation 21 years later.

The Next Generation was the series that gave us shipboard counselling (a questionable idea), a crew that liked each other (a catastrophically mistaken idea) and Patrick Stewart as Jean-Luc Picard, who held the entire farrago together, pretty much single-handed, for seven seasons.

Now Picard is back, retired, written off, an embarrassment and a blowhard. And Star Trek: Picard is a triumph, praise be.

Something horrible has happened to the “synthetics” (read: robots) who, in the person of Lieutenant Commander Data (Brent Spiner, returning briefly here) once promised so much for the Federation. Science fiction’s relationship with its metal creations is famously fraught: well thought-through robot revolt provided the central premise for Battlestar Galactica and Westworld, while Dune, reinvented yet again later this year as a film by Blade Runner 2049‘s Denis Villeneuve, is set in a future that abandoned artificial intelligence following a cloudy but obviously dreadful conflict.

And there is a perfectly sound reason for this mayhem. After all, any machine flexible enough to do what a robot is expected to do is going to be flexible enough to down tools – or worse. What Picard‘s take on this perennial problem will be isn’t yet clear, but the consequences of all the Federation’s synthetics going haywire is painfully felt: it has all but abandoned its utopian remit. It is now just one more faction in a fast-moving, galaxy-wide power arena (echoes of the Trump presidency and its consequences are entirely intentional).

Can Picard, the last torchbearer of the old guard, bring the Federation back to virtue? One jolly well hopes so, and not too quickly, either. Picard is, whatever else we may say about it, a great deal of fun.

There are already some exciting novelties, though the one I found most intriguing may turn out to be a mere artefact of getting the show off the ground. Picard’s world – troubled by bad dreams quite as much as it is enabled by world-shrinking technology – is oddly surreal, discontinuous in ways that aren’t particularly confusing but do jar here and there.

Is the Star Trek franchise finally getting to grips with the psychological consequences of its mastery of time and space? Or did the producers simply shove as much plot as possible into the first episode to get the juggernaut rolling? The latter seems more likely, but I hold out hope.

The new show bears its burden of twaddle. The first episode features a po-faced analysis of Data’s essence. No, really. His essence. That’s a thing, now. How twaddle became an essential ingredient on The Next Generation – and now possibly Picard – is a mystery: the original Star Trek never felt the need to saddle itself with such single-use, go-nowhere nonsense. But by now, like a hold full of tribbles, the twaddle seems impossible to shake off (Star Trek: Discovery, I’m looking at you).

Oh, but why cavil? Stewart brings a new vulnerability and even a hint of bitterness to grit his seamlessly fluid recreation of Picard, and the story promises an exciting and fairly devastating twist to the show’s old political landscape. Picard, growing old disgracefully? Oh, please make it so!

In Berlin: Arctic AI, archaeology, and robotic charades

Thanks (I assume) to those indefatigable Head of Zeus people, who are even now getting my anthology We, Robots ready for publication, I’m invited to this year’s Berlin International Literature Festival, to take part in Automatic Writing 2.0, a special programme devoted to the literary impact of artificial intelligence.

Amidst other mischief, on Sunday 15 September at 12:30pm I’ll be reading from a new story, The Overcast.

In the realm of mind games

By the end of the show, I was left less impressed by artificial intelligence and more depressed that it had reduced my human worth to base matter. Had it, though? Or had it simply made me aware of how much I wanted to be base matter, shaped into being by something greater than myself? I was reminded of something that Benjamin Bratton, author of the cyber-bible The Stack, said in a recent lecture: “We seem only to be able to approach AI theologically.”

Visiting AI: More Than Human at London’s Barbican Centre for the Financial Times, 15 May 2019.