The twist is, there is no twist

Watching Danny Boyle’s 28 Years Later for New Scientist, 18 June 2025

Here’s a bit of screenplay advice to nail above the desk: make your plots simple and your characters complicated.

We can polish off the story of 28 Years Later in a couple of paragraphs. It’s the late-coming third instalment in a series that began in 2002, with 28 Days Later. A lab-grown neurotoxic virus has spread uncontrollable, orgiastic rage across continental Europe. The infection is eventually quarantined to mainland Britain. International fleets ensure that no-one leaves Blighty.

Twelve-year-old Spike (Alfie Williams, a relative newcomer and definitely a face to follow) lives in the relative safety of a small northern island, connected to the mainland by a causeway passable only at low tide. Rather young for the task (though his dad reckons he’s ready), Spike leaves for the mainland to be blooded. Amid trackless forests (perhaps not quite trackless enough after 28 years; otherwise the film’s mise en scène is superb and chilling) Spike kills a very slow zombie, misses a blisteringly fast one, and generally gives a good account of himself.

But it sits ill with Spike, once he’s home, to be cheered as a hero by all these drunken villagers, even as his mother lies bedridden with a mysterious illness, and his father Jamie (Aaron Taylor-Johnson) seeks distraction with another woman. So Spike sneaks his mum (Jodie Comer) off the island and sets out with her in search of the only doctor he’s ever heard of — a painted lunatic who spends his days in the woods burning corpses.

The twist—and let’s face it, you’re agog for the twist—is that there is no twist. Having established the rules of this world in 2002’s 28 Days Later, writer Alex Garland has simply and wonderfully stuck to his guns. There are flourishes: a vanishingly small number of zombies have survived the initial viral outbreak to breed and become an almost-viable competitor species. Some of them now grow very big indeed, thanks to the “steroid-like” effects of the original infection. But these aren’t new attractions so much as patches and fixes, and they’re delivered very much in the make-and-mend-and-keep-going spirit that hangs over Spike’s doughty little island village.

Nothing is quite as it seems — when is it ever? — and every once in a while, Boyle mischievously intercuts Laurence Olivier’s Henry V with Great War newsreel and 28 Weeks Later zombie outbreak footage to imply a deeper, darker significance to the village’s homespun defence league and its culling expeditions. There are nods to folk horror, to Apocalypse Now, to Alien 3 and to Predator. But this is not a tricksy movie, and its intent is clear: in this world so long steeped in horror, there’s going to be this human story, about loss and disillusion, about growing up and growing apart, about when to stand with others, and when to stand alone, and all conveyed through the credible words and reasonable actions of largely unexceptional human beings. The budget is modest (somewhere between $60 million and $75 million). The casting is meticulous (see how Christopher Fulford plays Spike’s grandfather with an effortless friendliness that all the while implies some harrowing backstory). And don’t get me wrong: 28 Years Later is full of invention, laden with fan-pleasing call-backs and cineastic cap-tugging. But never once does it cheat. There’s not a single fatuous macguffin pulling us through. No dumb quest. No magical grail. No grand unmasking. Only the feeling spilling from Alfie Williams’s eyes as young Spike learns, line by line and scene by scene, what he must acquire, and what he must let go, if he’s to be a man in this world.

All credit to Days, whose fast and furious “infected” shocked and delighted us all in 2002; all credit, too, to 2007’s oddly overlooked Weeks, an ingenious sequel and quite as good an expansion on its original as Aliens was to Alien. But Years carries the crown, at least for now (there’s a second instalment coming).

Go down singing

Watching Joshua Oppenheimer’s musical The End for New Scientist, 14 May 2025

Life on the planet’s surface has become nigh-on unbearable, but with money and resources enough, the finest feelings and highest aspirations of our culture can be perpetuated underground, albeit for only a chosen few. Michael Shannon plays an oil magnate who years ago brought his family to safety in an old mine. Here he rewrites his and his company’s history in a self-serving memoir dictated to his grown-up but critically inexperienced son (George Mackay; I last encountered him in The Beast, which I reviewed for New Scientist) while his wife, the boy’s mother, played by Tilda Swinton, curates an art collection somehow (and perhaps best not ask how) purloined from the great collections of the world.

The mine — an actual working salt mine in Petralia Soprana, Sicily — is simultaneously a place of wonder and constriction. You can walk out of the bunker and wander around its galleries, singing as you go (did I mention this was a musical?), but were you to hike outside the mine, I don’t fancy your chances. It’s a premise familiar from any number of post-apocalyptic narratives, from Return to the Planet of the Apes (1975) to last year’s streaming hit Fallout.

When a rare surface-dweller (Moses Ingram) stumbles on their home, it looks as though she’ll be expelled, and more likely killed, to keep this Shangri-La a secret. But at the last moment the Boy cries, “I don’t want to do this!”

It turns out nobody else wants to, either, not even the Mother, who’s the most terrified of the bunch.

Clumsily, over two and a half hours, the family draw this stranger into their bubble of comforting lies. (The End is too long, but you could level the same charge at most of the musicals on which it’s modelled — have you tried sitting through Oklahoma! recently?)

Lies — this is the film’s shocking premise — are necessary. Lies stand between us and despair. They create the bubbles in which kindness, generosity and love can be grown. Like the golden-age musicals of the 1950s to which it pays musical homage, The End tells an optimistic tale.

The young visitor resists assimilation at first, because she can’t forgive herself for abandoning her own family on the surface. Living as if she belonged to this new family would be to let herself off the hook for what she did to the old one.

Worn down by the young woman’s honesty, the family reveals its complicity in the end of the world. The father’s industry set fire to the sky. The mother finally admits she wants the planet’s surface to be uninhabitable because if it isn’t, the family she abandoned there might still be alive and suffering. The mother’s best friend, her son’s confidante, played by Bronagh Gallagher, sacrificed her child years ago to ensure her own survival.

But then, bit by bit, song by song, this wounded and reconfigured family sews itself a new cocoon of lies and silences, taboos and songs (the songs are accomplished and astonishing), all to make life not just bearable, but possible. Of course the stranger ends up absorbed in this effort. Of course she ends up singing along to the same song. Wouldn’t you, in time?

Whoever these people used to be, and however much you point accusing fingers at their past, the fact is that these are all good people, singing their way back into the delusion they need to keep going, day after subterranean day.

True, the lies we tell today tell us tomorrow. But this unlikely, left-field musical — my tentative pick for best SF movie of 2025 — is prepared to forgive its compromised characters. We can only get through life by lying about it, so is it any wonder we make mistakes? Should the worst come to the worst, we should at least be permitted to go down singing.

Expendable

Watching Bong Joon Ho’s Mickey 17 for New Scientist, 2 April 2025

Mickey (Robert Pattinson) is an “expendable”. Put him in harm’s way, and if he dies, you can always print another. (Easily the best visual gag in this disconcertingly unfunny comedy is the way the 3D printer stutters and jerks when it gets to Mickey’s navel.)

And for human colonists on the ice planet of Niflheim (one for all you Wagner fans out there) there’s plenty of harm for Mickey to get in the way of. There’s the cold. There’s the general lack of everything, so that the settlers must count every calorie and weigh every metal shaving. Most troublesome of all are the weevil-like creatures that contrive to inhabit — and chomp through — the planet’s very ice and rock. What they’re going to do to the humans’ tin-can settlement is anyone’s guess.

Mickey’s been reprinted 16 times already, mostly because medical researchers have been vivisecting him in their effort to cure a plague. The one thing that doesn’t kill him, ironically enough, is falling into a crevasse and being swallowed by a weevil. Who saw that one coming? Certainly not the other colonists: when Mickey returns to camp, he finds he’s already been reprinted.

As science fiction macguffins go, this one’s nearly a century old, its seeds sown by Aldous Huxley’s Brave New World (1932) and John Campbell’s “Who Goes There?” (1938). Nor can we really say that Korean director Bong Joon Ho (celebrated for savage social satires like Parasite and Snowpiercer) “rediscovered” it. Actor Sam Rockwell turned in an unforgettable tour de force, playing two hapless mining engineers (or the same hapless mining engineer twice) in Duncan Jones’s Moon — and 2009 is not that long ago.

The point about macguffins is that they’re dead on arrival. They have no inner life, no vital force, no point. They stir to life only when characters get hold of them, and through them, reveal who they truly are. It’s hard to conceive of an idea more boring than invisibility. HG Wells’s invisible man, on the other hand, is (or slowly and steadily becomes) a figure out of nightmare — one that, going by the number of movie remakes, the culture cannot get out of its head.

What does Bong Joon Ho say with his “doubles” macguffin? It depends where you look. For the most satisfying cinematic experience, keep your eyes fixed on Robert Pattinson. Asked to play a man who’s died sixteen or seventeen times already, he turns in two quite independent performances, wildly different from each other and both utterly convincing. Mickey 17 is crushed by all his many deaths; Mickey 18 is rubbed raw to screaming by them. Consider the character of Connie Nikas in Benny and Josh Safdie’s Good Time (2017), or Thomas Howard in Robert Eggers’s The Lighthouse (2019): to say that Pattinson plays underdogs well is like saying that Frank Whittle knew a thing or two about motors, or that Hubert de Givenchy dreamt up some nice frocks.

Taken all in all, however, Mickey 17 is embarrassingly bad. It takes 2022’s bright, breezy, blackly comic novel by Edward Ashton, strips out its cleverness and gives us, in return, Mark Ruffalo’s unfunny Donald Trump impression as colony leader Kenneth Marshall, and Naomi Ackie (as Mickey’s — sorry, Mickeys’ — love interest) throwing a foul-mouthed hissy-fit out of nowhere (I swear you can see the confusion in the actor’s eyes).

Anyone who read Ashton’s book and watched Bong’s Snowpiercer might be forgiven for expecting Mickey 17 to be a marriage made in cinema heaven. For one brief moment in its overlong (2-hour 17-minute) run-time, a cruelly comic dinner party scene seems about to tip us into that other, much better film — a satire built around power and hunger.

Then Tim Key turns up in a pigeon costume. Now, I adore Tim Key, but sticking him in a pigeon costume (for an entire movie, yet) in the hope that this will make him even funnier is as wrong-headed and insulting to the talent as, say, under-lighting Christopher Walken’s face to make him look even scarier.

When a film goes this badly awry, you really have to wonder what happened in the editing suite. My guess is that some bright spark from the studio decided the film was far too difficult for its audience. This would at least explain the film’s endless monologuing and its yawn-inducing pre-credits sequence, which loops back like a conscientious nursery-school worker to make sure the stragglers are all caught up.

Oh, enough! I’m done. Even the weevils were a disappointment. In the book they were maliciously engineered giant centipedes. How, I ask you, can a famously visual filmmaker not even embrace them?

There will never be an Iris

Watching Companion, directed by Drew Hancock, for New Scientist, 19 February 2025

Iris (Sophie Thatcher) is not at all confident of her reception at Sergey’s house in the country. Sergey is leery (Rupert Friend, eating the screen as usual); his wife Kat is unwelcoming. (Later she admits it’s not Iris she dislikes, it’s “the idea” of her; Iris makes her feel redundant.)

Iris’s boyfriend Josh (Jack Quaid) is patient and encouraging but in the end even he finds Iris’s shyness and clinginess hard to bear. “Go to sleep, Iris,” he says, and Iris’s eyes roll up inside her head as she shuts down.

Maybe Josh shouldn’t have set her intelligence at 40 per cent. At that level, Iris makes a faithful bedmate but not much else. But Josh hasn’t purchased Iris for company. He’s bought her so as to jailbreak her firmware, and use her for dark ends of his own.

Companion, a romantic horror-comedy and Drew Hancock’s debut feature, neatly (if predictably) alternates between two classic approaches to robots. Some scenes, with a nod to the Terminator franchise, scare us with what robots might do to us. Other scenes horrify us with what we might do to our robots. Josh’s fellow guest Eli (Harvey Guillén) actually manages to fall in love with his male robot companion, but he’s a bit of an outlier in a movie that’s out to deconstruct (sharply at first, but then with dismaying ham-fistedness) men’s objectification of women.

Are Iris’s struggles to be free of her owner-boyfriend Josh a stirring feminist fable, or a tiresome bit of man-bashing? Well, your personal experience will probably dictate which side of this fence you fall on. There’s not a lot of mileage to be had in me saying the abuse Iris suffers at Josh’s hands in the second half of the movie is tasteless — not in a world that has men like Dominique Pelicot in it. I’d feel more comfortable, though, if the script hadn’t had its own intelligence halved, just as it makes this turn towards the issue of domestic violence. Quaid’s a decent comic actor who’s more than capable of letting the smile drop and going dead behind the eyes when required. Companion, though, requires him to turn on a penny, from doting boyfriend to snivelling incel, and without much justification from an increasingly generic plot. He does what he can, while Sophie Thatcher, as Iris, brings a vulnerability to her role that, in what’s ostensibly a comedy, is occasionally shocking.

Peeling away from the sexual politics of the piece, I found myself thinking far too much about plot logic. In the first half, one little illegal tweak to Iris’s firmware sets off a cascade of farcical and bloody accidents that by-the-by ask us worthwhile questions about what we actually want robots for. Surrounded by dull, bland, easy-going robot companions, will we come to expect less of other people? Assisted, cared for, and seduced by machines, will we lower our expectations around concepts like “conversation”, “care”, “companionship” and “love”?

Alas, the robot lore built up in the first half of the movie is more or less jettisoned in the second: anyone who wants to play “plot-hole bingo” had better bring a spare card.

It’s a pity. There was much to play for here, and over eighty years of entertaining fiction to draw from (Isaac Asimov’s “Liar!” was published in 1941). But perhaps I’m taking things too literally.

After all, there will never be an Iris.

The robot as we commonly conceive of it — the do-everything “omnibot” — is impossible. And I don’t mean technically difficult. I mean inconceivable. Anything with the cognitive ability to tackle multiple variable tasks will be able to find something better to do — at which point, incidentally, it will cease to be a drudge and will have become a person. Iris was very clearly a person from the first scene, which makes the film’s robot technology a non-starter from the beginning. This isn’t some dystopia that’s embraced slavery.

Whichever way you look at it — as a film about robots, or as a film about people — Companion seems determined to chase after straw men.

The strange, the off-kilter and the not-quite-right

The release of Mufasa, Disney’s photorealistic prequel to The Lion King, occasioned this essay for the Telegraph on the biota of Uncanny Valley

In 1994 Disney brought Shakespeare’s Hamlet, or something like it, to the big screen. In turning the gloomy Dane into an adorable lion cub, and his usurping uncle into Scar (arguably their most terrifying villain ever), the company created the highest-grossing movie of the year. Animators sat up and marvelled at the way the film combined hand-drawn characters with a digitally rendered environment and thousands of CGI animals. This new technology could aid free expression, after all!

Well, be careful what you wish for.

When, in 2019, Disney remade its beloved The Lion King (1994), it swapped the original’s lush hand-drawn animation for naturalistic computer-generated imagery. The 2019 reboot had a budget of $260 million (£200 million) and took more than $1.5 billion (£1.1 billion) at the box office, making it one of the most expensive, and highest-grossing, films of all time – and the focus of a small but significant artistic backlash. Some critics voiced discomfort with the fact that it looked more like an episode of Planet Earth than a high-key musical fantasy. Its prequel Mufasa: The Lion King (directed by Moonlight’s Barry Jenkins), released this month, deepens the trend. For Disney, it’s a show of power, I suppose: “Look at our animation, so powerful, you’ll mistake it for the world itself!” In time, though, the paying public may well regret Disney’s loss of faith in traditional animation.

What animator would want to merely reflect the world through an imaginary camera? The point of the artform, surely, is to give emotion a visual form. But while a character drawn in two dimensions can express pretty much anything (Felix the Cat, Wile E Coyote and Popeye the Sailor are not so much bodies as containers for gestures) drawing expressively in 3D is genuinely hard to do. Any artist with Pixar on their résumé will tell you that. All that volumetric precision gets in the way. Adding photorealism to the mix makes the job plain impossible.

Disney’s live-action remake of The Jungle Book (2016) at least used elements of motion capture to match the animals’ faces to the spoken dialogue. In 2024, even that’s not considered “realistic” enough. Mufasa, Simba, Rafiki the mandrill and the rest simply chew on air while dialogue arrives from out of space, in the manner of Italian neorealist cinema (which suggests, incidentally, that, along with the circle of life, there’s also a circle of cinema).
Once you get to this point, animation is a distant memory; you’ve become a puppeteer. And you confront a problem that plagues not only Hollywood films, but the latest advances in robotic engineering and AI: “the uncanny valley”.

The uncanny valley describes how the closer things come to resembling real life, the more on guard we are against being fooled or taken in by them. The more difficult they are to spot as artificial, the stronger our self-preserving hostility towards them. It is the point in the development of humanoid robots when their almost-credible faces might send us screaming and running out of the workshop. Or, on a more relatable level, it describes the uneasiness some of us feel when interacting with virtual assistants such as Apple’s Siri and Amazon’s Alexa.

The term was coined in 1970 by the Japanese roboticist Masahiro Mori – at a time when real anthropomorphic robots didn’t even exist – to warn designers that the more their inventions came to resemble real life-forms, the creepier they would look.

Neurologists seized on Mori’s idea because it suggested an easy and engaging way of studying how our brains see faces and recognise people. Positron emission tomography arrived in clinics in the 1970s, and magnetic resonance imaging about twenty years later. Researchers now had a way of studying the living human brain as it saw, heard, smelled and thought. The uncanny valley concept got caught up in a flurry of very earnest, very technical work about human perception, to the point where it was held up as a profound, scientifically-arrived-at insight into the human condition.

Mori was more guarded about all the fuss. Asked to comment on some studies using slightly “off” faces and PET scans, he remarked: “I think that the brain waves act that way because we feel eerie. It still doesn’t explain why we feel eerie to begin with.” And these days the scientific community is divided on how far to push the uncanny valley concept – or even whether such a “valley” (which implies a happy land beyond it, one in which we would feel perfectly at ease with lifelike technology) exists at all.

Nevertheless, the uncanny valley does suggest a problem with the idea that in order to make something lifelike, you just need to ensure that it looks like a particular kind of living thing – a flaw that is often cited in critical reviews of Disney’s latest photorealist animations. Don’t they realise that the mind and the eye are much more attuned to behaviour than they are to physical form? Appearances are the least realistic parts of us. It’s by our behaviour that you will recognise us. So long as you animate their behaviour, whatever you draw will come alive. In 1944 psychologists Fritz Heider and Marianne Simmel made a charming 90-second animation, full of romance and adventure, using two triangles, a circle and a rectangle with a door in it.

There are other ways to give objects the gift of life. A few years ago, I met the Tokyo designer Yamanaka Shunji, who creates one-piece walking machines from 3D vinyl-powder printers. One, called Apostroph (a collaboration with Manfred Hild in Paris), is a hinged body made up of several curving frames. Leave it alone, and it will respond to gravity, and try to stand. Sometimes it expands into a broad, bridge-like arch; at other times it slides one part of itself through another, curls up and rolls away.

Engineers, by associating life with surface appearances, are forever developing robots that are horrible. “They’re making zombies!” Shunji complained. Artists on the other hand know how to sketch. They know how to reduce, and abstract. “From ancient times, art has been about the right line, the right gesture. Abstraction gets at reality, not by mimicking it, but by purifying it. By spotting and exploring what’s essential.”

This, I think, gets to the heart of the uncanny valley phenomenon: we tend to associate life with particular outward forms, and when we reproduce those things, we’re invariably disappointed and unnerved, wondering what sucked the life out of them. We’re looking for life in all the wrong places. Yamanaka Shunji’s Apostroph is alive in a way Mufasa will never be.

***

We’re constantly trying to differentiate between the living and the non-living. And as AI and other technologies blur the lines between living things and artefacts, we will grapple with the challenge of working out what our moral obligations are towards entities — chatbots, robots, and the like — that lack a clear social status. In that context, the “uncanny valley” can be a genuinely useful metaphor.

The thing to keep in mind is that the uncanny is not a new problem. It’s an evolutionary problem.

Decades ago I came across a letter to New Scientist magazine in which a reader recalled taking a party of blind schoolchildren to London Zoo. He wanted the children to feel and cuddle the baby chimps, learning about their hair, hands, toes and so on, by touch. The experiment, however, proved to be a disaster. “As soon as the tiny chimps saw the blind children they stared at their eyes… and immediately went into typical chimpanzee attack postures, their hair standing upright all over their bodies, their huge mobile lips pouting and grimacing, while they jumped up and down on all fours uttering screams and barks.”
Even a small shift in behaviour — having your eyes closed, say, or not responding to another’s gaze — was enough to trigger the chimpanzees’ fight-or-flight response. Primates, it seems, have their own idea of the uncanny.

Working out what things are is not a straightforward business. When I was a boy I found a hedgehog trying to mate with a scrubbing brush. Dolphins regularly copulate with dead sharks (though that might just be dolphins being dolphins). Mimicry compounds the problem: beware the orchid mantis that pretends to be a flower, or the mimic octopus that’ll shape-shift into just about anything you put in front of it.

In social species like our own, it’s especially important to recognise the people you know.
In a damaged brain, this ability can be lost, and then our nearest and our dearest, our fathers, mothers, sons, daughters, spouses, best friends and pets become no more in our sight than malevolent simulacra. For instance, Capgras syndrome is a psychiatric disorder that occurs when the internal portion of our representation of someone we know becomes damaged or inaccessible. This produces the impression of someone who looks right on the outside, but seems different on the inside – you believe that your loved one has been taken over by an imposter.

Will Mufasa trigger Capgras-like responses from movie-goers? Will they scream and bark at the screen, unnerved and ready to attack?

Hopefully not. With each manifestation of the digital uncanny comes the learning necessary for us not to be freaked out by it. That man is not really on fire. That alien hasn’t really vanished down the actor’s throat. Just as well, since the rise of deepfakes and chatbots shows no sign of slowing. But is this a good thing?

I’m not sure.

When push comes to shove, the problem with photorealist animation is really just a special case of the problem with blockbuster films in general: the closer it comes to the real, the more it advertises its own imposture.

Cinema is, and always has been, a game of sunk costs. The effort grows exponentially, to satisfy the appetites of viewers who have become exponentially more jaded.

And this raises a more troubling thought – that beyond the uncanny valley’s lairs of the strange, the off-kilter and the not-quite-right is a barren land marked, simply, “Indifference”.

The uncanny valley seemed deep enough, in the 1970s, to inspire scientific study, but we’ve had half a century to acclimatise to not-quite-human agents. And not just acclimatise to them: Hanson Robotics’ wobbly-faced Sophia generated more scorn than terror when Saudi Arabia granted her citizenship in 2017. The wonderfully named Abyss Creations of Las Vegas turned out their first sexbot in 1996. RealDoll now has global competition, especially from east Asia.

Perhaps we’ve simply grown in sophistication. I hope so. The alternative is not pretty: that we’re steadily lowering the bar on what we think is a person.


“Don’t let them know you’re awake”

Watching Michael Tyburski’s Turn Me On for New Scientist

An eccentric visionary has created a commune centered around a pharmaceutical — a “vitamin” — that suppresses human emotion. The venture promises contentment to its followers, and to ensure their contentment, all memory of their lives before they join the cult is erased.

A cult member’s cancer treatment requires she miss her vitamin dose for just one day. So here she is, a young woman called Joy, played with exquisite precision by the young British actress Bel Powley, staring into her bathroom mirror, waiting for the affective life to roll over her like a tidal wave.

Nothing.

Still nothing.

And then a giggle. Not a sinister, half-hysterical giggle. Not an experimental, off-centre giggle. A genuinely delighted giggle, at finding herself alive.

Joy goes off on a beach holiday with her friends, still within the project’s property line. (At the border, a sign planted in the gravel warns of “Unknown Dangers” in the world beyond). And a drab old time they have of it, too, playing the exciting-sounding VR game WOAH, which turns out to stand for “World Of Average Humans”. Joy’s friend Samantha (Nesta Cooper) breathlessly explains: “In real life I’m a wellness engineer, but in the game, I play an assistant wellness engineer.”

Joy finally takes matters in hand and throws away the house’s supply of vitamin. And after all, “it’s just for one day”.

The strange and wonderful thing about Michael Tyburski’s second feature (after 2019’s excellent The Sound of Silence) is that it is a dystopia built upon an essentially comic view of the human condition. Screenwriter Angela Bourassa creates revealing rules for this tyranny. You don’t have to take its vitamin. That’s entirely up to you. But heaven help you if you miss a day of work. This hyper-utilitarian cult isn’t robbing its victims of their potentiality or their dignity. The crime here is that it’s stealing away all their fun and friendship. People are supposed to goof off, is the message here. This is what people are for.

When Joy and her friends discover sex, things get more fraught. Joy’s uncomplicated and public coupling with her friend Christopher (Justin Min) knocks him for a loop and makes her officially appointed partner William (Nick Robinson) sick to his stomach. Who could have predicted that?

One by one, as they confront the emotional consequences of their actions, the friends decide to go back on the vitamin. Alone again, Joy is taken aside and told she has what it takes to be an overseer of this place. All she has to do is never see William again, though it’s clear enough the two are falling in love. Will Joy accept this Mephistophelian bargain?

The superbly sardonic D’Arcy Carden is the nearest thing the cult has to an authority figure: essentially, she’s reprising her role in the sitcom The Good Place, to which Turn Me On bears a certain resemblance. Fairer to say, perhaps, that Turn Me On is a worthy addition to that small but admired genre that includes The Good Place, 2004’s Eternal Sunshine of the Spotless Mind and Apple’s ongoing TV show Severance.

The target is, as usual, utilitarianism. The pursuit of the greatest good for the greatest number works well on paper but falls foul, very quickly, of the Kantian imperative not to use people as a means to fulfil your ends. There’s a reason “For the greater good” is the go-to excuse for tyrants and killers.

What will the cult do to Joy if she refuses to join their upper echelon? It’s almost certain to be unpleasant.

“Leave me alone”, says a neighbour who came off her vitamins earlier in the movie, “and don’t let them know you’re awake.”

Doing an Elisabeth

Coralie Fargeat’s The Substance inspired this Telegraph article about copies and clones

Hollywood has-been Elisabeth Sparkle didn’t look where she was going, and got badly shaken about in a traffic accident. Now she’s in the emergency room, and an unfeasibly handsome young male nurse is running his fingers down her spine. Nothing’s wrong. On the contrary: Elisabeth (played by Demi Moore) is, she’s told, “a perfect candidate”.

The next day she gets a box through the post. Inside is a kit that will enable her to duplicate herself. The instructions couldn’t be clearer. Even when fully separated, Elisabeth and the younger, better version of herself who’s just spilled amniotically out of her back (Sue, played by Margaret Qualley) are one. While one of them gets to play in the sun for a week, the other must lie in semi-coma, feeding off an intravenous drip. Each week, they swap roles.

Writer-director Coralie Fargeat’s script for The Substance is one of those super-lucid cinematic fun-rides that can’t help but put you in mind of other, admittedly rather better movies. In Joe Mankiewicz’s All About Eve (1950), an actress’s personal assistant plots to steal her career. In John Frankenheimer’s Seconds (1966), Rock Hudson gets his youth back and quickly learns to hate it. In David Cronenberg’s The Fly (1986) biologist Seth Brundle’s experiment in gene splicing is a none-too-subtle metaphor for the ageing process.

Recently, I ran into a biotechnology company called StoreGene. They sent me a blood sample kit in a little box and promised me a lifetime of personalised medicine, so long as I let them read my entire genetic code.

I’m older than Elisabeth Sparkle (sacked from her daytime TV fitness show on her 50th birthday) and a sight less fit than Demi Moore, and so I seized StoreGene’s offer with both palsied, liver-spotted hands.

Now, somewhere in what we call the Cloud (some anonymous data centre outside Chicago, more like) I have a double. Unlike Elisabeth’s Sue, though, my double won’t resent the fact that I am using him as a means. He is not going to flinch, or feel violated in any way, as his virtual self is put through trial after trial.

Every year, more than a million medical research papers are published. It’s impossible to know what this deluge of new discovery means to me personally – but now my GP can find out, at the push of a button, what it means for my genetic data-double.

Should I take this medicine, or that? Should I take more of it, or less of it? What treatment will work; what won’t? No more uncertainty for me: now I am guaranteed to receive treatments that are tailored to me, forever. I’ve just landed, bang, in the middle of a new era of personalised medicine.

Now that there’s a digital clone of me floating around, I have even less reason to want to “do an Elisabeth” and make a flesh-and-blood copy of myself. This will come as a relief to anyone who’s read Kazuo Ishiguro’s 2005 novel Never Let Me Go, and can’t shake off the horror occasioned by that school assembly: “If you’re going to have decent lives,” Miss Lucy tells the children in her care, “then you’ve got to know and know properly… You’ll become adults, then before you’re old, before you’re even middle-aged, you’ll start to donate your vital organs.”

Might we one day farm clones of ourselves to provide our ageing, misused bodies with spare parts? This is by far the best of the straw-man arguments that have been mounted over the years against the idea of human cloning. (Most of the others involve Hitler.)

It at least focuses our minds on a key ethical question: are we ever entitled to use other people as means to an end? But it’s still a straw-man argument, not least because we’re a long way into figuring out how to grow our spare organs in other animals. No ethical worries there! (though the pigs may disagree).

And while such xenotransplantation and other technologies advance by leaps and bounds, reproductive cloning languishes – a rather baroque solution to biomedical problems solved more easily by other means.

Famously, in 1996 Ian Wilmut and colleagues at the Roslin Institute in Scotland successfully cloned Dolly the sheep from the udder cells of a ewe. Dolly was their 277th attempt. She died young. No-one can really say whether this had anything to do with her being a clone, since her creation conspicuously did not open the floodgates to further experimentation. Two decades went by before the first primates were successfully cloned – two crab-eating macaques named Zhong Zhong and Hua Hua. These days it’s possible to clone your pet (Barbra Streisand famously cloned her dog), but my strong advice is, don’t bother: around 96 per cent of all cloning attempts end in failure.

Science-fiction stories, from Aldous Huxley’s Brave New World (1932) to Andrew Niccol’s Gattaca (1997), have conjured up hyper-utilitarian nightmares in which manipulations of the human genome work all too well. This is what made David Cronenberg’s early body horror so compelling and, in retrospect, so visionary: in films such as 1977’s Rabid (a biker develops a blood-sucking orifice) and 1979’s The Brood (ectopic pregnancies manifest a divorcée’s rage), the body doesn’t give a stuff about anyone’s PhD; it has its own ideas about what it wants to be.

And so it has proved. Not only does cloning rarely succeed; the clone that manages to survive to term will most likely be deformed, or die of cancer, or keel over for some other more or less mysterious reason. After cloning Dolly the sheep, Wilmut and his team tried to clone another lamb; it hyperventilated so much it kept passing out.

***

It is conceivable, I suppose, that hundreds of years from now, alien intelligences will dust off StoreGene’s recording of my genome and, in a fit of misplaced enthusiasm, set about growing a copy of me in a modishly lit plexiglass tank. Much good may it do them: the clone they’re growing will bear only a passing physical resemblance to me, and he and I will share only the very broadest psychological and emotional similarity. Genes make a big contribution to the development process, but they’re not in overall charge of it. Even identical twins, nature’s own clones, are easy to tell apart, especially when they start speaking.

Call me naive, but I’m not too worried about vast and cool and unsympathetic intellects, alien or otherwise, getting hold of my genetic data. It’s the thought of what all my other data may be up to that keeps me up at night.

Swedish political scientist Carl Öhman’s The Afterlife of Data, published earlier this year, recounts the experiences of a young man who, having lost his father ten years previously, finds that they can still compete against each other on an old Xbox racing game. That is, he can play against his father’s saved games, again and again. (Of course he’s now living in dread of the day the Xbox eventually breaks and his dad dies a second time.)

The digital world has been part of our lives for most of our lives, if not all of them. We are each of us mirrored there. And there’s this in common between exploring digital technology and exploring the Moon: no wind will come along to blow away our footprints.

Öhman’s book is mostly an exploration of the unstable but fast-growing sector of “grieving technologies” which create – from our digital footprints – chatbots, which our grieving loved ones can interrogate on those long lonely winter evenings. Rather more uncanny, to my mind, are those chatbots of us that stalk the internet while we’re still alive, causing trouble on our behalf. How long will it be before my wife starts ringing me up out of the blue to ask me the PIN for our joint debit card?

Answer: in no time at all, at least according to a note on “human machine teaming” published six (six!) years ago by the Ministry of Defence. Its prediction that “forgeries are likely to constitute a large proportion of online content” was stuffily phrased, but accurate enough: in 2023 nearly half of all internet traffic came from bots.

At what point does a picture of yourself acquire its own reality? At what point does that picture’s existence start ruining your life? Oscar Wilde took a stab at what in 1891 must have seemed a very noodly question with his novel The Picture of Dorian Gray. 130-odd years later, Sarah Snook’s one-woman take on the story at London’s Haymarket Theatre employed digital beauty filters and multiple screens in what felt less like an updating of Wilde’s story, more its apocalyptic restatement: all lives end, and a life wholly given over to the pursuit of beauty and pleasure is not going to end well.

In 2021, users of TikTok noticed that the platform’s default front-facing camera was slimming down their faces, smoothing their skin, whitening their teeth and altering the size of their eyes and noses. (You couldn’t disable this feature, either.) When you play with these apps, you begin to appreciate their uncanny pull. I remember the first time TikTok’s “Bold Glamour” filter, released last year, mapped itself over my image with an absolute seamlessness. Quite simply, a better me appeared in the phone’s digital mirror. When I gurned, it gurned. When I laughed, it laughed. It had me fixated for days and, for heaven’s sake, I’m a middle-aged bloke. Girls, you’re the target audience here. If you want to know what your better selves are up to, all you have to do is look into your smartphone.

Better yet, head to a clinic near you (while there are still appointments available), get your fill of fillers, and while your face is swelling like an Aardman Animations outtake, listen in as practitioners of variable experience and capacity talk glibly of “Zoom-face dysphoria”.
That this self-transfiguring trend has disfigured a generation is not really the worry. The Kardashian visage (tan by Baywatch, brows and eye shape by Bollywood, lips from Atlanta, cheeks from Pocahontas, nose from Gwyneth Paltrow) is a mostly non-surgical artefact – a hyaluronic-acid trip that will melt away in six months to a year, once people come to their senses. What really matters is that among school-age girls, rates of depression and self-harm are through the roof. I had a whale of a time at that screening of The Substance. But the terrifying reality is that the film isn’t for me; it’s for them.

Malleable meat

Watching Carey Born’s Cyborg: A Documentary for New Scientist

Neil Harbisson grew up in Barcelona and studied music composition at Dartington College of Arts in the UK. He lives with achromatism: he is unable to perceive colour of any kind. Not one to ignore a challenge, in 2003 Harbisson recruited product designer Adam Montandon to build him a head-mounted rig that would turn colours into musical notes that he could listen to through earphones. Now in his forties, Harbisson has evolved. The camera on its pencil-thin stalk and the sound generator are permanently fused to the back of his skull: he hears the colours around him through bone conduction.

If “hears” is quite the word: watching Carey Born’s Cyborg: A Documentary, we occasionally catch Harbisson thinking seriously and intelligently about how the senses operate. He doesn’t hear colour so much as see it. His unconventional colour organ is startling to outsiders — what is that chap doing with an antenna springing out the back of his head? But Harbisson’s brain is long used to the antenna’s input, and treats it like any other visual information. Harbisson says he knew his experiment was a success when he started to dream in colour.

Body modification in art has a long history, albeit a rather vexed one. I can remember the Australian performance artist Stelarc hanging from flesh hooks, pronouncing on the obsolescence of the body. (My date did not go well.) Stelarc doesn’t do that sort of thing any more. Next year he celebrates his eightieth birthday. You can declare victory over the flesh as much as you like: time gets the last laugh.

The way Harbisson has hacked his own perceptions leaves him with very little to do but talk about his experiences. He can’t really demonstrate them the way his partner Moon Ribas can. The dancer-choreographer has had an internet-enabled vibrating doo-dad fitted in her left arm which, when she’s dancing, tells her when and how vigorously to respond to earthquakes.

Harbisson meanwhile is stuck in radio studios and behind lecterns explaining what it’s like to have a friend send the colours of an Australian sunset to the back of his skull — to which a radio talk-show guest objects: Wouldn’t receiving a postcard of an Australian sunset amount to the same thing?

Born’s uncritical approach to her subject never really digs into this perfectly sensible question — and this is a pity. Harbisson says he has weathered months-long headaches and episodes of depression in an effort to extend his senses, but all outsiders ever care about is the tech, and what it can do.

One recent wheeze from Harbisson and his collaborators is a headband that tells you the time by heating spots on your skull. Obviously a watch offers a more accurate measure. Less obviously, the headband is supposed to create a new sense in the wearer: an embodied, pre-conscious awareness of solar-planetary motion. The technology is fun, but what really matters is what new senses may be out there for us to enjoy.

I find it slightly irksome to be having to explain Harbisson’s work, since Harbisson hardly bothers. The lecture, the talk-show, the panel and the photoshoot are his gallery and stage, and for over twenty years now, the man with the stalk coming out of his head has been giving his audience what they have come to expect: a ringing endorsement of transhumanism, the philosophy that would have us treat our bodies as so much malleable meat. In 2010 he co-founded the Cyborg Foundation to defend cyborg rights. In 2017, he co-founded the Transpecies Society to give a voice to people with non-human identities. It’s all very idealistic and also quite endearingly old-fashioned in its otherworldliness — as though the plasticity or otherwise of the body were not already a burning social issue, and staple ordnance in today’s culture wars.

I wish Born had gone to the bother of challenging her subject. Penetrate their shell of schooled narcissism and you occasionally find that conceptual artists have something to say.

A citadel beset by germs

Watching Mariam Ghani’s Dis-Ease for New Scientist

There aren’t many laugh-out-loud moments in Mariam Ghani’s long documentary about our war on germs. The sight of two British colonial hunters in Ceylon bringing down a gigantic papier mâché mosquito is a highlight.

Ghani intercuts public information films (a rich source of sometimes inadvertent comedy) with monster movies, documentaries, thrillers, newsreel and histology lab footage to tell the story of an abiding medical metaphor: the body as citadel, beset by germs.

Dis-Ease, which began life as an artistic residency at the Wellcome Institute, is a visual feast, with a strong internal logic. Had it been left to stand on its own feet, then it might have borne comparison with Godfrey Reggio’s Koyaanisqatsi and Simon Pummell’s Bodysong: films which convey their ideas in purely visual terms.

But the Afghan-American photographer Ghani is just as devoted to the power of words. Interviews and voice-overs abound. The result is a messy collision of two otherwise perfectly valid documentary styles.

There’s little in Dis-Ease’s narrative to take exception to. Humoral theory (in which the sick body falls out of internal balance) was a central principle in Western medicine from antiquity into the 19th century. It was eventually superseded by germ theory, in which the sick body is assailed by pathogens. Germ theory enabled globally transformative advances in public health, but it was most effectively conveyed through military metaphors, and these quickly acquired a life of their own. In its brief foray into the history of eugenics, Dis-Ease reveals, in stark terms, how “wars on disease” mutate into wars on groups of people.

A “war on disease” also preserves and accentuates social inequities, the prevailing assumption being that outbreaks spread from the developing south to the developed north, and the north then responds by deploying technological fixes in the opposite direction.

At its very founding in 1948, the World Health Organisation argued against this idea, and the eradication of smallpox in 1980 was achieved through international consensus, by funding primary health care across the globe. The attempted eradication of polio, begun in 1988, has been a good deal more problematic, and the film argues that this is down to the developed world’s imposition by fiat of a very narrow medical brief, even as health care services in even the poorest countries were coming under pressure to privatise.

Ecosystems are being eroded, and zoonotic diseases are emerging with ever greater frequency. Increasingly robust and well-coordinated military responses to frightening outbreaks are understandable and they can, in the short term, be quite effective. For example: to criticise the way British and Sierra Leonean militaries intervened in Sierra Leone in 2014 to establish a National Ebola Response Centre would be to put ideology in the way of common sense.

Still, the film argues, such actions may worsen problems on the ground, since they absorb all the money and political will that might have been spent on public health necessities like housing and sanitation (and a note to Bond villains here: the surest way to trigger a global pandemic is to undermine the health of some small exposed population).

In interview, the sociologist Hannah Landecker points out that since adopting germ theory, we have been managing life with death. (Indeed, that is pretty much exactly what the word “antibiotic” means.) Knowing what we know now about the sheer complexity and vastness of the microbial world, we should now be looking to manage life with life, collaborating with the microbiome, ensuring health rather than combating disease.

What this means exactly is beyond the scope of Ghani’s film, and some of the gestures here towards a “one health” model of medicine — as when a hippy couple start repeating the refrain “life and death are one” — caused this reviewer some moral discomfort.

Anthropologists and sociologists dominate Dis-Ease’s discourse, making it a snapshot of what today’s generation of desk-bound academics think about disease. Many speak sense, though a special circle of Hell is being reserved for the one who, having read too much science fiction, glibly asserts that we can be cured “by becoming something else entirely”.

If they’re out there, why aren’t they here?

The release of Alien: Romulus inspired this article for the Telegraph

On August 16, Fede Alvarez returns the notorious Alien franchise to its monster-movie roots, and feeds yet another batch of hapless young space colonists to a nest of “xenomorphs”.
Will Alien: Romulus do more than lovingly pay tribute to Ridley Scott’s original 1979 Alien? Does it matter? Alien is a franchise that survives despite the additions to its canon, rather than because of them. Bad outings have not bankrupted its grim message, and the most visionary reimaginings have not altered it.

The original Alien is itself a scowling retread of 1974’s Dark Star, John Carpenter’s nihilist-hippy debut, about an interstellar wrecking crew cast unimaginably far from home, bored to death and intermittently terrorised by a mischievous alien beach ball. Dan O’Bannon co-wrote both Dark Star and Alien, and inside every prehensile-jawed xenomorph there’s an O’Bannonesque balloon critter snickering away.

O’Bannon’s cosmic joke goes something like this: we escaped the food-chain on Earth, only to find ourselves at the bottom of an even bigger, more terrible food chain Out There among the stars.

You don’t need an adventure in outer space to see the lesson. John Carpenter went on to make The Thing (1982), in which the intelligent and resourceful crew of an Antarctic base are reduced to chum by one alien’s peckishness.

You don’t even need an alien. Jaws dropped the good folk of Amity Island NY back into the food chain, and that pre-dated Alien by four years.

Alien, according to O’Bannon’s famous pitch-line, was “like Jaws in space”, but by moving the action into space, it added a whole new level of existential dread. Alien shows us that if nature is red in tooth and claw here on Earth, then chances are it will be so up there. The heavens cannot possibly be heavenly: now here was an idea calculated to strike fear in fans of 1982’s ET the Extra-Terrestrial.

In ET, intelligence counts – the visiting space traveller is benign because it is a space traveller. Any species smart enough to travel among the stars is also smart enough not to go around gobbling up the neighbours. Indeed, the whole point of space travel turns out to be botany and gardening.

Ridley Scott’s later Alien outings Prometheus (2012) and Covenant (2017) are, in their turn, muddled counter-arguments to ET; in them, cosmic gardeners called Engineers gleefully spread an invasive species (a black xenomorph-inducing dust) across the cosmos.

“But, for the love of God – why?” ask ET fans, their big trusting-kitten eyes tearing up at all this interstellar mayhem. And they have a point. Violence makes evolutionary sense when you have to compete over limited resources. The moment you journey among the stars, though, the resources available to you are to all intents and purposes infinite. In space, assuming you can navigate comfortably through it, there is absolutely no point in being hostile.

If the prospect of interstellar life has provided the perfect conditions for numerous Hollywood blockbusters, then the real-life hunt for aliens has had more mixed results. When Paris’s Exposition Universelle opened in 1900, it was full of wonders: the world’s largest telescope, a 45-metre-diameter “Cosmorama” (a sort of restaurant-cum-planetarium), and the announcement of a prize offered by the ageing socialite Clara Gouget: 100,000 francs (£500,000 in today’s money) to the first person to contact an extraterrestrial species.

Extraterrestrials were not a strange idea by 1900. The habitability of other worlds had been discussed seriously for centuries, and proposals on how to communicate with other planets were mounting up: these projects involved everything from mirrors to trenches, lines of trees and earthworks visible from space.

What really should arrest our attention is the exclusion clause written into the prize’s small print. Communicating with Mars wouldn’t win you anything, since communications with Mars were already being established. Radio pioneers Nikola Tesla and Guglielmo Marconi both reckoned they had received signals from outer space. Meanwhile Percival Lowell, a brilliant astronomer working at the very limits of optical science, had found gigantic irrigation works on the red planet’s surface: in his 1895 book Mars he published clear visual evidence of Martian civilisation.

Half a century later, our ideas about aliens had changed. Further study of Mars and Venus had shown them to be lifeless, or as good as. Meanwhile the cosmos had turned out to be exponentially larger than anyone had thought in 1900. Larger – but still utterly silent.

***

In the summer of 1950, during a lunchtime conversation with fellow physicists Edward Teller, Herbert York and Emil Konopinski at Los Alamos National Laboratory in New Mexico, the Italian-American physicist Enrico Fermi finally gave voice to the problem: “Where is everybody?”

The galaxy is old enough that any intelligent species could already have visited every star system a thousand times over, armed with nothing more than twentieth-century rocket technology. Time enough has passed for galactic empires to rise and fall. And yet, when we look up, we find absolutely no evidence for them.

We started to hunt for alien civilisations using radio telescopes in 1960. Our perfectly reasonable attitude was: if we are here, why shouldn’t they be there? The possibilities for life in the cosmos bloomed all around us. We found that almost all stars have planets, and most of them have rocky planets orbiting in the habitable zones of their stars. Water is everywhere: evidence exists for four alien oceans in our own solar system alone, on Saturn’s moon Enceladus and on Jupiter’s moons Europa, Ganymede and Callisto. On Earth, microbes have been found that can withstand the rigours of outer space. Large meteor strikes have no doubt propelled them into space from time to time. Even now, some of the hardier varieties may be flourishing in odd corners of Mars.

All of which makes the cosmic silence still more troubling.

Maybe ET just isn’t interested in us. You can see why. Space travel has proved a lot more difficult to achieve than we expected, and unimaginably more expensive. Visiting even very near neighbours is next-to-impossible. Space is big, and it’s hard to see how travel-times, even to our nearest planets, wouldn’t destroy a living crew.

Travel between star systems is a whole other order of impossible. Even allowing for the series’ unpardonably dodgy physics, it remains an inconvenient truth that every time Star Trek’s USS Enterprise hops between star systems, the energy has to come from somewhere — is the United Federation of Planets dismantling, refining and extinguishing whole moons?

Life, even intelligent life, may be common throughout the universe – but then, each instance of it must live and die in isolation. The distances between stars are so great that even radio communication is impractical. Civilisations are, by definition, high-energy phenomena, and all high-energy phenomena burn out quickly. By the time we receive a possible signal from an extraterrestrial civilisation, that civilisation will most likely have already died or forgotten itself or changed out of all recognition.

It gets worse. The universe creates different kinds of suns as it ages. Suns like our own are an old model, and they’re already blinking out. Life like ours has already had its heyday in the cosmos, and one very likely answer to our question “Where is everybody?” is: “You came too late to the party”.

Others have posited even more disturbing theories for the silence. Cixin Liu is a Chinese science fiction novelist whose Hugo Award-winning The Three Body Problem (2008) recently teleported to Netflix. According to Liu’s notion of the cosmos as a “dark forest”, spacefaring species are by definition so technologically advanced, no mere planet could mount a defence against them. Better, then, to keep silent: there may be wolves out there, and the longer our neighbouring star systems stay silent, the more likely it is that the wolves are near.

Russian rocket pioneer Konstantin Tsiolkovsky, who was puzzling over our silent skies a couple of decades before Enrico Fermi, was more optimistic. Spacefaring civilisations are all around us, he said, and (pre-figuring ET) they are gardening the cosmos. They understand what we have already discovered — that when technologically mismatched civilisations collide, the consequences for the weaker civilisation can be catastrophic. So they will no more communicate with us, in our nascent, fragile, planet-bound state, than Spielberg’s extraterrestrial would over-water a plant.

In this, Tsiolkovsky’s aliens show unlikely self-restraint. The trouble with intelligent beings is that they can’t leave things well enough alone. That is how we know they are intelligent. Interfering with stuff is the point.

Writing in the 1960s and 1970s, the Soviet science fiction novelists and brothers Arkady and Boris Strugatsky argued — in novels like 1964’s Hard to Be a God — that the sole point of life for a spacefaring species would be to see to the universe’s well-being by nurturing sentience, consciousness, and even happiness. To which Puppen, one of their most engaging alien protagonists, grumbles: Yes, but what sort of consciousness? What sort of happiness? In their 1985 novel The Waves Extinguish the Wind, alien-chaser Toivo Glumov complains, “Nobody believes that the Wanderers intend to do us harm. That is indeed extremely unlikely. It’s something else that scares us! We’re afraid that they will come and do good, as they understand it!”

Fear, above all enemies, the ones who think they’re doing you a favour.

In the Strugatskys’ wonderfully paranoid Noon Universe stories, the aliens already walk among us, tweaking our history, nudging us towards their idea of the good life.

Maybe this is happening for real. How would you know, either way? The way I see it, alien investigators are even now quietly mowing their lawns in, say, Slough. They live like humans, laugh and love like humans; they even die like humans. In their spare time they write exquisite short stories about the vagaries of the human condition, and it hasn’t once occurred to them (thanks to their memory blocks) that they’re actually delivering vital strategic intelligence to a mothership hiding behind the moon.

You can pooh-pooh my little fantasy all you want; I defy you to disprove it. That’s the problem, you see. Aliens can’t be discussed scientifically. They’re not merely physical phenomena, whose abstract existence can be proved or disproved through experiment and observation. They know what’s going on around them, and they can respond accordingly. They’re by definition clever, elusive, and above all unpredictable. The whole point of having a mind, after all, is that you can be constantly changing it.

The Polish writer Stanislaw Lem had a spectacularly bleak solution to Fermi’s question that’s best articulated in his last novel, 1986’s Fiasco. By the time a civilisation is in a position to communicate with others, he argues, it’s already become hopelessly eccentric and self-involved. At best its individuals will be living in simulations; at worst, they will be fighting Pyrrhic, planet-busting wars against their own shadows. In Fiasco, the crew of the Eurydice discover, too late, that they’re quite as fatally self-obsessed as the aliens they encounter.
We see the world through our own particular and peculiar evolutionary perspective. That’s the bottom line. We’re from Earth, and this gives us a very clear, very narrow idea of what life is and what intelligence looks like.

We out-competed our evolutionary cousins long ago, and for the whole of our recorded history, we’ve been the only species we know that sports anything like our kind of intelligence. We’ve only had ourselves to think about, and our long, lonely self-obsession may have sent us slightly mad. We’re not equipped to meet aliens – only mirrors of ourselves. Only angels. Only monsters.

And the xenomorphs lurking aboard the Romulus are, worse luck, most likely in the same bind.