Doing an Elisabeth

Coralie Fargeat’s The Substance inspired this Telegraph article about copies and clones

Hollywood has-been Elisabeth Sparkle didn’t look where she was going, and got badly shaken about in a traffic accident. Now she’s in the emergency room, and an unfeasibly handsome young male nurse is running his fingers down her spine. Nothing’s wrong. On the contrary: Elisabeth (played by Demi Moore) is, she’s told, “a perfect candidate”.

The next day she gets a box through the post. Inside is a kit that will enable her to duplicate herself. The instructions couldn’t be clearer. Even when fully separated, Elisabeth and the younger, better version of herself who’s just spilled amniotically out of her back (Sue, played by Margaret Qualley) are one. While one of them gets to play in the sun for a week, the other must lie in semi-coma, feeding off an intravenous drip. Each week, they swap roles.

Writer-director Coralie Fargeat’s script for The Substance is one of those super-lucid cinematic fun-rides that can’t help but put you in mind of other, admittedly rather better movies. In Joe Mankiewicz’s All About Eve (1950), an actress’s personal assistant plots to steal her career. In John Frankenheimer’s Seconds (1966), Rock Hudson gets his youth back and quickly learns to hate it. In David Cronenberg’s The Fly (1986), biologist Seth Brundle’s experiment in gene splicing is a none-too-subtle metaphor for the ageing process.

Recently, I ran into a biotechnology company called StoreGene. They sent me a blood sample kit in a little box and promised me a lifetime of personalised medicine, so long as I let them read my entire genetic code.

I’m older than Elisabeth Sparkle (sacked from her daytime TV fitness show on her 50th birthday) and a sight less fit than Demi Moore, and so I seized StoreGene’s offer with both palsied, liver-spotted hands.

Now, somewhere in what we call the Cloud (some anonymous data centre outside Chicago, more like) I have a double. Unlike Elisabeth’s Sue, though, my double won’t resent the fact that I am using him as a means. He is not going to flinch, or feel violated in any way, as his virtual self is put through trial after trial.

Every year, more than a million medical research papers are published. It’s impossible to know what this deluge of new discovery means to me personally – but now my GP can find out, at the push of a button, what it means for my genetic data-double.

Should I take this medicine, or that? Should I take more of it, or less of it? What treatment will work; what won’t? No more uncertainty for me: now I am guaranteed to receive treatments that are tailored to me, forever. I’ve just landed, bang, in the middle of a new era of personalised medicine.

Now that there’s a digital clone of me floating around, I have even less reason to want to “do an Elisabeth” and make a flesh-and-blood copy of myself. This will come as a relief to anyone who’s read Kazuo Ishiguro’s 2005 novel Never Let Me Go, and can’t shake off the horror occasioned by that school assembly: “If you’re going to have decent lives,” Miss Lucy tells the children in her care, “then you’ve got to know and know properly… You’ll become adults, then before you’re old, before you’re even middle-aged, you’ll start to donate your vital organs.”

Might we one day farm clones of ourselves to provide our ageing, misused bodies with spare parts? This is by far the best of the straw-man arguments that have been mounted over the years against the idea of human cloning. (Most of the others involve Hitler.)

It at least focuses our minds on a key ethical question: are we ever entitled to use other people as means to an end? But it’s still a straw-man argument, not least because we’re a long way into figuring out how to grow our spare organs in other animals. No ethical worries there! (though the pigs may disagree).

And while such xenotransplantation and other technologies advance by leaps and bounds, reproductive cloning languishes – a rather baroque solution to biomedical problems solved more easily by other means.

Famously, in 1996 Ian Wilmut and colleagues at the Roslin Institute in Scotland successfully cloned Dolly the sheep from the udder cells of a ewe. Dolly was their 277th attempt. She died young. No-one can really say whether this had anything to do with her being a clone, since her creation conspicuously did not open the floodgates to further experimentation. Two decades went by before the first primates were successfully cloned – two crab-eating macaques named Zhong Zhong and Hua Hua. These days it’s possible to clone your pet (Barbra Streisand famously cloned her dog), but my strong advice is, don’t bother: around 96 per cent of all cloning attempts end in failure.

Science-fiction stories, from Aldous Huxley’s Brave New World (1932) to Andrew Niccol’s Gattaca (1997), have conjured up hyper-utilitarian nightmares in which manipulations of the human genome work all too well. This is what made David Cronenberg’s early body horror so compelling and, in retrospect, so visionary: in films such as 1977’s Rabid (a biker develops a blood-sucking orifice) and 1979’s The Brood (ectopic pregnancies manifest a divorcée’s rage), the body doesn’t give a stuff about anyone’s PhD; it has its own ideas about what it wants to be.

And so it has proved. Not only does cloning rarely succeed; the clone that manages to survive to term will most likely be deformed, or die of cancer, or keel over for some other more or less mysterious reason. After cloning Dolly the sheep, Wilmut and his team tried to clone another lamb; it hyperventilated so much it kept passing out.

***

It is conceivable, I suppose, that hundreds of years from now, alien intelligences will dust off StoreGene’s recording of my genome and, in a fit of misplaced enthusiasm, set about growing a copy of me in a modishly lit plexiglass tank. Much good may it do them: the clone they’re growing will bear only a passing physical resemblance to me, and he and I will share only the very broadest psychological and emotional similarity. Genes make a big contribution to the development process, but they’re not in overall charge of it. Even identical twins, nature’s own clones, are easy to tell apart, especially when they start speaking.

Call me naive, but I’m not too worried about vast and cool and unsympathetic intellects, alien or otherwise, getting hold of my genetic data. It’s the thought of what all my other data may be up to that keeps me up at night.

Swedish political scientist Carl Öhman’s The Afterlife of Data, published earlier this year, recounts the experiences of a young man who, having lost his father ten years previously, finds that they can still compete against each other on an old Xbox racing game. That is, he can play against his father’s saved games, again and again. (Of course he’s now living in dread of the day the Xbox eventually breaks and his dad dies a second time.)

The digital world has been part of our lives for most of our lives, if not all of them. We are each of us mirrored there. And there’s this in common between exploring digital technology and exploring the Moon: no wind will come along to blow away our footprints.

Öhman’s book is mostly an exploration of the unstable but fast-growing sector of “grieving technologies” which create – from our digital footprints – chatbots, which our grieving loved ones can interrogate on those long lonely winter evenings. Rather more uncanny, to my mind, are those chatbots of us that stalk the internet while we’re still alive, causing trouble on our behalf. How long will it be before my wife starts ringing me up out of the blue to ask me the PIN for our joint debit card?

Answer: in no time at all, at least according to a note on “human machine teaming” published six (six!) years ago by the Ministry of Defence. Its prediction that “forgeries are likely to constitute a large proportion of online content” was stuffily phrased, but accurate enough: in 2023 nearly half of all internet traffic came from bots.

At what point does a picture of yourself acquire its own reality? At what point does that picture’s existence start ruining your life? Oscar Wilde took a stab at what in 1891 must have seemed a very noodly question with his novel The Picture of Dorian Gray. 130-odd years later, Sarah Snook’s one-woman take on the story at London’s Haymarket Theatre employed digital beauty filters and multiple screens in what felt less like an updating of Wilde’s story, more its apocalyptic restatement: all lives end, and a life wholly given over to the pursuit of beauty and pleasure is not going to end well.

In 2021, users of TikTok noticed that the platform’s default front-facing camera was slimming down their faces, smoothing their skin, whitening their teeth and altering the size of their eyes and noses. (You couldn’t disable this feature, either.) When you play with these apps, you begin to appreciate their uncanny pull. I remember the first time TikTok’s “Bold Glamour” filter, released last year, mapped itself over my image with an absolute seamlessness. Quite simply, a better me appeared in the phone’s digital mirror. When I gurned, it gurned. When I laughed, it laughed. It had me fixated for days and, for heaven’s sake, I’m a middle-aged bloke. Girls, you’re the target audience here. If you want to know what your better selves are up to, all you have to do is look into your smartphone.

Better yet, head to a clinic near you (while there are still appointments available), get your fill of fillers, and while your face is swelling like an Aardman Animations outtake, listen in as practitioners of variable experience and capacity talk glibly of “Zoom-face dysphoria”.

That this self-transfiguring trend has disfigured a generation is not really the worry. The Kardashian visage (tan by Baywatch, brows and eye shape by Bollywood, lips from Atlanta, cheeks from Pocahontas, nose from Gwyneth Paltrow) is a mostly non-surgical artefact – a hyaluronic-acid trip that will melt away in six months to a year, once people come to their senses. What really matters is that among school-age girls, rates of depression and self-harm are through the roof. I had a whale of a time at that screening of The Substance. But the terrifying reality is that the film isn’t for me; it’s for them.

A lawyer scenting blood

Reading Unwired by Gaia Bernstein for New Statesman, 15 May 2023

In 2005, the journal Obesity Research published a study that, had we but known it, told us everything we needed to know about our coming addiction to digital devices.

The paper, “Bottomless Bowls: Why Visual Cues of Portion Size May Influence Intake”, was about soup. Researchers led by Brian Wansink of Cornell University invited volunteers to lunch. One group ate as much soup as they wanted from regular bowls. The other ate from bowls that were bolted to the table and refilled automatically from below. Deprived of the “stopping signal” of an empty bowl, this latter group ate 73 per cent more than the others — and had no idea that they had over-eaten.

It’s a tale that must haunt the dreams of Aza Raskin, the man who invented, then publicly regretted, “infinite scroll”. That’s the way mobile phone apps (from Facebook to Instagram, Twitter to Snapchat) provide endless lists of fresh content to the user, regardless of how much content has already been consumed.

Gaia Bernstein, a law professor at Seton Hall, includes infinite scroll in her book’s catalogue of addicting smart-device features. But the book is as much about what these devices don’t do. For instance, in his 2022 book Stolen Focus, Johann Hari wonders why Facebook never tells you which of your friends are nearby and up for a coffee. Well, the answer’s obvious enough: because lonely people, self-medicating with increasing quantities of social media, are Facebook’s way of making money.

What do we mean when we say that our mobile phones and tablets and other smart devices are addicting?

The idea of behavioural addiction was enshrined in DSM-5, the manual of mental disorders issued by the American Psychiatric Association in 2013. DSM-5 is a bloated beast, and yet its flaky-sounding “Behavioral Addictions” — that, on the face of it, could make a mental disorder of everything we like to do — have proved remarkably robust, as medicine reveals how addictions, compulsions and enthusiasms share the same neurological pathways. You can addict humans (and not just humans) to pretty much anything. All you need to do is weaponise the environment.

And the environment, according to Bernstein’s spare, functional and frightening account, is most certainly weaponised. Teenagers, says Bernstein, spend barely a third of the time partying that they used to in the 1980s, and the number of teens who get together with their friends halved between 2000 and 2015. If ever there was a time to market a service to lonely people by making them more lonely, it’s now.

For those of us who want to sue GAMA (Google, Amazon, Meta, Apple) for our children’s lost childhood, galloping anxiety, poor impulse control, obesity, insomnia and raised suicide risk, the challenge is to demonstrate that it’s screentime that’s done all this damage to how they feel, and how they behave. And that, in an era of helicopter-parenting, is hard to do. danah boyd’s 2014 book It’s Complicated shows how difficult it’s going to be to separate the harms inflicted by little Johnny’s iPhone from all the benefits little Johnny enjoys. To hear boyd tell it, teenagers “obsessed” with social media are simply trying to recreate, for themselves and each other, a social space denied them by anxious parents, hostile authorities, and a mass media bent on exaggerating every conceivable out-of-doors danger.

The Covid pandemic has only exacerbated the stay-at-home, see-no-one trend among young people. Children’s average time online doubled from three to six hours during lockdown. It used to be that four per cent of children spent more than eight hours a day in front of a smart screen. Now over a quarter of them do.

Nor have we merely inherited this dismal state of affairs; we’ve positively encouraged it, stuffing our schools with technological geegaws in the fond and (as it turns out) wildly naive belief that I.T. will improve and equalise classroom performance. (It doesn’t, which is why Silicon Valley higher-ups typically send their children to Waldorf schools, which use chalk, right up until the eighth grade.)

Bernstein, who regularly peppers an otherwise quite dry account with some eye-popping personal testimony, recalls meeting one mum whose son was set to studying history through a Roblox game mode called Assassin’s Creed Odyssey (set in ancient Greece). “Since then, whenever she asks him to get off Roblox, he insists it is homework.”

Bernstein believes there’s more to all this than a series of unfortunate events. She thinks the makers of smart devices knew exactly what they were doing, as surely as the tobacco companies knew that the cigarettes they manufactured caused cancer.

Bernstein reckons we’re at a legal tipping point: this is her playbook for making GAMA pay for addicting us to glass.

Here’s what we already know about how companies respond to being caught out in massive wrong-doing.

First, they ignore the problem. (In 2018 an internal Facebook presentation warned: “Our algorithm exploits the human brain’s attraction to divisiveness… If left unchecked [it would feed users] more and more divisive content to gain user attention & increase time on the platform.” Mark Zuckerberg responded by asking his people “not to bring something like that to him again”.)

Then they deny there’s a problem. Then they go to war with the science, disputing critical studies and producing their own. Then, they fend off public criticism — and place responsibility on the consumer — by offering targeted solutions. (At least the filter tips added to cigarettes were easy to use. Most “parental controls” on smart devices are so cumbersome and inaccessible as to be unusable.) Finally, they offer to create a system of self-regulation — by which time, Bernstein reckons, you will have won, so long as you have proven that the people you’re going after intended, all along, to addict their customers.

You might, naively, imagine that this matter rests upon the science. It doesn’t, and Bernstein’s account of the screentime science wars is quite weak — a shallow confection built largely of single studies.

The scientific evidence is stronger than Bernstein makes it sound, but there’s still a problem: it’ll take a generation to consolidate. There are other, better ways to get at the truth in a timely manner; for instance, statistics, which will tell you that we have the largest ever recorded epidemic of teenage mental health problems, whose rising curves correlate with terrifying neatness with the launch of various social media platforms.

Bernstein is optimistic: “Justifying legal interventions,” she says, “is easier when the goal is to correct a loss of autonomy”, and this after all, is the main charge she’s laying at GAMA’s door: that these companies have created devices that rob us of our will, leaving us ever more civically and psychologically inept, the more we’re glued to their products.

Even better (at least from the point of view of a lawyer scenting blood), we’re talking about children. “Minors are the Achilles heel,” Bernstein announces repeatedly, and with something like glee. Remember how the image of children breathing in their parents’ second-hand smoke broke big tobacco? Well, just extend the analogy: here we have a playground full of kids taking free drags of Capstans and Players No. 6.

Unwired is not, and does not aspire to be, a comprehensive account of the screen-addiction phenomenon. It exists to be used: an agenda for social change through legal action. It is a knife, not a brush. But it’ll be of much more than academic value to those of us whose parenting years were overshadowed by feelings of guilt, frustration and anxiety, as we fought our hopeless battles, and lost our children to TikTok and Fortnite.

Reality trumped

Reading You Are Here: A field guide for navigating polarized speech, conspiracy theories, and our polluted media landscape by Whitney Phillips and Ryan M. Milner (MIT Press)
for New Scientist, 3 March 2021

This is a book about pollution, not of the physical environment, but of our civic discourse. It is about disinformation (false and misleading information deliberately spread), misinformation (false and misleading information inadvertently spread), and malinformation (information with a basis in reality spread pointedly and specifically to cause harm).

Communications experts Whitney Phillips and Ryan M. Milner completed their book just prior to the US presidential election that replaced Donald Trump with Joe Biden. That election, and the seditious activities that prompted Trump’s second impeachment, have clarified many of the issues Phillips and Milner have gone to such pains to explore. Though events have stolen some of their thunder, You Are Here remains an invaluable snapshot of our current social and technological problems around news, truth and fact.

The authors’ US-centric (but universally applicable) account of “fake news” begins with the rise of the Ku Klux Klan. Its deliberately silly name, cartoonish robes, and quaint routines (which accompanied all its activities, from rallies to lynchings) prefigured the “only-joking” subcultures (Pepe the Frog and the like) dominating so much of our contemporary social media. Next, an examination of the Satanic panics of the 1980s reveals much about the birth and growth of conspiracy theories. The authors’ last act is an unpicking of QAnon — a current far-right conspiracy theory alleging that a secret cabal of cannibalistic Satan-worshippers plotted against former U.S. president Donald Trump. This brings the threads of their argument together in a conclusion all the more apocalyptic for being so closely argued.

Polluted information is, they argue, our latest public health emergency. By treating the information sphere as an ecology under threat, the authors push past factionalism to reveal how, when we use media, “the everyday actions of everyone else feed into and are reinforced by the worst actions of the worst actors”.

This is their most striking takeaway: that the media machine that enabled QAnon isn’t a machine out of alignment, or out of control, or somehow infected: it’s a system working exactly as designed — “a system that damages so much because it works so well”.

This media machine is founded on principles that, in and of themselves, seem only laudable. Top of the list is the idea that to counter harms, we have to call attention to them: “in other words, that light disinfects”.

This is a grand philosophy, for so long as light is hard to generate. But what happens when the light — the confluence of competing information sets, depicting competing realities — becomes blinding?

Take Google as an example. Google is an advertising platform that makes money the more its users use the internet to “get to the bottom of things”. The deeper the rabbit-holes go, the more money Google makes. This sets up a powerful incentive for “conspiracy entrepreneurs” to produce content, creating “alternative media echo-systems”. When the facts run out, create alternative facts. “The algorithm” (if you’ll forgive this reviewer’s dicey shorthand) doesn’t care. “The algorithm” is, in fact, designed to serve up as much pollution as possible.

What’s to be done? Here the authors hit a quite sizeable snag. They claim they’re not asking “for people to ‘remain civil’”. They claim they’re not commanding us, “don’t feed the trolls.” But so far as I could see, this is exactly what they’re saying — and good for them.

With the machismo typical of the social sciences, the authors call for “foundational, systematic, top-to-bottom change,” whatever that is supposed to mean, when what they are actually advocating is a sense of personal decency, a contempt for anonymity, a willingness to stand by what one says come hell or high water, politeness and consideration, and a willingness to listen.

These are not political ideas. These are qualities of character. One might even call them virtues, of a sort that were once particularly prized by conservatives.

Phillips and Milner bemoan the way market capitalism has swallowed political discourse. They teeter on a much more important truth: that politics has swallowed our moral discourse. Social media has made whining cowards of us all. You Are Here comes dangerously close to saying so. If you listen carefully, there’s a still, small voice hidden in this book, telling us all to grow up.

An inanimate object worshipped for its supposed magical powers

Watching iHuman directed by Tonje Hessen Schei for New Scientist, 6 January 2021

Tonje Hessen Schei is a Norwegian documentary maker who has won numerous awards for her explorations of humans, machines and the environment. In 2010 she made Play Again, exploring digital media addiction among children. In 2014 came Drone, about the CIA’s secret role in drone warfare.

Now, with iHuman, she tackles — well, what, exactly? iHuman is a weird, portmanteau diatribe against computation — specifically, that branch of it that allows machines to learn about learning. Artificial general intelligence, in other words.

Incisive in parts, often overzealous, and wholly lacking in scepticism, iHuman is an apocalyptic vision of humanity already in thrall to the thinking machine, put together from intellectual celebrity soundbites, and illustrated with a lot of upside-down drone footage and digital mirror effects, so that the whole film resembles nothing so much as a particularly lengthy and drug-fuelled opening credits sequence to the crime drama Bosch.

That’s not to say that Schei is necessarily wrong, or that our Faustian tinkering hasn’t doomed us to a regimented future as a kind of especially sentient cattle. The film opens with that quotation from Stephen Hawking, about how “Success in creating AI might be the biggest success in human history. Unfortunately, it might also be the last.” If that statement seems rather heated to you, go visit Xinjiang, China, where a population of 13 million Turkic Muslims (Uyghurs and others) are living under AI surveillance and predictive policing.

Nor are the film’s speculations particularly wrong-headed. It’s hard, for example, to fault the line of reasoning that leads Robert Work, former US under-secretary of defense, to fear autonomous killing machines, since “an authoritarian regime will have less problem delegating authority to a machine to make lethal decisions.”

iHuman’s great strength is its commitment to the bleak idea that it only takes one bad actor to weaponise artificial general intelligence before everyone else has to follow suit in their own defence, killing, spying and brainwashing whole populations as they go.

The great weakness of iHuman lies in its attempt to throw everything into the argument: social media addiction, prejudice bubbles, election manipulation, deep fakes, automation of cognitive tasks, facial recognition, social credit scores, autonomous killing machines…

Of all the threats Schei identifies, the one conspicuously missing is hype. For instance, we still await convincing evidence that Cambridge Analytica’s social media snake oil can influence the outcome of elections. And researchers still cannot replicate psychologist Michal Kosinski’s claim that his algorithms can determine a person’s sexuality and even their political leanings from their physiognomy.

Much of the current furore around AI looks jolly small and silly once you remember that the major funding model for AI development is advertising. Most every millennial claim about how our feelings and opinions can be shaped by social media is a retread of claims made in the 1910s for the billboard and the radio. All new media are terrifyingly powerful. And all new media age very quickly indeed.

So there I was hiding behind the sofa and watching iHuman between slitted fingers (the score is terrifying, and artist Theodor Groeneboom’s animations of what the internet sees when it looks in the mirror are the stuff of nightmares) when it occurred to me to look up the word “fetish”. To refresh your memory, a fetish is an inanimate object worshipped for its supposed magical powers or because it is considered to be inhabited by a spirit.

iHuman is a profoundly fetishistic film, worshipping at the altar of a god it has itself manufactured, and never more unctuously than when it lingers on the athletic form of AI guru Jürgen Schmidhuber (never trust a man in white Levi’s) as he complacently imagines a post-human future. Nowhere is there mention of the work being done to normalise, domesticate, and defang our latest creations.

How can we possibly stand up to our new robot overlords?

Try politics, would be my humble suggestion.

Over-performing human

Talking to choreographer Alexander Whitley for the Financial Times, 28 February 2020

On a dim and empty stage, six masked black-clad dancers, half-visible, their limbs edged in light, run through attitude after attitude, emotion after emotion. Above the dancers, a long tube of white light slowly rises, falls, tips and circles, drawing the dancers’ limbs and faces towards itself like a magnet. Under its variable cold light, movements become more expressive, more laden with emotion, more violent.

Alexander Whitley, formerly of the Royal Ballet School and the Birmingham Royal Ballet, is six years into a project to expand the staging of dance with new media. He has collaborated with filmmakers, designers, digital artists and composers. Most of all, he has played games with light.

The experiments began with The Measures Taken, in 2014. Whitley used motion-tracking technology to project visuals that interacted with the performers’ movements. Then, dissatisfied with the way the projections obscured the dancers, in 2018 he used haze and narrowly focused bars of light to create, for Strange Stranger, a virtual “maze” in which his dancers found themselves alternately liberated and constrained.

At 70 minutes, Overflow, commissioned by Sadler’s Wells Theatre, represents a massive leap in ambition. With several long-time collaborators — in particular the Dutch artist-designers Children of the Light — Whitley has worked out how to reveal, to an audience sat just a few feet away, exactly what he wants them to see.

Whitley is busy nursing Overflow up to speed in time for its spring tour. The company begin with a night at the Lowry in Salford on 18 March, before performing at Sadler’s Wells on 17 and 18 April.

Overflow, nearly two years in the making, has consumed money as well as time. The company is performing at Stereolux in Nantes in April and will need more overseas bookings if it is to flourish. “There’s serious doubt about the status of the UK and UK touring companies now,” says Whitley (snapping at my cheaply dangled Brexit bait); “I hope there’s enough common will to build relationships in spite of the political situation.”

It is easy to talk politics with Whitley (he is very well read), but his dances are anything but mere vehicles for ideas. And while Overflow is a political piece by any measure — a survey of our spiritual condition under surveillance capitalism, for heaven’s sake — its effects are strikingly classical. It’s not just the tricksy lighting that has me thinking of the figures on ancient Greek vases. It’s the dancers themselves and their clean, elegant, tragedian’s gestures.

A dancer kneels and takes hold of his head. He tilts it up into the light as it turns and tilts, inches from his face, and then, in a shocking piece of trompe l’oeil — can he really be pulling his face apart?

Overflow is about our relationship to the machines that increasingly govern our lives. But there’s not a hint of regimentation here, or mechanisation. These dancers are not trying to perform machine. They’re trying to perform human.

Whitley laughs at this observation. “I guess, as far as that goes, they’re over-performing human. They’re caught up in the excitement and hyper-stimulation of their activity. Which is exactly how we interact with social media. We’re being hyperstimulated into excessive activity. Keep scrolling, keep consuming, keep engaging!”

It was an earlier piece, 2016’s Pattern Recognition, that set Whitley on the road to Overflow. “I’d decided to have the lights moving around the stage, to give us the sense of depth we’d struggled to achieve in The Measures Taken. But very few people I talked to afterwards realised or understood that our mobile stage lights were being driven by real-time tracking. They thought you could achieve what we’d achieved just through choreography. At which point a really obvious insight arrived: that interactivity is interesting, first and foremost, for the actor involved in the interaction.”

In Overflow, that the audience feels left out is no longer a technical problem: it’s the whole point of the piece. “We’re all watching things we shouldn’t be watching, somehow, through social media and the internet,” says Whitley. “That the world has become so revealed is unpleasant. It’s over-exposed us to elements of human nature that should perhaps remain private. But we’re all bound up in it. Even if we’re not doing it, we’re watching it.”

The movements of the ensemble in Overflow are the equivalent of emoji: “I was interested in how we could think of human emotions just as bits of data,” Whitley explains. In the 1980s a psychologist called Robert Plutchik proposed that there were eight basic emotions: joy, trust, fear, surprise, sadness, anticipation, anger, and disgust. “We stuck pins at random into this wheel chart he invented, choosing an emotion at random, and from that creating an action that somehow embodied or represented it. And the incentive was to do so as quickly and concisely as possible, and as soon as it’s done, choose another one. So the dancers are literally jumping at random between all these different human emotions. It’s not real communication, just an outpouring of emotional information.”

The solos are built using material drawn from each dancer’s movement diary. “The dancers made diary entries, which I then filmed, based on how they were feeling each day. They’re movement diaries: personal documents of their emotional lives, which I then chopped up and jumbled around and gave back to them as a video to learn.”

In Whitley’s vision, the digital realm isn’t George Orwell’s Big Brother, dictating our every move from above. It’s more like the fox and the cat in the Pinocchio story, egging a naive child into the worst behaviours, all in the name of independence and free expression. “Social media encourage us to act more, to feel more, to express more, because the more we do that, the more capital they can generate from our data, and the more they can understand and predict what we’re likely to do next.”

This is where the politics comes in: the way “emotion, which incidentally is the real currency of dance, is now the major currency of the digital economy”.

It’s been a job of work, packing such cerebral content into an emotional form like dance. But Whitley says it’s what keeps him working: “that sheer impossibility of pinning down ideas that otherwise exist almost entirely in words. As soon as you scratch the surface, you realise there’s a huge amount of communication always at work through the body, and drawing ideas from a more cerebral world into the physical, into the emotional, is a constant fascination. There are lifetimes of enquiry here. It’s what keeps me coming back.”

All the ghosts in the machine

Reading All the Ghosts in the Machine: Illusions of immortality in the digital age by Elaine Kasket for New Scientist, 22 June 2019

Moving first-hand interviews and unnervingly honest recollections weave through psychologist Elaine Kasket’s first mainstream book, All the Ghosts in the Machine, an anatomy of mourning in the digital age. Unravelling the book’s architecture turns up two distinct but complementary projects.

The first offers some support and practical guidance for people (and especially family members) who are blindsided by the practical and legal absurdities generated when people die in the flesh, while leaving their digital selves very much alive.

For some, the persistence of posthumous data, on Facebook, Instagram or some other corner of the social media landscape, is a source of “inestimable comfort”. For others, it brings “wracking emotional pain”. In neither case is it clear what actions are required, either to preserve, remove or manage that data. As a result, survivors usually oversee the profiles of the dead themselves – always assuming, of course, that they know their passwords. “In an effort to keep the profile ‘alive’ and to stay connected to their dead loved one,” Kasket writes, “a bereaved individual may essentially end up impersonating them.”

It used to be the family who had privileged access to the dead, to their personal effects, writings and photographs. Families are, as a consequence, disproportionately affected by the persistent failure of digital companies to distinguish between the dead and the living.

Who has control over a dead person’s legacy? What unspoken needs are being trammelled when their treasured photographs evaporate or, conversely, when their salacious post-divorce Tinder messages are disgorged? Can an individual’s digital legacy even be recognised for what it is in a medium that can’t distinguish between life and death?

Kasket’s other project is to explore this digital uncanny from a psychoanalytical perspective. Otherwise admirable 19th-century ideals of progress, hygiene and personal improvement have conned us into imagining that mourning is a more or less understood process of “letting go”. Kasket’s account of how this idea gained currency is a finely crafted comedy of intellectual errors.

In fact, grief doesn’t come in stages, and our relationships with the dead last far longer than we like to imagine. All the Ghosts in the Machine opens with an account of the author’s attempt to rehabilitate her grandmother’s bitchy reputation by posting her love letters on Instagram.

“I took a private correspondence that was not intended for me and transformed it from its original functions. I wanted it to challenge others’ ideas, and to affect their emotions… Ladies and gentlemen of today, I present to you the deep love my grandparents held for one another in 1945, ‘True romance’, heart emoticon.”

Eventually, Kasket realised that the version of her grandmother her post had created was no more truthful than the version that had existed before. And by then, of course, it was far too late.

The digital persistence of the dead is probably a good thing in these dissociated times. A culture of continuing bonds with the dead is much to be preferred over one in which we are all expected to “get over it”. But, as Kasket observes, there is much work to do, for “the digital age has made continuing bonds easier and harder all at the same time.”

Asking for it

Reading The Metric Society: On the Quantification of the Social by Steffen Mau (Polity Press) for the Times Literary Supplement, 30 April 2019 

Imagine Steffen Mau, a macrosociologist (he plays with numbers) at Humboldt University of Berlin, writing a book about information technology’s invasion of the social space. The very tools he uses are constantly interrupting him. His bibliographic software wants him to assign a star rating to every PDF he downloads. A paper-sharing site exhorts him repeatedly to improve his citation score (rather than his knowledge). In a manner that would be funny, were his underlying point not so serious, Mau records how his tools keep getting in the way of his job.

Why does Mau use these tools at all? Is he too good for a typewriter? Of course he is: the whole history of civilisation is the story of us getting as much information as possible out of our heads and onto other media. It’s why, nigh-on 5000 years ago, the Sumerians dreamt up the abacus. Thinking is expensive. How much easier to stop thinking, and rely on data records instead!

The Metric Society is not a story of errors made, or of wrong paths taken. This is a story, superbly reduced to the chill essentials of an executive summary, of how human society is getting exactly what it’s always been asking for. The last couple of years have seen more than 100 US cities pledge to use evidence and data to improve their decision-making. In the UK, “What Works Centres”, first conceived in the 1990s, are now responsible for billions in funding. The acronyms grow more bellicose, the more obscure they become: the Alliance for Useful Evidence (with funding from ESRC, Big Lottery and Nesta) champions the use of evidence in social policy and practice.

Mau describes the emergence of a society trapped in “data-driven perpetual stock-taking”, in which the new Juggernaut of auditability lays waste to creativity, production, and even simple efficiency. “The magic attraction of numbers and comparisons is simply irresistible,” Mau writes.

It’s understandable. Our first great system of digital abstraction, money, enabled a more efficient and less locally bound exchange of goods and services, and introduced a certain level of rational competition into the world of work.

But look where money has led us! Capital is not the point here. Neither is capitalism. The point is our relationship with information. Amazon’s algorithms are sucking all the localism out of the retail system, to the point where whole high streets have vanished — and entire communities with them. Amazon is in part powered by the fatuous metricisation of social variety through systems of scores, rankings, likes, stars and grades, which are (not coincidentally) the methods by which social media structures — from clownish Twitter to China’s Orwellian Social Credit System — turn qualitative differences into quantitative inequalities.

Mau leaves us thoroughly in the lurch. He’s a diagnostician, not a snake-oil salesman, and his bedside manner is distinctly chilly. Dazzled by data, which have relieved us of the need to dream and imagine, we fight for space on the foothills of known territory. The peaks our imaginations might have trod — as a society, and as a species — tower above us, ignored.

Choose-your-own-adventure

Reading The Importance of Small Decisions by Michael O’Brien, R. Alexander Bentley and William Brock for New Scientist, 13 April 2019

What if you could map all kinds of human decision-making and use it to chart society’s evolution?

This is what academics Michael O’Brien, Alexander Bentley and William Brock try to do in The Importance of Small Decisions. It is an attempt to expand on a 2014 paper, “Mapping collective behavior in the big-data era”, that they wrote in Behavioral and Brain Sciences. While contriving to be somehow both too short and rambling, it bites off more than it can chew, nearly chokes to death on the ins and outs of group selection, and coughs up its best ideas in the last 40 pages.

Draw a graph. The horizontal axis maps decisions according to how socially influenced they are. The vertical axis tells you how clear the costs and pay-offs are for each decision. Rational choices sit in the north-western quadrant of the map. To the north-east, bearded capuchins teach each other how to break into palm nuts in a charming example of social learning. Twitter storms generated by fake news swirl about the south-east.

The more choices you face, the greater the cognitive load. The authors cite economist Eric Beinhocker, who in The Origin of Wealth calculated that human choices had multiplied a hundred million-fold in the past 10,000 years. Small and insignificant decisions now consume us.

Worse, costs and pay-offs are increasingly hidden in an ocean of informational white noise, so that it is easier to follow a trend than find an expert. “Why worry about the underlying causes of global warming when we can see what tens of millions of our closest friends think?” ask the authors, building to a fine, satirical climax.

In an effort to communicate widely, the authors have, I think, left out a few too many details from their original paper. And a mid-period novel by Philip K. Dick would paint a more visceral picture of a world created by too much information. Still, there is much fun to be had reading the garrulous banter of these three extremely smart academics.

The toughest job

Parentology: Everything you wanted to know about the science of raising children but were too exhausted to ask by Dalton Conley
and
It’s Complicated: The social lives of networked teens by danah boyd
reviewed for New Scientist

 

As early as page 14 of Parentology, a neonatologist explains to Conley that his sure-to-be-premature daughter should stay in her mother’s womb as long as possible, since “each week is ten more points of IQ”. Conley was furious. “A spark of rage landed on my sleeve. An urge to grab the doctor’s head and bash it against the sharp corner of the sonogram machine seized hold… I wanted to smash his head one time for every IQ point,” he recalls.

For all its insightful, funny, fully researched, conscientiously cited, Freakonomics approach to science and statistics, what really powers Parentology is a species of loving rage. The numbers teach us a great deal about what parents cannot do, cannot change and cannot help. However, we learn something quite different and very valuable from Conley. Love, care, interest and empathy won’t change a child’s chances, but they render most of the measures discussed in this book profoundly unimportant.

By all means keep score – it’s a tough world out there, and your kids need all the help they can get. But if you measure your worth as a parent by the numbers, you’ve missed the point of the enterprise.

If parenting is about learning how little influence we have over people and events, then pity also the youths interviewed by boyd for It’s Complicated. Patronised, legally marginalised and even subject to curfew, US teenagers – to hear boyd tell it – have but one means to engage with the outside world: via the imperfect medium of the computer screen. “Obsessed” with social media, they are simply trying to recreate, for themselves and each other, a social space denied to them by anxious parents, hostile civic authorities and a mass media bent on exaggerating every conceivable outdoor danger.

Of course, a life online is not simply a life lived behind glass. There are serious problems with social media: chiefly, the obstacle they present to self-reinvention, and the ease with which bullies can weaponise them.

But boyd has little time for technological determinism. Her fieldwork with worried-well parents and their kids reveals the fault is not in our computers but in ourselves: that we scare our kids into their bedrooms, then spy on them constantly once they’re there. And she marshals a huge body of sociological evidence, anecdotal and statistical, to support this.

Parents, you’ve had your chance. Of course you blew it. Now leave the kids alone.

Forced entry

 

Michelle Terry in Privacy. Image swiped from The Times.

James Graham (@mrJamesGraham) writes plays for fringe venues that quickly transfer to huge auditoriums. This House, which began life in 2012 in the Cottesloe Theatre in London, sold out the flagship Olivier when it moved. Will something similar happen to Privacy, James’s almost-autobiographical journey through the internet? Probably. The version I saw at London’s Donmar Warehouse was witty, very accessible (ideal for school trips and citizenship classes), and turned the internet in general – and social media in particular – into a sort of politically chilling stage magic act. Right now the core of the piece – the disintegration of a personality when it’s continually second-guessed by all-seeing but unthinking machines – lies buried under a lot of stage business. (Much is made of a super-secret dramatic reversal that does not work at all.) But James means to keep the play abreast of current events, so there’ll be plenty of time to iron out the wrinkles. Here’s the booking page, if you’re tempted: Privacy deserves a public.