One of those noodly problems

Reading The Afterlife of Data by Carl Öhman for the Spectator

They didn’t call Diogenes “the Cynic” for nothing. He lived to shock the (ancient Greek) world. When I’m dead, he said, just toss my body over the city walls to feed the dogs. The bit of me that I call “I” won’t be around to care.

The revulsion we feel at this idea tells us something important: that the dead can be wronged. Diogenes may not care what happens to his corpse, but we do. And doing right by the dead is a job of work. Some corpses are reduced to ash, some are buried, and some are fed to vultures. In each case the survivors all feel, rightly, that they have treated their loved ones’ remains with respect.

What should we do with our digital remains?

This sounds like one of those noodly problems that keep digital ethicists like Öhman in grant money — but some of the stories in The Afterlife of Data are sure to make the most sceptical reader stop and think. There’s something compelling, and undeniably moving, in one teenager’s account of how, ten years after losing his father, he found they could still play together; at least, he could compete against his dad’s last outing on an old Xbox racing game.

Öhman is not spinning ghost stories here. He’s not interested in digital afterlives. He’s interested in remains, and in emerging technologies that, from the digital data we inadvertently leave behind, fashion our artificially intelligent simulacra. (You may think this is science fiction, but Microsoft doesn’t, and has already taken out several patents.)

This rapidly approaching future, Öhman argues, seems uncanny only because death itself is uncanny. Why should a chatty AI simulacrum prove any more transgressive than, say, a photograph of your lost love, given pride of place on the mantelpiece? We got used to the one; in time we may well get used to the other.

What should exercise us is who owns the data. As Öhman argues, ‘if we leave the management of our collective digital past solely in the hands of industry, the question “What should we do with the data of the dead?” becomes solely a matter of “What parts of the past can we make money on?”’

The trouble with a career in digital ethics is that however imaginative and insightful you get, you inevitably end up playing second fiddle to some early episode of Charlie Brooker’s TV series Black Mirror. The one entitled “Be Right Back”, in which a dead lover returns in robot form to market upgrades of itself to the grieving widow, stands waiting at the end of almost every road Öhman travels here.

Öhman reminds us that the digital is a human realm, and one over which we can and must exert our values. Unless we actively delete them (in a sort of digital cremation, I suppose) our digital dead are not going away, and we are going to have to accommodate them somehow.

A more modish, less humane writer would make the most of the fact that recording has become the norm, so that, as Öhman puts it, “society now takes place in a domain previously reserved for the dead, namely the archive.” (And, to be fair, Öhman does have a lot of fun with the idea that by 2070, Facebook’s dead will outnumber its living.)

Ultimately, though, Öhman draws readers through the digital uncanny to a place of responsibility. Digital remains are not just a representation of the dead, he says, “they are the dead, an informational corpse constitutive of a personal identity.”

Öhman’s lucid, closely argued foray into the world of posthumous data is underpinned by this sensible definition of what constitutes a person: “A person,” he says, “is the narrative object that we refer to when speaking of someone (including ourselves) in the third person. Persons extend beyond the selves that generate them.” If I disparage you behind your back, I’m doing you a wrong, even though you don’t know about it. If I disparage you after you’re dead, I’m still doing you wrong, though you’re no longer around to be hurt.

Our job is to take ownership of each other’s digital remains and treat them with human dignity. The model Öhman holds up for us to emulate is the Bohemian author and composer Max Brod, who had the unenviable job of deciding what to do with manuscripts left behind by his friend Franz Kafka, who wanted him to burn them. In the end Brod decided that the interests of “Kafka”, the informational body constitutive of a person, overrode (barely) the interests of Franz his no-longer-living friend.

What to do with our digital remains? Öhman’s excellent reply treats this challenge with urgency, sanity and, best of all, compassion. Max Brod’s decision wasn’t and isn’t obvious, and really, the best you can do in these situations is to make the error you and others can best live with.

“Intelligence is the wrong metaphor for what we’ve built”

Travelling From “Apple” to “Anomaly”, Trevor Paglen’s installation at the Barbican’s Curve gallery in London, for New Scientist, 9 October 2019

A COUPLE of days before the opening of Trevor Paglen’s latest photographic installation, From “Apple” to “Anomaly”, a related project by the artist found itself splashed all over the papers.

ImageNet Roulette is an online collaboration with artificial intelligence researcher Kate Crawford at New York University. The website invites you to provide an image of your face. An algorithm will then compare your face against a database called ImageNet and assign you to one or two of its 21,000 categories.

ImageNet has become one of the most influential visual data sets in the fields of deep learning and AI. Its creators at Stanford, Princeton and other US universities harvested more than 14 million photographs from photo upload sites and other internet sources, then had them manually categorised by some 25,000 workers on Amazon’s crowdsourcing labour site Mechanical Turk. ImageNet is widely used as a training data set for image-based AI systems and is the secret sauce within many key applications, from phone filters to medical imaging, biometrics and autonomous cars.

According to ImageNet Roulette, I look like a “political scientist” and a “historian”. Both descriptions are sort-of-accurate and highly flattering. I was impressed. Mind you, I’m a white man. We are all over the internet, and the neural net had plenty of “my sort” to go on.

Spare a thought for Guardian journalist Julia Carrie Wong, however. According to ImageNet Roulette she was a “gook” and a “slant-eye”. In its attempt to identify Wong’s “sort”, ImageNet Roulette had innocently turned up some racist labels.

From “Apple” to “Anomaly” also takes ImageNet to task. Paglen took a selection of 35,000 photos from ImageNet’s archive, printed them out and stuck them to the wall of the Curve gallery at the Barbican in London in a 50-metre-long collage.

The entry point is images labelled “apple” – a category that, unsurprisingly, yields mostly pictures of apples – but the piece then works through increasingly abstract and controversial categories such as “sister” and “racist”. (Among the “racists” are Roger Moore and Barack Obama; my guess is that being over-represented in a data set carries its own set of risks.) Paglen explains: “We can all look at an apple and call it by its name. An apple is an apple. But what about a noun like ‘sister’, which is a relational concept? What might seem like a simple idea – categorising objects or naming pictures – quickly becomes a process of judgement.”

The final category in the show is “anomaly”. There is, of course, no such thing as an anomaly in nature. Anomalies are simply things that don’t conform to the classification systems we set up.

Halfway along the vast, gallery-spanning collage of photographs, the slew of predominantly natural and environmental images peters out, replaced by human faces. Discreet labels here and there indicate which of ImageNet’s categories are being illustrated. At one point of transition, the group labelled “bottom feeder” consists entirely of headshots of media figures – there isn’t one aquatic creature in evidence.

Scanning From “Apple” to “Anomaly” gives gallery-goers many such unexpected, disconcerting insights into the way language parcels up the world. Sometimes, these threaten to undermine the piece itself. Passing seamlessly from “android” to “minibar”, one might suppose that we are passing from category to category according to the logic of a visual algorithm. After all, a metal man and a minibar are not so dissimilar. At other times – crossing from “coffee” to “poultry”, for example – the division between categories is sharp, leaving me unsure how we moved from one to another, and whose decision it was. Was some algorithm making an obscure connection between hens and beans?

Well, no: the categories were chosen and arranged by Paglen. Only the choice of images within each category was made by a trained neural network.

This set me wondering whether the ImageNet data set wasn’t simply being used as a foil for Paglen’s sense of mischief. Why else would a cheerleader dominate the “saboteur” category? And do all “divorce lawyers” really wear red ties?

This is a problem for art built around artificial intelligence: it can be hard to tell where the algorithm ends and the artist begins. Mind you, you could say the same about the entire AI field. “A lot of the ideology around AI, and what people imagine it can do, has to do with that simple word ‘intelligence’,” says Paglen, a US artist now based in Berlin, whose interest in computer vision and surveillance culture sprang from his academic career as a geographer. “Intelligence is the wrong metaphor for what we’ve built, but it’s one we’ve inherited from the 1960s.”

Paglen fears the way the word intelligence implies some kind of superhuman agency and infallibility to what are in essence giant statistical engines. “This is terribly dangerous,” he says, “and also very convenient for people trying to raise money to build all sorts of shoddy, ill-advised applications with it.”

Asked what concerns him more, intelligent machines or the people who use them, Paglen answers: “I worry about the people who make money from them. Artificial intelligence is not about making computers smart. It’s about extracting value from data, from images, from patterns of life. The point is not seeing. The point is to make money or to amplify power.”

It is a point by no means lost on a creator of ImageNet itself, Fei-Fei Li at Stanford University in California, who, when I spoke to Paglen, was in London to celebrate ImageNet’s 10th birthday at the Photographers’ Gallery. Far from being the face of predatory surveillance capitalism, Li leads efforts to correct the malevolent biases lurking in her creation. Wong, incidentally, won’t get that racist slur again, following ImageNet’s announcement that it was removing more than half of the 1.2 million pictures of people in its collection.

Paglen is sympathetic to the challenge Li faces. “We’re not normally aware of the very narrow parameters that are built into computer vision and artificial intelligence systems,” he says. His job as artist-cum-investigative reporter is, he says, to help reveal the failures and biases and forms of politics built into such systems.

Some might feel that such work feeds an easy and unexamined public paranoia. Peter Skomoroch, former principal data scientist at LinkedIn, thinks so. He calls ImageNet Roulette junk science, and wrote on Twitter: “Intentionally building a broken demo that gives bad results for shock value reminds me of Edison’s war of the currents.”

Paglen believes, on the contrary, that we have a long way to go before we are paranoid enough about the world we are creating.

Fifty years ago it was very difficult for marketing companies to get information about what kind of television shows you watched, what kinds of drinking habits you might have or how you drove your car. Now giant companies are trying to extract value from that information. “I think,” says Paglen, “that we’re going through something akin to England and Wales’s Inclosure Acts, when what had been de facto public spaces were fenced off by the state and by capital.”

The disaster of the cloud itself

Tung-Hui Hu’s A Prehistory of the Cloud reviewed for New Scientist

LAST week, to protect my photographs of a much-missed girlfriend, I told all my back-up services to talk to each other. My snaps have since been multiplying like the runaway brooms in Disney’s Fantasia, and I have spent days trying to delete them.

Apart from being an idiot, I got into this fix because my data has been placed at one invisible but crucial remove in the cloud, zipping between energy-hungry servers scattered across the globe at the behest of algorithms I do not understand.

By duplicating our digital media to different servers, we insure against loss. The more complex and interwoven these back-up systems become, though, the more insidious our losses. Sync errors swallow documents whole. In the hands of most of us, JPEGs degrade a tiny bit each time they are saved. And all formats fall out of fashion eventually.

“Thus disaster recovery in the cloud often protects us against the disaster of the cloud itself,” says Tung-Hui Hu, a former network engineer whose A Prehistory of the Cloud poses some hard questions of our digital desires. Why are our commercial data centres equipped with iris and palm recognition systems? Why is Stockholm’s most highly publicised data centre housed in a bunker originally built to defend against nuclear attack?

Hu identifies two impulses: “First, a paranoid desire to pre-empt the enemy by maintaining vigilance in the face of constant threat, and second, a melancholic fantasy of surviving the eventual disaster by entombing data inside highly secured data vaults.”

The realm of the cloud does not countenance loss, but when we touch it, we corrupt it. The word for such a system – a memory that preserves, encrypts and mystifies a lost love-object – is melancholy. Hu’s is a deeply melancholy book and for that reason, a valuable one.