“Intelligence is the wrong metaphor for what we’ve built”

Travelling From “Apple” to “Anomaly”, Trevor Paglen’s installation at the Barbican’s Curve gallery in London, for New Scientist, 9 October 2019

A COUPLE of days before the opening of Trevor Paglen’s latest photographic installation, From “Apple” to “Anomaly”, a related project by the artist found itself splashed all over the papers.

ImageNet Roulette is an online collaboration with artificial intelligence researcher Kate Crawford at New York University. The website invites you to provide an image of your face. An algorithm will then compare your face against a database called ImageNet and assign you to one or two of its 21,000 categories.

ImageNet has become one of the most influential visual data sets in the fields of deep learning and AI. Its creators at Stanford, Princeton and other US universities harvested more than 14 million photographs from photo upload sites and other internet sources, then had them manually categorised by some 25,000 workers on Amazon’s crowdsourcing labour site Mechanical Turk. ImageNet is widely used as a training data set for image-based AI systems and is the secret sauce within many key applications, from phone filters to medical imaging, biometrics and autonomous cars.
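To see how such a system consumes the data set, here is a minimal sketch, in Python, of classifying a photograph with a model pre-trained on ImageNet, much as ImageNet Roulette does. It assumes PyTorch and torchvision are installed, and a hypothetical input file face.jpg; note that off-the-shelf models are trained on the 1,000-category competition subset of ImageNet, not on all 21,000 categories.

    import torch
    from PIL import Image
    from torchvision import models

    # Load a ResNet-50 pre-trained on the 1,000-class ImageNet subset
    weights = models.ResNet50_Weights.IMAGENET1K_V2
    model = models.resnet50(weights=weights)
    model.eval()

    preprocess = weights.transforms()  # resize, crop and normalise as in training

    img = Image.open("face.jpg").convert("RGB")  # hypothetical input photo
    batch = preprocess(img).unsqueeze(0)         # add a batch dimension

    with torch.no_grad():
        probs = model(batch).softmax(dim=1)

    # Report the two likeliest labels, as ImageNet Roulette does
    top = probs.topk(2)
    for p, idx in zip(top.values[0], top.indices[0]):
        print(weights.meta["categories"][idx], round(p.item(), 3))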

According to ImageNet Roulette, I look like a “political scientist” and a “historian”. Both descriptions are sort-of-accurate and highly flattering. I was impressed. Mind you, I’m a white man. We are all over the internet, and the neural net had plenty of “my sort” to go on.

Spare a thought for Guardian journalist Julia Carrie Wong, however. According to ImageNet Roulette she was a “gook” and a “slant-eye”. In its attempt to identify Wong’s “sort”, ImageNet Roulette had innocently turned up some racist labels.

From “Apple” to “Anomaly” also takes ImageNet to task. Paglen took a selection of 35,000 photos from ImageNet’s archive, printed them out and stuck them to the wall of the Curve gallery at the Barbican in London in a 50-metre-long collage.

The entry point is images labelled “apple” – a category that, unsurprisingly, yields mostly pictures of apples – but the piece then works through increasingly abstract and controversial categories such as “sister” and “racist”. (Among the “racists” are Roger Moore and Barack Obama; my guess is that being over-represented in a data set carries its own set of risks.) Paglen explains: “We can all look at an apple and call it by its name. An apple is an apple. But what about a noun like ‘sister’, which is a relational concept? What might seem like a simple idea – categorising objects or naming pictures – quickly becomes a process of judgement.”

The final category in the show is “anomaly”. There is, of course, no such thing as an anomaly in nature. Anomalies are simply things that don’t conform to the classification systems we set up.

Halfway along the vast, gallery-spanning collage of photographs, the slew of predominantly natural and environmental images peters out, replaced by human faces. Discreet labels here and there indicate which of ImageNet’s categories are being illustrated. At one point of transition, the group labelled “bottom feeder” consists entirely of headshots of media figures – there isn’t one aquatic creature in evidence.

Scanning From “Apple” to “Anomaly” gives gallery-goers many such unexpected, disconcerting insights into the way language parcels up the world. Sometimes, these threaten to undermine the piece itself. Passing seamlessly from “android” to “minibar”, one might suppose that we are passing from category to category according to the logic of a visual algorithm. After all, a metal man and a minibar are not so dissimilar. At other times – crossing from “coffee” to “poultry”, for example – the division between categories is sharp, leaving me unsure how we moved from one to another, and whose decision it was. Was some algorithm making an obscure connection between hens and beans?

Well, no: the categories were chosen and arranged by Paglen. Only the choice of images within each category was made by a trained neural network.

This set me wondering whether the ImageNet data set wasn’t simply being used as a foil for Paglen’s sense of mischief. Why else would a cheerleader dominate the “saboteur” category? And do all “divorce lawyers” really wear red ties?

This is a problem for art built around artificial intelligence: it can be hard to tell where the algorithm ends and the artist begins. Mind you, you could say the same about the entire AI field. “A lot of the ideology around AI, and what people imagine it can do, has to do with that simple word ‘intelligence’,” says Paglen, a US artist now based in Berlin, whose interest in computer vision and surveillance culture sprang from his academic career as a geographer. “Intelligence is the wrong metaphor for what we’ve built, but it’s one we’ve inherited from the 1960s.”

Paglen fears the way the word intelligence ascribes a kind of superhuman agency and infallibility to what are in essence giant statistical engines. “This is terribly dangerous,” he says, “and also very convenient for people trying to raise money to build all sorts of shoddy, ill-advised applications with it.”

Asked what concerns him more, intelligent machines or the people who use them, Paglen answers: “I worry about the people who make money from them. Artificial intelligence is not about making computers smart. It’s about extracting value from data, from images, from patterns of life. The point is not seeing. The point is to make money or to amplify power.”

It is a point by no means lost on a creator of ImageNet itself, Fei-Fei Li at Stanford University in California, who, when I spoke to Paglen, was in London to celebrate ImageNet’s 10th birthday at the Photographers’ Gallery. Far from being the face of predatory surveillance capitalism, Li leads efforts to correct the malevolent biases lurking in her creation. Wong, incidentally, won’t get that racist slur again, following ImageNet’s announcement that it was removing more than half of the 1.2 million pictures of people in its collection.

Paglen is sympathetic to the challenge Li faces. “We’re not normally aware of the very narrow parameters that are built into computer vision and artificial intelligence systems,” he says. His job as artist-cum-investigative reporter is, he says, to help reveal the failures and biases and forms of politics built into such systems.

Some might feel that such work feeds an easy and unexamined public paranoia. Peter Skomoroch, former principal data scientist at LinkedIn, thinks so. He calls ImageNet Roulette junk science, and wrote on Twitter: “Intentionally building a broken demo that gives bad results for shock value reminds me of Edison’s war of the currents.”

Paglen believes, on the contrary, that we have a long way to go before we are paranoid enough about the world we are creating.

Fifty years ago it was very difficult for marketing companies to get information about what kinds of television shows you watched, what drinking habits you might have or how you drove your car. Now giant companies are trying to extract value from that information. “I think,” says Paglen, “that we’re going through something akin to England and Wales’s Inclosure Acts, when what had been de facto public spaces were fenced off by the state and by capital.”

Asking for it

Reading The Metric Society: On the Quantification of the Social by Steffen Mau (Polity Press) for the Times Literary Supplement, 30 April 2019 

Imagine Steffen Mau, a macrosociologist (he plays with numbers) at Humboldt University of Berlin, writing a book about information technology’s invasion of the social space. The very tools he uses are constantly interrupting him. His bibliographic software wants him to assign a star rating to every PDF he downloads. A paper-sharing site exhorts him repeatedly to improve his citation score (rather than his knowledge). In a manner that would be funny, were his underlying point not so serious, Mau records how his tools keep getting in the way of his job.

Why does Mau use these tools at all? Is he too good for a typewriter? Of course he is: the whole history of civilisation is the story of us getting as much information as possible out of our heads and onto other media. It’s why, nigh-on 5,000 years ago, the Sumerians dreamt up the abacus. Thinking is expensive. How much easier to stop thinking, and rely on data records instead!

The Metric Society is not a story of errors made, or of wrong paths taken. This is a story, superbly reduced to the chill essentials of an executive summary, of how human society is getting exactly what it’s always been asking for. The last couple of years have seen more than 100 US cities pledge to use evidence and data to improve their decision-making. In the UK, “What Works Centres”, first conceived in the 1990s, are now responsible for billions in funding. The acronyms grow more bellicose, the more obscure they become. The Alliance for Useful Evidence (with funding from ESRC, Big Lottery and Nesta), for instance, champions the use of evidence in social policy and practice.

Mau describes the emergence of a society trapped in “data-driven perpetual stock-taking”, in which the new juggernaut of auditability lays waste to creativity, production, and even simple efficiency. “The magic attraction of numbers and comparisons is simply irresistible,” Mau writes.

It’s understandable. Our first great system of digital abstraction, money, enabled a more efficient and less locally bound exchange of goods and services, and introduced a certain level of rational competition into the world of work.

But look where money has led us! Capital is not the point here. Neither is capitalism. The point is our relationship with information. Amazon’s algorithms are sucking all the localism out of the retail system, to the point where whole high streets have vanished — and entire communities with them. Amazon is in part powered by the fatuous metricisation of social variety through systems of scores, rankings, likes, stars and grades, which are (not coincidentally) the methods by which social media structures — from clownish Twitter to China’s Orwellian Social Credit System — turn qualitative differences into quantitative inequalities.

Mau leaves us thoroughly in the lurch. He’s a diagnostician, not a snake-oil salesman, and his bedside manner is distinctly chilly. Dazzled by data, which have relieved us of the need to dream and imagine, we fight for space on the foothills of known territory. The peaks our imaginations might have trod — as a society, and as a species — tower above us, ignored.

Hot photography

Previewing an exhibition of photographs by Richard Mosse for New Scientist, 11 February 2017

Irish photographer Richard Mosse has come up with a novel way to inspire compassion for refugees. He presents them as drones might see them – as detailed heat maps, often shorn of expression, skin tone, and even clues to age and sex. Mosse’s subjects, captured in the Middle East, North Africa and Europe, don’t look back at us: the infrared camera renders their eyes as uniform black spaces.

Mosse has made a career out of repurposing photographic kit meant for military use. The images here show his subjects as seen, mostly at night, by a super-telephoto device designed for border and battlefield surveillance. Able to zoom in from 6 kilometres away, the camera anonymises them, making them strangely faceless even while their sweat, breath and sometimes blood circulation patterns are visible.

The results are closer to the nightmarish paintings of Hieronymus Bosch than to the work of a documentary photographer. Making sense of them requires imagination and empathy: after all, this is how a smart weapon might see us.

Mosse came across his heat-mapping camera via a friend who worked on the BBC series Planet Earth. Legally classified as an advanced weapons system, the device is unwieldy and – with no user interface or handbook – difficult to use. But, working with cinematographer Trevor Tweeten, Mosse has managed to use it to make a 52-minute video, Incoming, which will wrap itself around visitors to the Curve gallery at the Barbican arts centre in London from 15 February until 23 April.