Nuanced and terrifying at the same time

Reading The Drone Age by Michael J. Boyle for New Scientist, 30 September 2020

Machines are only as good as the people who use them. Machines are neutral — just a faster, more efficient way of doing something that we always intended to do. That, anyway, is the argument often wielded by defenders of technology.

Michael Boyle, a professor of political science at La Salle University in Philadelphia, isn’t buying: “the technology itself structures choices and induces changes in decision-making over time,” he explains, as he concludes his concise, comprehensive overview of the world the drone made. In everything from commerce to warfare, spycraft to disaster relief, our menu of choices “has been altered or constrained by drone technology itself”.

Boyle manages to be nuanced and terrifying at the same time. At one moment he’s pointing out the formidable practical obstacles in the way of anyone launching a major terrorist drone attack. In the next, he’s explaining why political assassinations by drone are just around the corner. Turn a page setting out the moral, operational and legal constraints keenly felt by upstanding US military drone pilots, and you’re confronted by their shadowy handlers in government, who operate with virtually no oversight.

Though grounded in just the right level of technical detail, The Drone Age describes, not so much the machines themselves, but the kind of thinking they’ve ushered in: an approach to problems that no longer distinguishes between peace and war.

In some ways this is a good thing. Assuming that war is inevitable, what’s not to welcome about a style of warfare that involves working through a kill list, rather than exterminating a significant proportion of the enemy’s population?
Well, two things. For US readers, there’s the way a few careful drone strikes proliferated under Obama and especially under Trump into a global counter-insurgency air platform. And for all of us, there’s the way peacetime living is affected, too. “It is hard to feel like a human… when reduced to a pixelated dot under the gaze of a drone,” Boyle writes. If the pool of information gathered about us expands, but not the level of understanding or sympathy for us, where then is the positive for human society?

Boyle brings proper philosophical thinking to our relationship with technology. He’s particularly indebted to the French philosopher Jacques Ellul, whose The Technological Society (1964) transformed the way we think about machines. Ellul argued that when we apply technology to a problem, we adopt a mode of thinking that emphasizes efficiency and instrumental rationality, but also dehumanizes the problem.
Applying this lesson to drone technology, Boyle writes: “Instead of asking why we are using aircraft for a task in the first place, we tend to debate instead whether the drone is better than the manned alternative.”

This blinkered thinking, on the part of their operators, explains why drone activities almost invariably alienate the very people they are meant to benefit: non-combatants, people caught up in natural disasters, the relatively affluent denizens of major cities. Indeed, the drone’s ability to intimidate seems on balance to outweigh every other capability.

The UN has been known to fly unarmed Falco surveillance drones low to the ground to deter rebel groups from gathering. If you adopt the kind of thinking Ellul described, then this must be a good thing — a means of scattering hostiles, achieved efficiently and safely. In reality, there’s no earthly reason to suppose violence has been avoided: only redistributed (and let’s not forget how Al Qaeda, decimated by constant drone strikes, has reinvented itself as a global internet brand).

Boyle warns us at the start that different models of drone vary so substantially “that they hardly look like the same technology”. And yet The Drone Age keeps this heterogeneous flock of disruptive technologies together long enough to give it real historical and intellectual coherence. If you read one book about drones, this is the one. But it is just as valuable about surveillance, or the rise of information warfare, or the way the best intentions can turn the world we knew on its head.

Over-performing human

Talking to choreographer Alexander Whitley for the Financial Times, 28 February 2020

On a dim and empty stage, six masked black-clad dancers, half-visible, their limbs edged in light, run through attitude after attitude, emotion after emotion. Above the dancers, a long tube of white light slowly rises, falls, tips and circles, drawing the dancers’ limbs and faces towards itself like a magnet. Under its variable cold light, movements become more expressive, more laden with emotion, more violent.

Alexander Whitley, formerly of the Royal Ballet School and the Birmingham Royal Ballet, is six years into a project to expand the staging of dance with new media. He has collaborated with filmmakers, designers, digital artists and composers. Most of all, he has played games with light.

The experiments began with The Measures Taken, in 2014. Whitley used motion-tracking technology to project visuals that interacted with the performers’ movements. Then, dissatisfied with the way the projections obscured the dancers, in 2018 he used haze and narrowly focused bars of light to create, for Strange Stranger, a virtual “maze” in which his dancers found themselves alternately liberated and constrained.

At 70 minutes, Overflow, commissioned by Sadler’s Wells Theatre, represents a massive leap in ambition. With several long-time collaborators — in particular the Dutch artist-designers Children of the Light — Whitley has worked out how to reveal, to an audience sat just a few feet away, exactly what he wants them to see.

Whitley is busy nursing Overflow up to speed in time for its spring tour. The company begin with a night at the Lowry in Salford on 18 March, before performing at Sadler’s Wells on 17 and 18 April.

Overflow, nearly two years in the making, has consumed money as well as time. The company is performing at Stereolux in Nantes in April and will need more overseas bookings if it is to flourish. “There’s serious doubt about the status of the UK and UK touring companies now,” says Whitley (snapping at my cheaply dangled Brexit bait); “I hope there’s enough common will to build relationships in spite of the political situation.”

It is easy to talk politics with Whitley (he is very well read), but his dances are anything but mere vehicles for ideas. And while Overflow is a political piece by any measure — a survey of our spiritual condition under surveillance capitalism, for heaven’s sake — its effects are strikingly classical. It’s not just the tricksy lighting that has me thinking of the figures on ancient Greek vases. It’s the dancers themselves and their clean, elegant, tragedian’s gestures.

A dancer kneels, and takes hold of his head. He tilts it up into the light as it turns and tilts, inches from his face, and, in a shocking piece of trompe l’oeil — can he really be pulling his face apart?

Overflow is about our relationship to the machines that increasingly govern our lives. But there’s not a hint of regimentation here, or mechanisation. These dancers are not trying to perform machine. They’re trying to perform human.

Whitley laughs at this observation. “I guess, as far as that goes, they’re over-performing human. They’re caught up in the excitement and hyper-stimulation of their activity. Which is exactly how we interact with social media. We’re being hyperstimulated into excessive activity. Keep scrolling, keep consuming, keep engaging!”

It was an earlier piece, 2016’s Pattern Recognition, that set Whitley on the road to Overflow. “I’d decided to have the lights moving around the stage, to give us the sense of depth we’d struggled to achieve in The Measures Taken. But very few people I talked to afterwards realised or understood that our mobile stage lights were being driven by real-time tracking. They thought you could achieve what we’d achieved just through choreography. At which point a really obvious insight arrived: that interactivity is interesting, first and foremost, for the actor involved in the interaction.”

In Overflow, that the audience feels left out is no longer a technical problem: it’s the whole point of the piece. “We’re all watching things we shouldn’t be watching, somehow, through social media and the internet,” says Whitley. “That the world has become so revealed is unpleasant. It’s over-exposed us to elements of human nature that should perhaps remain private. But we’re all bound up in it. Even if we’re not doing it, we’re watching it.”

The movements of the ensemble in Overflow are the equivalent of emoji: “I was interested in how we could think of human emotions just as bits of data,” Whitley explains. In the 1980s a psychologist called Robert Plutchik stated that there were eight basic emotions: joy, trust, fear, surprise, sadness, anticipation, anger, and disgust. “We stuck pins at random into this wheel chart he invented, choosing an emotion at random, and from that creating an action that somehow embodied or represented it. And the incentive was to do so as quickly and concisely as possible, and as soon as it’s done, choose another one. So the dancers are literally jumping at random between all these different human emotions. It’s not real communication, just an outpouring of emotional information.”

The solos are built using material drawn from each dancer’s movement diary. “The dancers made diary entries, which I then filmed, based on how they were feeling each day. They’re movement diaries: personal documents of their emotional lives, which I then chopped up and jumbled around and gave back to them as a video to learn.”

In Whitley’s vision, the digital realm isn’t George Orwell’s Big Brother, dictating our every move from above. It’s more like the fox and the cat in the Pinocchio story, egging a naive child into the worst behaviours, all in the name of independence and free expression. “Social media encourage us to act more, to feel more, to express more, because the more we do that, the more capital they can generate from our data, and the more they can understand and predict what we’re likely to do next.”

This is where the politics comes in: the way “emotion, which incidentally is the real currency of dance, is now the major currency of the digital economy”.

It’s been a job of work, packing such cerebral content into an emotional form like dance. But Whitley says it’s what keeps him working: “that sheer impossibility of pinning down ideas that otherwise exist almost entirely in words. As soon as you scratch the surface, you realise there’s a huge amount of communication always at work through the body, and drawing ideas from a more cerebral world into the physical, into the emotional, is a constant fascination. There are lifetimes of enquiry here. It’s what keeps me coming back.”

“Intelligence is the wrong metaphor for what we’ve built”

Travelling From Apple to Anomaly, Trevor Paglen’s installation at the Barbican’s Curve gallery in London, for New Scientist, 9 October 2019

A couple of days before the opening of Trevor Paglen’s latest photographic installation, From “Apple” to “Anomaly”, a related project by the artist found itself splashed all over the papers.

ImageNet Roulette is an online collaboration with artificial intelligence researcher Kate Crawford at New York University. The website invites you to provide an image of your face. An algorithm will then compare your face against a database called ImageNet and assign you to one or two of its 21,000 categories.

ImageNet has become one of the most influential visual data sets in the fields of deep learning and AI. Its creators at Stanford, Princeton and other US universities harvested more than 14 million photographs from photo upload sites and other internet sources, then had them manually categorised by some 25,000 workers on Amazon’s crowdsourcing labour site Mechanical Turk. ImageNet is widely used as a training data set for image-based AI systems and is the secret sauce within many key applications, from phone filters to medical imaging, biometrics and autonomous cars.

According to ImageNet Roulette, I look like a “political scientist” and a “historian”. Both descriptions are sort-of-accurate and highly flattering. I was impressed. Mind you, I’m a white man. We are all over the internet, and the neural net had plenty of “my sort” to go on.

Spare a thought for Guardian journalist Julia Carrie Wong, however. According to ImageNet Roulette she was a “gook” and a “slant-eye”. In its attempt to identify Wong’s “sort”, ImageNet Roulette had innocently turned up some racist labels.

From “Apple” to “Anomaly” also takes ImageNet to task. Paglen took a selection of 35,000 photos from ImageNet’s archive, printed them out and stuck them to the wall of the Curve gallery at the Barbican in London in a 50-metre-long collage.

The entry point is images labelled “apple” – a category that, unsurprisingly, yields mostly pictures of apples – but the piece then works through increasingly abstract and controversial categories such as “sister” and “racist”. (Among the “racists” are Roger Moore and Barack Obama; my guess is that being over-represented in a data set carries its own set of risks.) Paglen explains: “We can all look at an apple and call it by its name. An apple is an apple. But what about a noun like ‘sister’, which is a relational concept? What might seem like a simple idea – categorising objects or naming pictures – quickly becomes a process of judgement.”

The final category in the show is “anomaly”. There is, of course, no such thing as an anomaly in nature. Anomalies are simply things that don’t conform to the classification systems we set up.

Halfway along the vast, gallery-spanning collage of photographs, the slew of predominantly natural and environmental images peters out, replaced by human faces. Discreet labels here and there indicate which of ImageNet’s categories are being illustrated. At one point of transition, the group labelled “bottom feeder” consists entirely of headshots of media figures – there isn’t one aquatic creature in evidence.

Scanning From “Apple” to “Anomaly” gives gallery-goers many such unexpected, disconcerting insights into the way language parcels up the world. Sometimes, these threaten to undermine the piece itself. Passing seamlessly from “android” to “minibar”, one might suppose that we are passing from category to category according to the logic of a visual algorithm. After all, a metal man and a minibar are not so dissimilar. At other times – crossing from “coffee” to “poultry”, for example – the division between categories is sharp, leaving me unsure how we moved from one to another, and whose decision it was. Was some algorithm making an obscure connection between hens and beans?

Well, no: the categories were chosen and arranged by Paglen. Only the choice of images within each category was made by a trained neural network.

This set me wondering whether the ImageNet data set wasn’t simply being used as a foil for Paglen’s sense of mischief. Why else would a cheerleader dominate the “saboteur” category? And do all “divorce lawyers” really wear red ties?

This is a problem for art built around artificial intelligence: it can be hard to tell where the algorithm ends and the artist begins. Mind you, you could say the same about the entire AI field. “A lot of the ideology around AI, and what people imagine it can do, has to do with that simple word ‘intelligence’,” says Paglen, a US artist now based in Berlin, whose interest in computer vision and surveillance culture sprang from his academic career as a geographer. “Intelligence is the wrong metaphor for what we’ve built, but it’s one we’ve inherited from the 1960s.”

Paglen fears the way the word intelligence implies some kind of superhuman agency and infallibility to what are in essence giant statistical engines. “This is terribly dangerous,” he says, “and also very convenient for people trying to raise money to build all sorts of shoddy, ill-advised applications with it.”

Asked what concerns him more, intelligent machines or the people who use them, Paglen answers: “I worry about the people who make money from them. Artificial intelligence is not about making computers smart. It’s about extracting value from data, from images, from patterns of life. The point is not seeing. The point is to make money or to amplify power.”

It is a point by no means lost on a creator of ImageNet itself, Fei-Fei Li at Stanford University in California, who, when I spoke to Paglen, was in London to celebrate ImageNet’s 10th birthday at the Photographers’ Gallery. Far from being the face of predatory surveillance capitalism, Li leads efforts to correct the malevolent biases lurking in her creation. Wong, incidentally, won’t get that racist slur again, following ImageNet’s announcement that it was removing more than half of the 1.2 million pictures of people in its collection.

Paglen is sympathetic to the challenge Li faces. “We’re not normally aware of the very narrow parameters that are built into computer vision and artificial intelligence systems,” he says. His job as artist-cum-investigative reporter is, he says, to help reveal the failures and biases and forms of politics built into such systems.

Some might feel that such work feeds an easy and unexamined public paranoia. Peter Skomoroch, former principal data scientist at LinkedIn, thinks so. He calls ImageNet Roulette junk science, and wrote on Twitter: “Intentionally building a broken demo that gives bad results for shock value reminds me of Edison’s war of the currents.”

Paglen believes, on the contrary, that we have a long way to go before we are paranoid enough about the world we are creating.

Fifty years ago it was very difficult for marketing companies to get information about what kind of television shows you watched, what kinds of drinking habits you might have or how you drove your car. Now giant companies are trying to extract value from that information. “I think,” says Paglen, “that we’re going through something akin to England and Wales’s Inclosure Acts, when what had been de facto public spaces were fenced off by the state and by capital.”

Asking for it

Reading The Metric Society: On the Quantification of the Social by Steffen Mau (Polity Press) for the Times Literary Supplement, 30 April 2019 

Imagine Steffen Mau, a macrosociologist (he plays with numbers) at Humboldt University of Berlin, writing a book about information technology’s invasion of the social space. The very tools he uses are constantly interrupting him. His bibliographic software wants him to assign a star rating to every PDF he downloads. A paper-sharing site exhorts him repeatedly to improve his citation score (rather than his knowledge). In a manner that would be funny, were his underlying point not so serious, Mau records how his tools keep getting in the way of his job.

Why does Mau use these tools at all? Is he too good for a typewriter? Of course he is: the whole history of civilisation is the story of us getting as much information as possible out of our heads and onto other media. It’s why, nigh-on 5000 years ago, the Sumerians dreamt up the abacus. Thinking is expensive. How much easier to stop thinking, and rely on data records instead!

The Metric Society is not a story of errors made, or of wrong paths taken. This is a story, superbly reduced to the chill essentials of an executive summary, of how human society is getting exactly what it’s always been asking for. The last couple of years have seen more than 100 US cities pledge to use evidence and data to improve their decision-making. In the UK, “What Works Centres”, first conceived in the 1990s, are now responsible for billions in funding. The acronyms grow more bellicose, the more obscure they become. In the UK, the Alliance for Useful Evidence (with funding from ESRC, Big Lottery and Nesta) champions the use of evidence in social policy and practice.

Mau describes the emergence of a society trapped in “data-driven perpetual stock-taking”, in which the new Juggernaut of auditability lays waste to creativity, production, and even simple efficiency. “The magic attraction of numbers and comparisons is simply irresistible,” Mau writes.

It’s understandable. Our first great system of digital abstraction, money, enabled a more efficient and less locally bound exchange of goods and services, and introduced a certain level of rational competition into the world of work.

But look where money has led us! Capital is not the point here. Neither is capitalism. The point is our relationship with information. Amazon’s algorithms are sucking all the localism out of the retail system, to the point where whole high streets have vanished — and entire communities with them. Amazon is in part powered by the fatuous metricisation of social variety through systems of scores, rankings, likes, stars and grades, which are (not coincidentally) the methods by which social media structures — from clownish Twitter to China’s Orwellian Social Credit System — turn qualitative differences into quantitative inequalities.

Mau leaves us thoroughly in the lurch. He’s a diagnostician, not a snake-oil salesman, and his bedside manner is distinctly chilly. Dazzled by data, which have relieved us of the need to dream and imagine, we fight for space on the foothills of known territory. The peaks our imaginations might have trod — as a society, and as a species — tower above us, ignored.

Hot photography

Previewing an exhibition of photographs by Richard Mosse for New Scientist, 11 February 2017

Irish photographer Richard Mosse has come up with a novel way to inspire compassion for refugees. He presents them as drones might see them – as detailed heat maps, often shorn of expression, skin tone, and even clues to age and sex. Mosse’s subjects, captured in the Middle East, North Africa and Europe, don’t look back at us: the infrared camera renders their eyes as uniform black spaces.

Mosse has made a career out of repurposing photographic kit meant for military use. The images here show his subjects as seen, mostly at night, by a super-telephoto device designed for border and battlefield surveillance. Able to zoom in from 6 kilometres away, the camera anonymises them, making them strangely faceless even while their sweat, breath and sometimes blood circulation patterns are visible.

The results are almost closer to the nightmarish paintings of Hieronymus Bosch than the work of a documentary photographer. Making sense of them requires imagination and empathy: after all, this is how a smart weapon might see us.

Mosse came across his heat-mapping camera via a friend who worked on the BBC series Planet Earth. Legally classified as an advanced weapons system, the device is unwieldy and – with no user interface or handbook – difficult to use. But, working with cinematographer Trevor Tweeten, Mosse has managed to use it to make a 52-minute video. Incoming will wrap itself around visitors to the Curve Gallery at the Barbican arts centre in London from 15 February until 23 April.