The mind unlocked

Reading The Battle for Your Brain by Nita Farahany for New Scientist, 19 April 2023

Iranian-American ethicist and lawyer Nita Farahany is no stranger to neurological intervention. She has sought relief from her chronic migraines in “triptans, anti-seizure drugs, antidepressants, brain enhancers, and brain diminishers. I’ve had neurotoxins injected into my head, my temples, my neck, and my shoulders; undergone electrical stimulation, transcranial direct current stimulation, MRIs, EEGs, fMRIs, and more.”

Few know better than Farahany what neurotechnology can do for people’s betterment, and this lends weight to her sombre and troubling account of a field whose speed of expansion alone should give us pause.

Companies like Myontec, Athos, Delsys and Noraxon already offer electromyography-generated insights to athletes and sports therapists. Control Bionics sells NeuroNode, a wearable EMG device for patients with degenerative neurological disorders, enabling them to control a computer, tablet, or motorised device. Neurable promises “the mind unlocked” with its “smart headphones for smarter focus.” And that’s before we even turn to the fast-growing interest in implantable devices; Synchron, Blackrock Neurotech and Elon Musk’s Neuralink all have prototypes in advanced stages of development.

Set aside the legitimate medical applications for a moment; Farahany is concerned that the neurotech applications that let us play video games, meditate, or sharpen our focus have opened the way to a future of brain transparency “in which scientists, doctors, governments, and companies may peer into our brains and minds at will.”

Think it can’t be done? Think again. In 2017, a research team led by UC Berkeley computer scientist Dawn Song reported an experiment in which gamers used a neural interface to control a video game. As they played, the researchers inserted subliminal images into the game and watched for unconscious recognition signals. This game of neurological Battleships netted them one player’s credit card PIN — and their home address.

Now Massachusetts-based Brainwave Science is selling a technology called iCognative, which is said to extract information from people’s brains. Suspects are shown pictures related to a crime (a murder weapon, say) and cannot help betraying whatever they happen to recognise. Emirati authorities have already successfully prosecuted two cases using this technology.

This so-called “brain fingerprinting” technique is as popular with governments (Bangladesh, India, Singapore, Australia) as it is derided by many scientists.

More worrying are the efforts of companies, in the post-Covid era, to use neurotech to control the home-working environment. So-called “bossware” programmes already take regular screenshots of employees’ work, monitor their keystrokes and web usage, and photograph them at (or not at) their desks. San Francisco bioinformatics company Emotiv now offers to help manage your employees’ attention with its MN8 earbuds. These can indeed be used to listen to music or join conference calls — and, with just two electrodes, one in each ear, Emotiv claims they can also record employees’ emotional and cognitive states in real time.

It’ll come as no surprise if neurotech becomes a requirement in modern workplaces: no earbuds, no job. This sort of thing has happened many times already.

“As soon as [factory] workers get used to the new system their pay is cut to the former level,” complained Vladimir Lenin in 1912. “The capitalist attains an enormous profit, for the workers toil four times as hard as before and wear down their nerves and muscles four times as fast as before.”

Six years later, he approved funding for a Taylorist research institute. Say what you like about industrial capitalism, its logic is ungainsayable.

Farahany has no quick fixes to offer for this latest technological assault on the mind — “the one place of solace to which we could safely and privately retreat”. Her book left me wondering what to be more afraid of: the devices themselves, or the glee with which powerful institutions seize upon them.

“The solutions are not even in the works”

Reading Pegasus by Laurent Richard and Sandrine Rigaud for New Scientist, 1 February 2023.

“Fifty thousand?”

Edward Snowden’s 2013 leaks from the US National Security Agency had triggered a global debate around state surveillance — and even he couldn’t quite believe the scale of the story as it was described to him in the summer of 2021.

Whistle-blowers had handed French investigative journalists Laurent Richard and Sandrine Rigaud a list of 50,000 phone numbers. These belonged to people flagged for attack by a cybersurveillance software package called Pegasus.

The journalistic investigation that followed is the subject of this non-fiction thriller: a must-read for anyone remotely interested in cryptography and communications, and a dreadful warning for the rest of us. “Regular civilians being targeted with military-grade surveillance weapons — against their will, against their knowledge, and with no recourse — is a dystopian future we really are careening toward,” the authors warn, “if we don’t understand this threat and move to stop it.”

Pegasus offers a fascinating insight into how journalism has evolved to tackle a hyper-connected world. Eyewitnesses and whistle-blowers have better access than ever before to sympathetic campaigning journalists all over the world. But of course, this advantage is shared with the very governments and corporations and organised crime networks that want to silence them.

To drag Pegasus into the light, Richard’s Forbidden Stories consortium choreographed the activities of more than eighty investigative journalists from seventeen media organisations, working across four continents in eight languages.

The consortium got together in March 2021 knowing full well that they would have to conclude their investigation by June, by which time Pegasus’ creators at the Israeli company NSO were bound to twig that their brainchild was under investigation.

The bigger the names on that phone list, the harder it would be to keep any investigation under wraps. Early on, the name of Jorge Carrasco cropped up: the lead partner in Forbidden Stories’ massive cross-border collaboration to finish the investigations of murdered Mexican journalist Regina Martínez. Then things just got silly: a son of Turkish president Recep Tayyip Erdoğan turned up; then the names of half the French cabinet; then the cell number of Emmanuel Macron, the president of France. Laurent Richard recalls: “Macron was the name that made me realise how truly dangerous it was to have access to this list.”

In a pulse-accelerating account that’s never afraid to dip into well-crafted technical detail, the authors explain how Pegasus gains free rein on a mobile device, without ever tipping off the owner to its presence. Needless to say it evolved out of software designed to serve baffled consumers waiting in long queues on tech support call lines. Shalev Hulio and Omri Lavie, who would go on to found NSO and create Pegasus, cut their teeth developing programmes that allowed support technicians to take charge of the caller’s phone.

It was not long before a European intelligence service came calling. Sold and maintained for more than sixty clients in more than forty countries, Pegasus gave security services an edge over terrorists, criminal gangs and paedophiles — and also, as it’s turned out, over whistleblowers, campaigners, political opponents, journalists, and at least one Emirati princess trying to get custody of her children. This book is not a diatribe against the necessary (or at any rate ubiquitous) business of government surveillance and espionage. It is about how, in the contest between ordinary people and the powerful, software is tilting the field wildly in the latter’s favour.

The international journalistic collaboration that was the Pegasus Project sparked the biggest global surveillance scandal since Snowden; it’s led to a European Parliament inquiry into government spyware, legal action from major technology companies, government sanctions against the NSO Group and countless individual legal complaints. But the authors spend little time resting on their laurels. Pegasus may be dead, but demand for a successor is only growing. In the gap left by NSO, certain governments are making offers to certain tech companies that add zeroes to the fees NSO enjoyed. Nor do the authors expect much to come of the public debate that has followed their investigation: “The issues… might have been raised,” they concede, “but the solutions are not even in the works.”

How to prevent the future

Reading Gerd Gigerenzer’s How to Stay Smart in a Smart World for the Times, 26 February 2022

Some writers are like Moses. They see further than everybody else, have a clear sense of direction, and are natural leaders besides. These geniuses write books that show us, clearly and simply, what to do if we want to make a better world.

Then there are books like this one — more likeable, and more honest — in which the author stumbles upon a bottomless hole, sees his society approaching it, and spends 250-odd pages scampering about the edge of the hole yelling at the top of his lungs — though he knows, and we know, that society is a machine without brakes, and all this shouting comes far, far too late.

Gerd Gigerenzer is a German psychologist who has spent his career studying how the human mind comprehends and assesses risk. We wouldn’t have lasted even this long as a species if we didn’t negotiate day-to-day risks with elegance and efficiency. We know, too, that evolution will have forced us to formulate the quickest, cheapest, most economical strategies for solving our problems. We call these strategies “heuristics”.

Heuristics are rules of thumb, developed by extemporising upon past experiences. They rely on our apprehension of, and constant engagement in, the world beyond our heads. We can write down these strategies; share them; even formalise them in a few lines of lightweight computer code.

Here’s an example from Gigerenzer’s own work: Is there more than one person in that speeding vehicle? Is it slowing down as ordered? Is the occupant posing any additional threat?

Abiding by the rules of engagement set by this tiny decision tree reduces civilian casualties at military checkpoints by more than sixty per cent.
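
True to the claim that a heuristic fits in a few lines of lightweight code, here is a minimal sketch of that checkpoint tree in Python. The three questions are the ones quoted above; which answers count as reassuring, and the actions attached to them, are illustrative assumptions of mine, not Gigerenzer’s published rules of engagement.

def checkpoint_response(more_than_one_occupant: bool,
                        slowing_as_ordered: bool,
                        additional_threat: bool) -> str:
    # One-reason decision making: cues are checked in a fixed order,
    # and the first decisive answer ends deliberation. There is no
    # weighing or aggregation of evidence, just ordered questions.
    if more_than_one_occupant:
        return "hold fire: likely civilians"       # assumed mapping
    if slowing_as_ordered:
        return "hold fire: driver is complying"    # assumed mapping
    if not additional_threat:
        return "warn and observe"                  # assumed mapping
    return "escalate under rules of engagement"    # assumed mapping

The point is the shape, not the particulars: a fixed order of questions, and a halt at the first decisive answer. Such a tree can be taught in minutes and audited at a glance.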

We can apply heuristics to every circumstance we are likely to encounter, regardless of the amount of data available. The complex algorithms that power machine learning, on the other hand, “work best in well-defined, stable situations where large amounts of data are available”.

What happens if we decide to hurl 200,000 years of heuristics down the toilet, and kneel instead at the altar of occult computation and incomprehensibly big data?

Nothing good, says Gigerenzer.

How to Stay Smart is a number of books in one, none of which, on its own, is entirely satisfactory.

It is a digital detox manual, telling us how our social media are currently weaponised, designed to erode our cognition (but we can fill whole shelves with such books).

It punctures many a rhetorical bubble around much-vaunted “artificial intelligence”, pointing out how easy it is to, say, get a young man of colour denied bail by proprietary risk-assessment software. (In some notorious cases the software had been trained on, and so was liable to perpetuate, historical injustices.) Or would you prefer to force an autonomous car to crash by wearing a certain kind of T-shirt? (Simple, easily generated pixel patterns cause whole classes of networks to make bizarre inferential errors about the movement of surrounding objects.) This is enlightening stuff, or it would be, were the stories not quite so old.

One very valuable section explains why forecasts derived from large data sets become less reliable, the more data they are given. In the real world, problems are unbounded; the amount of data relevant to any problem is infinite. This is why past information is a poor guide to future performance, and why the future always wins. Filling a system with even more data about what used to happen will only bake in the false assumptions that are already in your system. Gigerenzer goes on to show how vested interests hide this awkward fact behind some highly specious definitions of what a forecast is.

But the most impassioned and successful of these books-within-a-book is the one that exposes the hunger for autocratic power, the political naivety, and the commercial chicanery that lie behind the rise of “AI”. (Healthcare AI is a particular bugbear: the story of how the Dutch Cancer Society was suckered into funding big data research, at the expense of cancer prevention campaigns that were shown to work, is especially upsetting.)

Threaded through this diverse material is an argument Gigerenzer maybe should have made at the beginning: that we are entering a new patriarchal age, in which we are obliged to defer, neither to spiritual authority, nor to the glitter of wealth, but to unliving, unconscious, unconscionable systems that direct human action by aping human wisdom just well enough to convince us, but not nearly well enough to deliver happiness or social justice.

Gigerenzer does his best to educate and energise us against this future. He explains the historical accidents that led us to muddle cognition with computation in the first place. He tells us what actually goes on, computationally speaking, behind the chromed wall of machine-learning blarney. He explains why, no matter how often we swipe right, we never get a decent date; he explains how to spot fake news; and he suggests how we might claw our minds free of our mobile phones.

But it’s a hopeless effort, and the book’s most powerful passages explain exactly why it is hopeless.

“To improve the performance of AI,” Gigerenzer explains, “one needs to make the physical environment more stable and people’s behaviour more predictable.”

In China, the surveillance this entails comes wrapped in Confucian motley: under its social credit score system, sincerity, harmony and wealth creation trump free speech. In the West the self-same system, stripped of any ethic, is well advanced thanks to the efforts of the credit-scoring industry. One company, Acxiom, claims to have collected data from 700 million people worldwide, and up to 3000 data points for each individual (and quite a few are wrong).

That this bumper data harvest is an encouragement to autocratic governance hardly needs rehearsing, or so you would think.

And yet, in a 2021 study of 3,446 digital natives, 96 per cent “do not know how to check the trustworthiness of sites and posts.” I think Gigerenzer is pulling his punches here. What if, as seems more likely, 96 per cent of digital natives can’t be bothered to check the trustworthiness of sites and posts?

Asked by the author in a 2019 study how much they would be willing to spend each month on ad-free social media — that is, social media not weaponised against the user — 75 per cent of respondents said they would not pay a cent.

Have we become so trivial, selfish, short-sighted and penny-pinching that we deserve our coming subjection? Have we always been servile at heart, for all our talk of rights and freedoms; desperate for some grown-up to come tug at our leash and bring us to heel?

You may very well think so. Gigerenzer could not possibly comment. He does, though, remark that operant conditioning (the kind of learning, explored in the 1940s by behaviourist B F Skinner, that occurs through rewards and punishments) has never enjoyed such political currency, and that “Skinner’s dream of a society where the behaviour of each member is strictly controlled by reward has become reality.”

How to Stay Smart in a Smart World is an optimistic title indeed for a book that maps, with passion and precision, a hole down which we are already plummeting.

“A perfect storm of cognitive degradation”

Reading Johann Hari’s Stolen Focus: Why you can’t pay attention for the Telegraph, 2 January 2022

Drop a frog into boiling water, and it will leap from the pot. Drop it into tepid water, brought slowly to the boil, and the frog will happily let itself be cooked to death.

Just because this story is nonsense, doesn’t mean it’s not true — true of people, I mean, and their tendency to acquiesce to poorer conditions, just so long as these conditions are introduced slowly enough. (Remind yourself of this next time you check out your own groceries at the supermarket.)

Stolen Focus is about how our environment is set up to fracture our attention. It starts with our inability to set the notifications correctly on our mobile phones, and ends with climate change. Johann Hari thinks a huge number of pressing problems are fundamentally related, and that the human mind is on the receiving end of what amounts to a denial-of-service attack. One of Hari’s many interviewees is Earl Miller from MIT, who talks about “a perfect storm of cognitive degradation, as a result of distraction”; to which Hari adds the following, devastating gloss: “We are becoming less rational, less intelligent, less focused.”

To make such a large argument stick, though, Hari must ape the wicked problem he’s addressing: he must bring the reader to a slow boil.

Stolen Focus begins with an extended grumble about how we don’t read as many books as we used to, or buy as many newspapers, and how we are becoming increasingly enslaved to our digital devices. Why we should listen to Hari in particular, admittedly a latecomer to the “smartphones bad, books good” campaign, is not immediately apparent. His account of his own months-long digital detox — idly beachcombing the shores of Provincetown at the northern tip of Cape Cod, War and Peace tucked snugly into his satchel — is positively maddening.

What keeps the reader engaged are the hints (very well justified, it turns out) that Hari is deliberately winding us up.

He knows perfectly well that most of us have more or less lost the right to silence and privacy — that there will be no Cape Cod for you and me, in our financial precarity.

He also knows, from bitter experience, that digital detoxes don’t work. He presents himself as hardly less of a workaholic news-freak than he was before taking off to Massachusetts.

The first half of Stolen Focus got me to sort out my phone’s notification centre, and that’s not nothing; but it is, in the greater scheme of Hari’s project, hardly more than a parody of the by now very familiar “digital diet book” — the sort of book that, as Hari eventually points out, can no more address the problems filling this book than a diet book can address epidemic obesity.

Many of the things we need to do to recover our attention and focus “are so obvious they are banal,” Hari writes: “slow down, do one thing at a time, sleep more… Why can’t we do the obvious things that would improve our attention? What forces are stopping us?”

So, having had his fun with us, Hari begins to sketch in the high sides of the pot in which he finds us being coddled.

The whole of the digital economy is powered by breaks in our attention. The finest minds in the digital business are being paid to create ever more addictive experiences. According to former Google engineer Tristan Harris, “we shape more than eleven billion interruptions to people’s lives every day.” Aza Raskin, co-founder of the Center for Humane Technology, calls the big tech companies “the biggest perpetrators of non-mindfulness in the world.”

Social media is particularly insidious, promoting outrage among its users because outrage is wildly more addictive than real news. Social media also promotes loneliness. Why? Because lonely people will self-medicate with still more social media. (That’s why Facebook never tells you which of your friends are nearby and up for a coffee: Facebook can’t make money from that.)

We respond to the anger and fear a digital diet instils with hypervigilance, which wrecks our attention even further and damages our memory to boot. If we have children, we’ll keep them trapped at home “for their own safety”, though our outdoor spaces are safer than they have ever been. And when that carceral upbringing shatters our children’s attention (as it surely will), we stuff them with drugs, treating what is essentially an environmental problem. And on and on.

And on. The problem is not that Stolen Focus is unfocused, but that it is relentless: an unfeasibly well-supported undergraduate rant that swells — as the hands of the clock above the bar turn round and the beers slide down — to encompass virtually every ill on the planet, from rubbish parenting to climate change.

“If the ozone layer was threatened today,” writes Hari, “the scientists warning about it would find themselves being shouted down by bigoted viral stories claiming the threat was all invented by the billionaire George Soros, or that there’s no such thing as the ozone layer anyway, or that the holes were really being made by Jewish space lasers.”

The public campaign Hari wants Stolen Focus to kick-start (there’s an appendix; there’s a weblink; there’s a newsletter) involves, among other things, a citizen’s wage, outdoor play, limits on light pollution, public ownership of social media, changes in the food supply, and a four-day week. I find it hard to disagree with any of it, but at the same time I can’t rid myself of the image of how, spiritually refreshed by War and Peace, consumed in just a few sittings in a Provincetown coffee shop, Hari must (to quote Stephen Leacock) have “flung himself from the room, flung himself upon his horse and rode madly off in all directions”.

If you read just one book about how the modern world is driving us crazy, read this one. But why would you read just one?

“Grotesque, awkward, and disagreeable”

Reading Stanislaw Lem’s Dialogues for the Times, 5 October 2021

Some writers follow you through life. Some writers follow you beyond the grave. I was seven when Andrei Tarkovsky filmed Lem’s satirical sci-fi novel Solaris, thirty-seven when Steven Soderbergh’s very different (and hugely underrated) Solaris came out, forty when Lem died. Since then, a whole other Stanislaw Lem has arisen, reflected in philosophical work that, while widely available elsewhere, had to wait half a century or more for an English translation. In life I have nursed many regrets: that I didn’t learn Polish is not the least of them.

The point about Lem is that he writes about the future, predicting the way humanity’s inveterate tinkering will enable, pervert and frustrate its ordinary wants and desires. This isn’t “the future of technology” or “the future of the western world” or “the future of the environment”. It’s neither “the future as the author would like it to be”, nor “the future if the present moment outstayed its welcome”. Lem knows a frightening amount of science, and even more about technology, but what really matters is what he knows about people. His writing is not just surprisingly prescient; it’s timeless.

Dialogues is about cybernetics, the science of systems. A system is any material arrangement that responds to environmental feedback. A steam engine is a mere mechanism, until you add the governor that controls its internal pressure. Then it becomes a system. When Lem was writing, systems thinking was meant to transform everything, reconciling the physical sciences with the humanities to usher in a technocratic Utopia.

Enthusiastic as 1957-vintage Lem was, there is something deliciously levelling about how he introduces the cybernetic idea. We can bloviate all we like about using data and algorithms to create a better society; what drives Philonous and Hylas’s interest in these eight dialogues (modelled on Berkeley’s Three Dialogues of 1713) is Hylas’s desperate desire to elude Death. This new-fangled science of systems reimagines the world as information, and the thing about information is that it can be transmitted, stored and (best of all) copied. Why then can’t it transmit, store and copy poor Death-haunted Hylas?

Well, of course, that’s certainly do-able, Philonous agrees — though Hylas might find cybernetic immortality “grotesque, awkward, and disagreeable”. Sure enough, Hylas baulks at Philonous’s culminating vision of humanity immortalised in serried ranks of humming metal cabinets.

This image certainly was prescient: cybernetics was supposed to be a philosophy, one that would profoundly change our understanding of the animate and inanimate world. The philosophy failed to catch on, but its insights created something utterly unexpected: the computer.

Dialogues is important now because it describes (or described, rather, more than half a century ago — you can almost hear Lem’s slow hand-clapping from the Beyond) all the ways we do not comprehend the world we have made.

Cybernetics teaches us that systems are animate. It doesn’t matter what a system is made from. Workers in an office, ones and zeroes clouding a chip, proteins folding and refolding in a living cell, string and pulleys in a playground: all are good building materials for systems. And once a system is up and running, it is no longer reducible to its parts. It’s a distinct, unified whole, shaped by its past history, actively coexisting with its environment, and exhibiting behaviour that cannot be precisely predicted from its structure. “If you insist on calling this new system a mechanism,” Lem remarks, drily, “then you must apply that term to living beings as well.”

We’ve yet to grasp this nettle: that between the living and non-living worlds sits a world of systems, unalive yet animate. No wonder, lacking this insight, we spend half our lives sneering at the mechanisms we do understand (“Alexa, stop calling my Mum!”) and the other half on our knees, worshipping the mechanisms we don’t. (“It says here on Facebook…”) The very words we use — “artificial intelligence” indeed! — reveal the paucity of our understanding.

Lem understood, as no-one then or since has understood, how undeserving of worship are the systems (be they military, industrial or social) that are already strong enough to determine our fate. A couple of years ago, around the time Hong Kong protesters were destroying facial recognition towers, a London pedestrian was fined £90 for hiding his face from an experimental Met camera. The consumer credit reporting company Experian uses machine learning to decide the financial trustworthiness of over a billion people. China’s Social Credit System (actually the least digitised of China’s surveillance systems) operates under multiple, often contradictory legal codes.

The point about Lem is not that he was terrifyingly smart (though he was that); it’s that he had skin in the game. He was largely self-taught, because he had to quit university after writing satirical pieces about Soviet poster-boy Trofim Lysenko (who denied the existence of genes). Before that, he was dodging Nazis in Lviv (and mending their staff cars so that they would break down). In his essay “Applied Cybernetics: An Example from Sociology”, Lem uses the new-fangled science of systems to anatomise the Soviet thinking of his day, and from there, to explain how totalitarianism is conceived, spread and performed. Worth the price of the book in itself, this little essay is a tour de force of human sympathy and forensic fury, shorter than Solzhenitsyn, and much, much funnier than Hannah Arendt.

Peter Butko’s translations of the Dialogues, and the revisionist essays Lem added to the 1971 second edition, are as witty and playful as Lem’s allusive Polish prose demands. His endnotes are practically a book in themselves (and an entertaining one, too).

Translated so well, Lem needs no explanation, no contextualisation, no excuse-making. Lem’s expertise lay in technology, but his loyalty lay with people, in all their maddening tolerance for bad systems. “There is nothing easier than to create a state in which everyone claims to be completely satisfied,” he wrote; “being stretched on the bed, people would still insist — with sincerity — that their life is perfectly fine, and if there was any discomfort, the fault lay in their own bodies or in their nearest neighbor.”

 

Nuanced and terrifying at the same time

Reading The Drone Age by Michael J. Boyle for New Scientist, 30 September 2020

Machines are only as good as the people who use them. Machines are neutral — just a faster, more efficient way of doing something that we always intended to do. That, anyway, is the argument wielded often by defenders of technology.

Michael Boyle, a professor of political science at La Salle University in Philadelphia, isn’t buying: “the technology itself structures choices and induces changes in decision-making over time,” he explains, as he concludes his concise, comprehensive overview of the world the drone made. In everything from commerce to warfare, spycraft to disaster relief, our menu of choices “has been altered or constrained by drone technology itself”.

Boyle manages to be nuanced and terrifying at the same time. At one moment he’s pointing out the formidable practical obstacles in the way of anyone launching a major terrorist drone attack. In the next, he’s explaining why political assassinations by drone are just around the corner. Turn a page setting out the moral, operational and legal constraints keenly felt by upstanding US military drone pilots, and you’re confronted by their shadowy handlers in government, who operate with virtually no oversight.

Though grounded in just the right level of technical detail, The Drone Age describes, not so much the machines themselves, but the kind of thinking they’ve ushered in: an approach to problems that no longer distinguishes between peace and war.

In some ways this is a good thing. Assuming that war is inevitable, what’s not to welcome about a style of warfare that involves working through a kill list, rather than exterminating a significant proportion of the enemy’s population?
Well, two things. For US readers, there’s the way a few careful drone strikes proliferated, under Obama and especially under Trump, into a global counter-insurgency air platform. And for all of us, there’s the way peacetime living is affected, too. “It is hard to feel like a human… when reduced to a pixelated dot under the gaze of a drone,” Boyle writes. If the pool of information gathered about us expands, but not the level of understanding or sympathy for us, where then is the positive for human society?

Boyle brings proper philosophical thinking to our relationship with technology. He’s particularly indebted to the French philosopher Jacques Ellul, whose The Technological Society (1964) transformed the way we think about machines. Ellul argued that when we apply technology to a problem, we adopt a mode of thinking that emphasises efficiency and instrumental rationality, but also dehumanises the problem.
Applying this lesson to drone technology, Boyle writes: “Instead of asking why we are using aircraft for a task in the first place, we tend to debate instead whether the drone is better than the manned alternative.”

This blinkered thinking, on the part of their operators, explains why drone activities almost invariably alienate the very people they are meant to benefit: non-combatants, people caught up in natural disasters, the relatively affluent denizens of major cities. Indeed, the drone’s ability to intimidate seems on balance to outweigh every other capability.

The UN has been known to fly unarmed Falco surveillance drones low to the ground to deter rebel groups from gathering. If you adopt the kind of thinking Ellul described, then this must be a good thing — a means of scattering hostiles, achieved efficiently and safely. In reality, there’s no earthly reason to suppose violence has been avoided: only redistributed (and let’s not forget how Al Qaeda, decimated by constant drone strikes, has reinvented itself as a global internet brand).

Boyle warns us at the start that different models of drone vary so substantially “that they hardly look like the same technology”. And yet The Drone Age keeps this heterogeneous flock of disruptive technologies together long enough to give it real historical and intellectual coherence. If you read one book about drones, this is the one. But it is just as valuable about surveillance, or the rise of information warfare, or the way the best intentions can turn the world we knew on its head.

Over-performing human

Talking to choreographer Alexander Whitley for the Financial Times, 28 February 2020

On a dim and empty stage, six masked black-clad dancers, half-visible, their limbs edged in light, run through attitude after attitude, emotion after emotion. Above the dancers, a long tube of white light slowly rises, falls, tips and circles, drawing the dancers’ limbs and faces towards itself like a magnet. Under its variable cold light, movements become more expressive, more laden with emotion, more violent.

Alexander Whitley, formerly of the Royal Ballet School and the Birmingham Royal Ballet, is six years into a project to expand the staging of dance with new media. He has collaborated with filmmakers, designers, digital artists and composers. Most of all, he has played games with light.

The experiments began with The Measures Taken, in 2014. Whitley used motion-tracking technology to project visuals that interacted with the performers’ movements. Then, dissatisfied with the way the projections obscured the dancers, in 2018 he used haze and narrowly focused bars of light to create, for Strange Stranger, a virtual “maze” in which his dancers found themselves alternately liberated and constrained.

At 70 minutes, Overflow, commissioned by Sadler’s Wells Theatre, represents a massive leap in ambition. With several long-time collaborators — in particular the Dutch artist-designers Children of the Light — Whitley has worked out how to reveal, to an audience seated just a few feet away, exactly what he wants them to see.

Whitley is busy nursing Overflow up to speed in time for its spring tour. The company begin with a night at the Lowry in Salford on 18 March, before performing at Sadler’s Wells on 17 and 18 April.

Overflow, nearly two years in the making, has consumed money as well as time. The company is performing at Stereolux in Nantes in April and will need more overseas bookings if it is to flourish. “There’s serious doubt about the status of the UK and UK touring companies now,” says Whitley (snapping at my cheaply dangled Brexit bait); “I hope there’s enough common will to build relationships in spite of the political situation.”

It is easy to talk politics with Whitley (he is very well read), but his dances are anything but mere vehicles for ideas. And while Overflow is a political piece by any measure — a survey of our spiritual condition under surveillance capitalism, for heaven’s sake — its effects are strikingly classical. It’s not just the tricksy lighting that has me thinking of the figures on ancient Greek vases. It’s the dancers themselves and their clean, elegant, tragedian’s gestures.

A dancer kneels and takes hold of his head. He tilts it up into the light as it turns and tilts, inches from his face, and — in a shocking piece of trompe l’oeil — can he really be pulling his face apart?

Overflow is about our relationship to the machines that increasingly govern our lives. But there’s not a hint of regimentation here, or mechanisation. These dancers are not trying to perform machine. They’re trying to perform human.

Whitley laughs at this observation. “I guess, as far as that goes, they’re over-performing human. They’re caught up in the excitement and hyper-stimulation of their activity. Which is exactly how we interact with social media. We’re being hyperstimulated into excessive activity. Keep scrolling, keep consuming, keep engaging!”

It was an earlier piece, 2016’s Pattern Recognition, that set Whitley on the road to Overflow. “I’d decided to have the lights moving around the stage, to give us the sense of depth we’d struggled to achieve in The Measures Taken. But very few people I talked to afterwards realised or understood that our mobile stage lights were being driven by real-time tracking. They thought you could achieve what we’d achieved just through choreography. At which point a really obvious insight arrived: that interactivity is interesting, first and foremost, for the actor involved in the interaction.”

In Overflow, that the audience feels left out is no longer a technical problem: it’s the whole point of the piece. “We’re all watching things we shouldn’t be watching, somehow, through social media and the internet,” says Whitley. “That the world has become so revealed is unpleasant. It’s over-exposed us to elements of human nature that should perhaps remain private. But we’re all bound up in it. Even if we’re not doing it, we’re watching it.”

The movements of the ensemble in Overflow are the equivalent of emoji: “I was interested in how we could think of human emotions just as bits of data,” Whitley explains. In the 1980s a psychologist called Robert Plutchik stated that there were eight basic emotions: joy, trust, fear, surprise, sadness, anticipation, anger, and disgust. “We stuck pins at random into this wheel chart he invented, choosing an emotion at random, and from that creating an action that somehow embodied or represented it. And the incentive was to do so as quickly and concisely as possible, and as soon as it’s done, choose another one. So the dancers are literally jumping at random between all these different human emotions. It’s not real communication, just an outpouring of emotional information.”

The solos are built using material drawn from each dancer’s movement diary. “The dancers made diary entries, which I then filmed, based on how they were feeling each day. They’re movement diaries: personal documents of their emotional lives, which I then chopped up and jumbled around and gave back to them as a video to learn.”

In Whitley’s vision, the digital realm isn’t George Orwell’s Big Brother, dictating our every move from above. It’s more like the fox and the cat in the Pinocchio story, egging a naive child into the worst behaviours, all in the name of independence and free expression. “Social media encourage us to act more, to feel more, to express more, because the more we do that, the more capital they can generate from our data, and the more they can understand and predict what we’re likely to do next.”

This is where the politics comes in: the way “emotion, which incidentally is the real currency of dance, is now the major currency of the digital economy”.

It’s been a job of work, packing such cerebral content into an emotional form like dance. But Whitley says it’s what keeps him working: “that sheer impossibility of pinning down ideas that otherwise exist almost entirely in words. As soon as you scratch the surface, you realise there’s a huge amount of communication always at work through the body, and drawing ideas from a more cerebral world into the physical, into the emotional, is a constant fascination. There are lifetimes of enquiry here. It’s what keeps me coming back.”

“Intelligence is the wrong metaphor for what we’ve built”

Travelling From “Apple” to “Anomaly”, Trevor Paglen’s installation at the Barbican’s Curve gallery in London, for New Scientist, 9 October 2019

A COUPLE of days before the opening of Trevor Paglen’s latest photographic installation, From “Apple” to “Anomaly”, a related project by the artist found itself splashed all over the papers.

ImageNet Roulette is an online collaboration with artificial intelligence researcher Kate Crawford at New York University. The website invites you to provide an image of your face. An algorithm will then compare your face against a database called ImageNet and assign you to one or two of its 21,000 categories.

ImageNet has become one of the most influential visual data sets in the fields of deep learning and AI. Its creators at Stanford, Princeton and other US universities harvested more than 14 million photographs from photo upload sites and other internet sources, then had them manually categorised by some 25,000 workers on Amazon’s crowdsourcing labour site Mechanical Turk. ImageNet is widely used as a training data set for image-based AI systems and is the secret sauce within many key applications, from phone filters to medical imaging, biometrics and autonomous cars.
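
The basic mechanism is easy to reproduce. What follows is a minimal sketch of ImageNet-style labelling in Python, not ImageNet Roulette itself: it assumes torchvision’s pretrained ResNet-50, which was trained on the 1,000-class ImageNet subset rather than the full 21,000-category tree, and a hypothetical image file, face.jpg.

import torch
from PIL import Image
from torchvision import models

# Load a network pretrained on ImageNet-1k, plus its matching preprocessing.
weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()

# "face.jpg" is a stand-in for whatever photograph you want labelled.
batch = preprocess(Image.open("face.jpg").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]

# Report the three categories the network considers most likely.
top = probs.topk(3)
for p, idx in zip(top.values, top.indices):
    label = weights.meta["categories"][int(idx)]
    print(f"{label}: {p.item():.1%}")

Everything interesting about the installation happens downstream of a loop like this: the labels come back with the confident air of measurements, whatever is being labelled.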

According to ImageNet Roulette, I look like a “political scientist” and a “historian”. Both descriptions are sort-of-accurate and highly flattering. I was impressed. Mind you, I’m a white man. We are all over the internet, and the neural net had plenty of “my sort” to go on.

Spare a thought for Guardian journalist Julia Carrie Wong, however. According to ImageNet Roulette she was a “gook” and a “slant-eye”. In its attempt to identify Wong’s “sort”, ImageNet Roulette had innocently turned up some racist labels.

From “Apple” to “Anomaly” also takes ImageNet to task. Paglen took a selection of 35,000 photos from ImageNet’s archive, printed them out and stuck them to the wall of the Curve gallery at the Barbican in London in a 50-metre-long collage.

The entry point is images labelled “apple” – a category that, unsurprisingly, yields mostly pictures of apples – but the piece then works through increasingly abstract and controversial categories such as “sister” and “racist”. (Among the “racists” are Roger Moore and Barack Obama; my guess is that being over-represented in a data set carries its own set of risks.) Paglen explains: “We can all look at an apple and call it by its name. An apple is an apple. But what about a noun like ‘sister’, which is a relational concept? What might seem like a simple idea – categorising objects or naming pictures – quickly becomes a process of judgement.”

The final category in the show is “anomaly”. There is, of course, no such thing as an anomaly in nature. Anomalies are simply things that don’t conform to the classification systems we set up.

Halfway along the vast, gallery-spanning collage of photographs, the slew of predominantly natural and environmental images peters out, replaced by human faces. Discreet labels here and there indicate which of ImageNet’s categories are being illustrated. At one point of transition, the group labelled “bottom feeder” consists entirely of headshots of media figures – there isn’t one aquatic creature in evidence.

Scanning From “Apple” to “Anomaly” gives gallery-goers many such unexpected, disconcerting insights into the way language parcels up the world. Sometimes, these threaten to undermine the piece itself. Passing seamlessly from “android” to “minibar”, one might suppose that we are passing from category to category according to the logic of a visual algorithm. After all, a metal man and a minibar are not so dissimilar. At other times – crossing from “coffee” to “poultry”, for example – the division between categories is sharp, leaving me unsure how we moved from one to another, and whose decision it was. Was some algorithm making an obscure connection between hens and beans?

Well, no: the categories were chosen and arranged by Paglen. Only the choice of images within each category was made by a trained neural network.

This set me wondering whether the ImageNet data set wasn’t simply being used as a foil for Paglen’s sense of mischief. Why else would a cheerleader dominate the “saboteur” category? And do all “divorce lawyers” really wear red ties?

This is a problem for art built around artificial intelligence: it can be hard to tell where the algorithm ends and the artist begins. Mind you, you could say the same about the entire AI field. “A lot of the ideology around AI, and what people imagine it can do, has to do with that simple word ‘intelligence’,” says Paglen, a US artist now based in Berlin, whose interest in computer vision and surveillance culture sprang from his academic career as a geographer. “Intelligence is the wrong metaphor for what we’ve built, but it’s one we’ve inherited from the 1960s.”

Paglen fears the way the word intelligence implies some kind of superhuman agency and infallibility to what are in essence giant statistical engines. “This is terribly dangerous,” he says, “and also very convenient for people trying to raise money to build all sorts of shoddy, ill-advised applications with it.”

Asked what concerns him more, intelligent machines or the people who use them, Paglen answers: “I worry about the people who make money from them. Artificial intelligence is not about making computers smart. It’s about extracting value from data, from images, from patterns of life. The point is not seeing. The point is to make money or to amplify power.”

It is a point by no means lost on a creator of ImageNet itself, Fei-Fei Li at Stanford University in California, who, when I spoke to Paglen, was in London to celebrate ImageNet’s 10th birthday at the Photographers’ Gallery. Far from being the face of predatory surveillance capitalism, Li leads efforts to correct the malevolent biases lurking in her creation. Wong, incidentally, won’t get that racist slur again, following ImageNet’s announcement that it was removing more than half of the 1.2 million pictures of people in its collection.

Paglen is sympathetic to the challenge Li faces. “We’re not normally aware of the very narrow parameters that are built into computer vision and artificial intelligence systems,” he says. His job as artist-cum-investigative reporter is, he says, to help reveal the failures and biases and forms of politics built into such systems.

Some might feel that such work feeds an easy and unexamined public paranoia. Peter Skomoroch, former principal data scientist at LinkedIn, thinks so. He calls ImageNet Roulette junk science, and wrote on Twitter: “Intentionally building a broken demo that gives bad results for shock value reminds me of Edison’s war of the currents.”

Paglen believes, on the contrary, that we have a long way to go before we are paranoid enough about the world we are creating.

Fifty years ago it was very difficult for marketing companies to get information about what kind of television shows you watched, what kinds of drinking habits you might have or how you drove your car. Now giant companies are trying to extract value from that information. “I think,” says Paglen, “that we’re going through something akin to England and Wales’s Inclosure Acts, when what had been de facto public spaces were fenced off by the state and by capital.”

Asking for it

Reading The Metric Society: On the Quantification of the Social by Steffen Mau (Polity Press) for the Times Literary Supplement, 30 April 2019 

Imagine Steffen Mau, a macrosociologist (he plays with numbers) at Humboldt University of Berlin, writing a book about information technology’s invasion of the social space. The very tools he uses are constantly interrupting him. His bibliographic software wants him to assign a star rating to every PDF he downloads. A paper-sharing site exhorts him repeatedly to improve his citation score (rather than his knowledge). In a manner that would be funny, were his underlying point not so serious, Mau records how his tools keep getting in the way of his job.

Why does Mau use these tools at all? Is he too good for a typewriter? Of course he is: the whole history of civilisation is the story of us getting as much information as possible out of our heads and onto other media. It’s why, nigh-on 5000 years ago, the Sumerians dreamt up the abacus. Thinking is expensive. How much easier to stop thinking, and rely on data records instead!

The Metric Society is not a story of errors made, or of wrong paths taken. This is a story, superbly reduced to the chill essentials of an executive summary, of how human society is getting exactly what it’s always been asking for. The last couple of years have seen more than 100 US cities pledge to use evidence and data to improve their decision-making. In the UK, “What Works Centres”, first conceived in the 1990s, are now responsible for billions in funding. The acronyms grow more bellicose, the more obscure they become. The Alliance for Useful Evidence (with funding from ESRC, Big Lottery and Nesta) champions the use of evidence in social policy and practice.

Mau describes the emergence of a society trapped in “data-driven perpetual stock-taking”, in which the new Juggernaut of auditability lays waste to creativity, production, and even simple efficiency. “The magic attraction of numbers and comparisons is simply irresistible,” Mau writes.

It’s understandable. Our first great system of digital abstraction, money, enabled a more efficient and less locally bound exchange of goods and services, and introduced a certain level of rational competition into the world of work.

But look where money has led us! Capital is not the point here. Neither is capitalism. The point is our relationship with information. Amazon’s algorithms are sucking all the localism out of the retail system, to the point where whole high streets have vanished — and entire communities with them. Amazon is in part powered by the fatuous metricisation of social variety through systems of scores, rankings, likes, stars and grades, which are (not coincidentally) the methods by which social media structures — from clownish Twitter to China’s Orwellian Social Credit System — turn qualitative differences into quantitative inequalities.

Mau leaves us thoroughly in the lurch. He’s a diagnostician, not a snake-oil salesman, and his bedside manner is distinctly chilly. Dazzled by data, which have relieved us of the need to dream and imagine, we fight for space on the foothills of known territory. The peaks our imaginations might have trod — as a society, and as a species — tower above us, ignored.

Hot photography

Previewing an exhibition of photographs by Richard Mosse for New Scientist, 11 February 2017

Irish photographer Richard Mosse has come up with a novel way to inspire compassion for refugees. He presents them as drones might see them – as detailed heat maps, often shorn of expression, skin tone, and even clues to age and sex. Mosse’s subjects, captured in the Middle East, North Africa and Europe, don’t look back at us: the infrared camera renders their eyes as uniform black spaces.

Mosse has made a career out of repurposing photographic kit meant for military use. The images here show his subjects as seen, mostly at night, by a super-telephoto device designed for border and battlefield surveillance. Able to zoom in from 6 kilometres away, the camera anonymises them, making them strangely faceless even while their sweat, breath and sometimes blood circulation patterns are visible.

The results are closer to the nightmarish paintings of Hieronymus Bosch than to the work of a documentary photographer. Making sense of them requires imagination and empathy: after all, this is how a smart weapon might see us.

Mosse came across his heat-mapping camera via a friend who worked on the BBC series Planet Earth. Legally classified as an advanced weapons system, the device is unwieldy and – with no user interface or handbook – difficult to use. But, working with cinematographer Trevor Tweeten, Mosse has managed to use it to make Incoming, a 52-minute video that will wrap itself around visitors to the Curve Gallery at the Barbican arts centre in London from 15 February until 23 April.