“Fears about technology are fears about capitalism”

Reading How AI Will Change Your Life by Patrick Dixon and AI Snake Oil by Arvind Narayanan and Sayash Kapoor, for the Telegraph

According to Patrick Dixon, Arvind Narayanan and Sayash Kapoor, artificial intelligence will not bring about the end of the world. It isn’t even going to bring about the end of human civilisation. It’ll struggle even to take over our jobs. (If anything, signs point to a decrease in unemployment.)

Am I alone in feeling cheated here? In 2014, Stephen Hawking said we were doomed. A decade later, Elon Musk is saying much the same. Last year, Musk and other CEOs and scientists signed an open letter from the Future of Life Institute, demanding a pause on giant AI experiments.

But why listen to fiery warnings from the tech industry? Of 5,400 large IT projects (for instance, creating a large data warehouse for a bank) recorded by 2012 in a rolling database maintained by McKinsey, nearly half went over budget, and over half under-delivered. In How AI Will Change Your Life, author and business consultant Dixon remarks, “Such consistent failures on such a massive scale would never be tolerated in any other area of business.” Narayanan and Kapoor, both computer scientists, say that academics in this field are no better. “We probably shouldn’t care too much about what AI experts think about artificial general intelligence,” they write. “AI researchers have often spectacularly underestimated the difficulty of achieving AI milestones.”

These two very different books want you to see AI from inside the business. Dixon gives us plenty to think about: AI’s role in surveillance; AI’s role in intellectual freedom and copyright; AI’s role in warfare; AI’s role in human obsolescence – his exhaustive list runs to over two dozen chapters. Each of these debates matters, but we would be wrong to think that they are driven by, or are even about, technology at all. Again and again, they are issues of money: about how production gravitates towards automation to save labour costs; or about how AI tools are more often than not used to achieve imaginary efficiencies at the expense of the poor and the vulnerable. Why go to the trouble of policing poor neighbourhoods if the AI can simply round up the usual suspects? As the science-fiction writer Ted Chiang summed up in June 2023, “Fears about technology are fears about capitalism.”

As both books explain, there are three main flavours of artificial intelligence. Large language models power chatbots, of which GPT-4, Gemini and the like will be most familiar to readers. They are bullshitters, in the sense that they’re trained to produce plausible text, not accurate information, and so fall under philosopher Harry Frankfurt’s definition of bullshit as speech that is intended to persuade without regard for the truth. At the moment they work quite well, but wait a year or two: as the internet fills with AI-generated content, chatbots and their ilk will begin to regurgitate their own pabulum, and the human-facing internet will decouple from truth entirely.

Second, there are AI systems whose superior pattern-matching spots otherwise invisible correlations in large datasets. This ability is handy, going on miraculous, if you’re tackling significant, human problems. According to Dixon, for example, Klick Labs in Canada has developed a test that can diagnose Type 2 diabetes with over 85 per cent accuracy using just a few seconds of the patient’s voice. Such systems have proved less helpful, however, in Chicago. Narayanan and Kapoor report how, lured by promises of instant alerts to gun violence, the city poured nearly 49 million dollars into ShotSpotter, a system that has been questioned for its effectiveness after police fatally shot a 13-year-old boy in 2021.

Last of the three types is predictive AI: the least discussed, least successful, and – in the hands of the authors of AI Snake Oil – by some way the most interesting. So far, we’ve encountered problems with AI’s proper working that are fixable, at least in principle. With bigger, better datasets – this is the promise – we can train AI to do better. Predictive AI systems are different. These are the ones that promise to find you the best new hires, flag students for dismissal before they start to flounder, and identify criminals before they commit criminal acts.

They won’t, however, because they can’t. Drawing broad conclusions about general populations is often the stuff of social science, and social science datasets tend to be small. But were you to have a big dataset about a group of people, would AI’s ability to say things about the group let it predict the behaviour of one of its individuals? The short answer is no. Individuals are chaotic in the same way as earthquakes are. It doesn’t matter how much you know about earthquakes; the one thing you’ll never know is where and when the next one will hit.

How AI Will Change Your Life is not so much a book as a digest of bullet points for a PowerPoint presentation. Business types will enjoy Dixon’s meticulous lists and his willingness to argue both sides against the middle. If you need to acquire instant AI mastery in time for your next board meeting, Dixon’s your man. Being a dilettante, I will stick with Narayanan and Kapoor, if only for this one-liner, which neatly captures our confused enthusiasm for little black boxes that promise the world. “It is,” they say, “as if everyone in the world has been given the equivalent of a free buzzsaw.”

 

 

Not even wrong

Reading Yuval Noah Harari’s Nexus for the Telegraph

In his memoirs, the German-British physicist Rudolf Peierls recalls the sighing response his colleague Wolfgang Pauli once gave to a scientific paper: “It is not even wrong.”

Some ideas are so incomplete, or so vague, that they can’t even be judged. Yuval Noah Harari’s books are notoriously full of such ideas. But then, given what Harari is trying to do, this may not matter very much.

Take this latest offering: a “brief history” that still finds room for viruses and Neanderthals, the Talmud and Elon Musk’s Neuralink, and the Thirty Years’ War. Has Harari found a single rubric under which to combine all human wisdom and not a little of its folly? Many a pub bore has entertained the same conceit. And Harari is tireless: “To appreciate the political ramifications of the mind–body problem,” he writes, “let’s briefly revisit the history of Christianity.” Harari is a writer who’s never off-topic, but only because his topic is everything.

Root your criticism of Harari in this, and you’ve missed the point, which is that he’s writing this way on purpose. His single goal is to give you a taste of the links between things, without worrying too much about the things themselves. Any reader old enough to remember James Burke’s idiosyncratic BBC series Connections will recognise the formula, and know how much sheer joy and exhilaration it can bring to an audience that isn’t otherwise spending every waking hour grazing the “smart thinking” shelf at Waterstone’s.

Well-read people don’t need Harari.

Nexus’s argument goes like this: civilisations are (among other things) information networks. Totalitarian states centralise their information, which grows stale as a consequence. Democracies distribute their information, with checks and balances to keep the information fresh.

Harari’s key point here is that in neither case does the information have to be true. A great deal of it is not true. At best it’s intersubjectively true (Santa Claus, human rights and money are real by consensus: they have no basis in the material world). Quite a lot of our information is fiction, and a fraction of that fiction is downright malicious falsehood.

It doesn’t matter to the network, which uses that information more or less agnostically, to establish order. Nor is this necessarily a problem, since an order based on truth is likely to be a lot more resilient and pleasant to live under than an order based on cultish blather.

This typology gives Harari the chance to wax lyrical over various social and cultural arrangements, historical and contemporary. Marxism and populism both get short shrift, in passages that are memorable, pithy, and, dare I say it, wise.

In the second half of the book, Harari invites us to stare like rabbits into the lights of the on-coming AI juggernaut. Artificial intelligence changes everything, Harari says, because just as humans create intersubjective realities, computers create inter-computer realities. Pokémon Go is an example of an inter-computer reality. So — rather more concerningly — are the money markets.

Humans disagree with each other all the time, and we’ve had millennia to practice thinking our way into other heads. The problem is that computers don’t have any heads. Their intelligence is quite unlike our own. We don’t know what They’re thinking because, by any reasonable measure, “thinking” does not describe what They are doing.

Even this might not be a problem, if only They would stop pretending to be human. Harari cites a 2022 study showing that the 5 per cent of Twitter users that are bots are generating between 20 and 30 per cent of the site’s content.

Harari quotes Daniel Dennett’s blindingly obvious point that, in a society where information is the new currency, we should ban fake humans the way we once banned fake coins.

And that is that, aside from the shouting — and there’s a fair bit of that in the last pages, futurology being a sinecure for people who are not even wrong.

Harari’s iconoclastic intellectual reputation is wholly undeserved, not because he does a bad job, but because he does such a superb job of being the opposite of an iconoclast. Harari sticks the world together in a gleaming shape that inspires and excites. If it holds only for as long as it takes to read the book, still, dazzled readers should feel themselves well served.

The most indirect critique of technology ever made?

Watching Bertrand Bonello’s The Beast for New Scientist

“Something or other lay in wait for him,” wrote Henry James in a story from 1903, “amid the twists and turns of the months and the years, like a crouching beast in the jungle.”

The beast in this tale was (just to spoil it for you) fear itself, for it was fear that stopped our hero from living any kind of worthwhile life.

Swap around the genders of the couple at the heart of James’s bitter tale, allow them to reincarnate and meet as if for the first time on three separate occasions — in Paris in 1910, in LA in 2014 and in Chengdu in 2044 — and you’ve got a rough idea of the mechanics of Bertrand Bonello’s magnificent and maddening new science fiction film. Through a series of close-ups, longueurs and red herrings, The Beast, while getting nowhere very fast, manages to be an utterly riveting, often terrifying film about love, the obstacles to love, and our deep-seated fear of love even when it’s there for the taking. It’s also (did I mention this?) an epic account of how everyone’s ordinary human timidity, once aggregated by technology, destroys the human race.

Léa Seydoux and George MacKay play star-crossed lovers Gabrielle Monnier and Louis Lewanski. In 1910 Gabrielle fudges the business of leaving her husband; tragedy strikes soon after. In 2014 an incel version of Louis would sooner stalk Gabrielle with a gun than try and talk to her. The consequences of their non-affair are not pretty. In 2044 Gabrielle and Louis stumble into each other on the way to “purification” — a psychosurgical procedure that heals past-life trauma and leaves people, if not without emotion, then certainly without the need for grand passion. By now the viewer is seriously beginning to wonder what will ever go right for this pair.

Somewhere in these twisty threaded timelines are the off-screen “events” of 2025 that brought matters to a head and convinced people to hand their governance over to machines. Why would humanity betray itself in such a manner? The blunt answer is: because we’re more in love with machines than with each other, and always have been.

In 1910 Gabrielle’s husband’s fortune is made from the manufacture of celluloid dolls. In 2014 — a point-perfect satire of runaway narcissism that owes much, stylistically, to the films of David Lynch — Gabrielle and Louis collide disastrously with warped images of themselves and each other, in an uncanny valley of cross-purposed conversations, predatory social media and manipulated video. In 2044 mere dolls and puppets have become fully conscious robots. One of these, played by Guslagie Malanda, even begins to fall in love with its “client” Gabrielle. Meanwhile Gabrielle, Louis and everyone else are undergoing psychosurgery in order to fit in with the AI’s brave new world. (Human unemployment is running at 67 per cent, and without purification’s calming effect it’s virtually impossible to get a worthwhile job.)

None of the Gabrielles and Louises are comfortable in their own skin. They take it in turns wanting to be something else, even if it means being something less. They see the best that they can be, and it pretty much literally scares the life out of them.

Given this is the point The Beast wants to put across, you have to admire the physical casting here. Each lead actor exhibits superb, machine-like self-control. Seydoux dies behind her eyes not once but many times in the course of this film; MacKay can go from trembling Adonis to store-front mannequin in about 2.1 seconds. And when full humanity is called for, both actors demonstrate extraordinary sensitivity: handy when you’re trying to distinguish between 1910’s unspoken passion, 2014’s unspeakable passion, and 2044’s passionless speech.

True, The Beast may be the most indirect critique of technology ever made. Heaven knows how it will fare at the box office. But any fool can make us afraid of robots. This intelligent, shocking and memorable film dares to focus on us.

Apocalypse Now Lite

Watching Gareth Edwards’s The Creator for New Scientist, 4 October 2023

A man loses his wife in the war with the robots. The machines didn’t kill her; human military ineptitude did. She was pregnant with his child. The man (played by John David Washington, whose heart-on-sleeve performance can’t quite pull this film out of the fire) has nothing to live for, until it turns out that his wife is alive and working with the robots to build a weapon. The weapon turns out to be a robot child (an irresistible performance by 7-year-old Madeleine Yuna Voyles) who possesses the ability to control machines at a distance. Man and weapon go in search of the man’s wife; they’re a family in wartime, trying to reconnect, and their reconnection will end the war and change everything.

The Creator’s great strength is its futuristic south-east Asian setting. (You know a film has problems when the reviewer launches straight in with the set design.) Police drones like mosquitos rumble overhead. Mantis-headed robots in red robes ring temple bells to warn of American air attack.

The Creator is Apocalypse Now Lite: the American aggressors have been traumatised by the nuking of Los Angeles — an atrocity they blame on their own AI. They’ve hurled their own robots into the garbage compactor (literally — a chilling up-scaled retread of that Star Wars scene). But South East Asia has had the temerity to fall in love with AI technology. They’re happy to be out-evolved! The way a unified, Blade-Runner-esque “New Asia” sees it, LA was an accident a long way away; people replace people all the time; and a robot is a person.

Hence: war. Hence: rural villages annihilated under blue laser light. Hence: missiles launched from space against temple complexes in mountain fastnesses. Hence: river towns reduced to matchwood under withering small-arms fire.

If nothing else, it’s spectacular.

The Creator is not so much a stand-alone sf blockbuster as a game of science fiction cinema bingo. Enormous battle tanks, as large as the villages they crush? Think Avatar. A very-low-orbit space station, large enough to be visible in the daytime? Think Oblivion. Child with special powers? Think Stranger Things. The Creator is a science fiction movie assembled from the tropes of other science fiction movies. If it is not as bankrupt as Ridley Scott’s Alien prequels Prometheus and Covenant (now those were bad movies), it’s because we’ve not seen south-east Asia cyborgised before (though readers of sf have been inhabiting such futures for over thirty years) and also because director Gareth Edwards once again proves that he can pull warm human performances from actors lumbered with any amount of gear, sweating away on the busiest, most cluttered and complex set.

This is not nothing. Nor, alas, is it enough.

As a film school graduate, Gareth Edwards won a short sci-fi film contest in London and got a once-in-a-lifetime chance to make a low-budget feature. Monsters (2010) managed to be a character piece, a love story and a monster movie all in one. On the back of it he got a shot at a Star Wars spin-off in 2014, which hijacked the entire franchise (everyone loved Rogue One, and its TV spin-off Andor is much admired; Disney’s own efforts at canon have mostly flopped).

The Creator should have been Edwards’s Star Wars. Instead, something horrible has happened in the editing. Vital lines are being delivered in scenes so truncated, it’s as though the actors are explaining the film directly to the audience. Every few minutes, tears run down Washington’s face, Voyles’s chin trembles, and we have no idea, none, what brought them to their latest crescendo — and ooh look, that goofy running bomb! That reminds me of Sky Captain and the World of Tomorrow…

The Creator is a fine spectacle. What we needed was a film that had something to say.

New flavours of intelligence

Reading about AI and governance for New Scientist, 13 September 2023

A sorcerer’s apprentice decides to use magic to help clean his master’s castle. The broom he enchants works well, dousing the floors with pails full of water. When the work is finished, the apprentice tries to stop the broom. Then, he tries to smash the broom. But the broom simply splits and regrows, working twice as hard as before, four times as hard, eight times as hard… until the rooms are awash and the apprentice all but drowns.

I wonder if Johann Wolfgang von Goethe’s 1797 poem sprang to mind as Mustafa Suleyman (co-founder of AI pioneers DeepMind, now CEO of Inflection AI) composed his new book, The Coming Wave? Or perhaps the shade of Robert Oppenheimer darkened Suleyman’s descriptions of artificial intelligence, and his own not insignificant role in its rise? “Decades after their invention,” he muses, “the architects of the atomic bomb could no more stop a nuclear war than Henry Ford could stop a car accident.”

Suleyman and his peers, having launched artificially intelligent systems upon the world, are right to tremble. At one point Suleyman compares AI to “an evolutionary burst like the Cambrian explosion, the most intense eruption of new species in the Earth’s history.”

The Coming Wave is mostly about the destabilising effects of new technologies. It describes a wildly asymmetric world where a single quantum computer can render the world’s entire encryption infrastructure redundant, and an AI mapping new drugs can be repurposed to look for toxins at the press of a return key.

Extreme futures beckon: would you prefer subjection under an authoritarian surveillance state, or radical self-reliance in a world where “an array of assistants… when asked to create a school, a hospital, or an army, can make it happen in a realistic timeframe”?

The predatory city states dominating this latter, neo-Renaissance future may seem attractive to some. Suleyman is not so sure: “Renaissance would be great,” he writes; “unceasing war with tomorrow’s military technology, not so much.”

A third future possibility is infocalypse, “where the information ecosystem grounding knowledge, trust, and social cohesion… falls apart.”

We’ll come back to this.

As we navigate between these futures, we should stay focused on current challenges. “I’ve gone to countless meetings trying to raise questions about synthetic media and misinformation, or privacy, or lethal autonomous weapons,” Suleyman complains, “and instead spent the time answering esoteric questions from otherwise intelligent people about consciousness, the Singularity, and other matters irrelevant to our world right now.”

Historian David Runciman makes an analogous point in The Handover, an impressive (and impressively concise) history of the limited liability company and the modern nation state. The emergence of both “artificial agents” at the end of the 18th Century was, Runciman argues, “the first Singularity”, when we tied our individual fates to two distinct but compatible autonomous computational systems.

“These bodies and institutions have a lot more in common with robots than we might think,” argues Runciman. Our political systems are already radically artificial and autonomous, and if we fail to appreciate this, we won’t understand what to do, or what to fear, when they acquire new flavours of intelligence.

Long-lived, sustainable, dynamic states — ones with a healthy balance between political power and civil society — won’t keel over under the onslaught of helpful AI, Runciman predicts. They’ll embrace it, and grow increasingly automated and disconnected from human affairs. How will we ever escape this burgeoning machine utopia?

Well, human freedom may still be a force to reckon with, according to Igor Tulchinsky. Writing with Christopher Mason in The Age of Prediction, Tulchinsky explores why the more predictable world ushered in by AI may not necessarily turn out to be a safer one. Humans evolved to take risks, and weird incentives emerge whenever predictability increases and risk appears to decline.

Tulchinsky, a quant who analyzes the data flows in financial markets, and Mason, a geneticist who maps dynamics across human and microbial genomes, make odd bedfellows. Mason, reasonably enough, welcomes any advance that makes medicine more reliable. Tulchinsky fears that perfect prediction in the markets would render humans as docile and demoralised as cattle. The authors’ spirited dialogue illuminates their detailed survey of what predictive technologies actually do, in theatres from warfare to recruitment, policing to politics.

Let’s say Tulchinsky and Mason are right, and that individual free will survives governance by all-seeing machines. It does not follow at all that human societies will survive their paternalistic attentions.

This was the unexpected sting in the tail delivered by Edward Geist in Deterrence under Uncertainty, a heavyweight but unexpectedly gripping examination of AI’s role in nuclear warfare.

Geist, steeped in the history and tradecraft of deception, reckons the smartest agent — be it meat or machine — can be rendered self-destructively stupid by an elegant bit of subterfuge. Fakery is so cheap, easy and effective that Geist envisions a future in which artificially intelligent “fog-of-war machines” create a world that favours neither belligerents nor conciliators, but deceivers: “those who seek to confound and mislead their rivals.”

In Geist’s hands, Suleyman’s “infocalypse” becomes a weapon, far cleaner and cheaper than any mere bomb. Imagine future wars fought entirely through mind games. In this world of shifting appearances, littered with bloody accidents and mutual misconstruals, people are persuaded that their adversary does not want to hurt them. Rather than living in fear of retaliation, they come to realise the adversary’s values are, and always have been, better than their own.

Depending on your interests, your politics, and your sensitivity to disinformation, you may well suspect that this particular infocalyptic future is already upon us.

And, says Geist, at his most Machiavellian (he is the most difficult of the writers here; also the most enjoyable): “would it not be much more preferable for one’s adversaries to decide one had been right all along, and welcome one’s triumph?”

 

The Art of Conjecturing

Reading Katy Börner’s Atlas of Forecasts: Modeling and mapping desirable futures for New Scientist, 18 August 2021

My leafy, fairly affluent corner of south London has a traffic congestion problem, and to solve it, there’s a plan to close certain roads. You can imagine the furore: the trunk of every kerbside tree sports a protest sign. How can shutting off roads improve traffic flows?

The German mathematician Dietrich Braess answered this one back in 1968, with a graph that kept track of travel times and densities for each road link, and distinguished between flows that are optimal for all cars, and flows optimised for each individual car.

On a Paradox of Traffic Planning is a fine example of how a mathematical model predicts and resolves a real-world problem.
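
The arithmetic is worth seeing on the page. Below is a minimal sketch of the standard teaching example of Braess’s result (the familiar 4,000-driver network used in textbooks, not Braess’s own figures):

```python
# Braess's paradox, textbook numbers: 4000 drivers travel from S to E.
# Route 1: S->A takes n/100 minutes (n = cars on that link), then A->E takes 45.
# Route 2: S->B takes 45 minutes, then B->E takes n/100.

drivers = 4000

# Without a shortcut, drivers split evenly and every trip takes 65 minutes.
split = drivers / 2
time_without_shortcut = split / 100 + 45                  # 20 + 45 = 65.0

# Open a "free" link A->B (0 minutes). For each individual driver, S->A->B->E
# now beats either old route, so everyone piles onto it - and everyone is slower.
time_with_shortcut = drivers / 100 + 0 + drivers / 100    # 40 + 0 + 40 = 80.0

print(time_without_shortcut, time_with_shortcut)          # 65.0 80.0
```

Close the “free” road and everyone drops back to the 65-minute equilibrium, which is exactly why shutting off certain roads can improve traffic flow.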

This and over 1,300 other models, maps and forecasts feature in the references to Katy Börner’s latest atlas, which is the third to be derived from Indiana University’s traveling exhibit Places & Spaces: Mapping Science.

Atlas of Science: Visualizing What We Know (2010) revealed the power of maps in science; Atlas of Knowledge: Anyone Can Map (2015) focused on visualisation. In her third and final foray, Börner is out to show how models, maps and forecasts inform decision-making in education, science, technology, and policymaking. It’s a well-structured, heavyweight argument, supported by descriptions of over 300 model applications.

Some entries, like Bernard H. Porter’s Map of Physics of 1939, earn their place purely for their beauty and the insights they offer. Mostly, though, Börner chooses models that were applied in practice and made a positive difference.

Her historical range is impressive. We begin at equations (did you know Newton’s law of universal gravitation has been applied to human migration patterns and international trade?) and move through the centuries, tipping a wink to Jacob Bernoulli’s “The Art of Conjecturing” of 1713 (which introduced probability theory) and James Clerk Maxwell’s 1868 paper “On Governors” (an early gesture at cybernetics) until we arrive at our current era of massive computation and ever-more complex model building.
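
The borrowing is quite literal: the “gravity models” of migration and trade keep Newton’s form and swap in populations (or GDPs) and the distance between places. The generic textbook version, my gloss rather than a formula quoted from Börner’s atlas, runs:

```latex
% Newton's law of universal gravitation, and its social-science borrowing,
% the generic "gravity model" of migration or trade between places i and j:
\[
  F = G\,\frac{m_1 m_2}{d^2}
  \qquad\longrightarrow\qquad
  T_{ij} = k\,\frac{P_i\,P_j}{D_{ij}^{\beta}}
\]
% T_ij : flow of migrants or trade between i and j
% P_i, P_j : their populations (or GDPs)
% D_ij : the distance between them; k and beta are fitted constants
```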

It’s here that interesting questions start to surface. To forecast the behaviour of complex systems, especially those which contain a human component, many current researchers reach for something called “agent-based modeling” (ABM) in which discrete autonomous agents interact with each other and with their common (digitally modelled) environment.

Heady stuff, no doubt. But, says Börner, “ABMs in general have very few analytical tools by which they can be studied, and often no backward sensitivity analysis can be performed because of the large number of parameters and dynamical rules involved.”

In other words, an ABM offers the researcher an exquisitely detailed forecast, but no clear way of knowing why the model has drawn the conclusions it has — a risky state of affairs, given that all its data is ultimately provided by eccentric, foible-ridden human beings.
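
The recipe itself is disarmingly simple, and that is rather the point. Here is a deliberately tiny, made-up sketch of the ABM pattern (the agents, the rule and the parameters are mine, for illustration only): simple local rules and a shared environment at the bottom, an emergent forecast at the top, and nothing in between that explains itself.

```python
import random

# A toy agent-based model: 100 agents, each holding a binary opinion,
# repeatedly sampling a few others and adopting the local majority view.
class Agent:
    def __init__(self):
        self.opinion = random.choice([0, 1])

    def step(self, neighbours):
        votes = sum(n.opinion for n in neighbours)
        if votes > len(neighbours) / 2:
            self.opinion = 1
        elif votes < len(neighbours) / 2:
            self.opinion = 0

agents = [Agent() for _ in range(100)]      # population size: a parameter
for tick in range(50):                      # model clock: another parameter
    for agent in agents:
        sample = random.sample(agents, 5)   # who you "meet": another parameter
        agent.step(sample)

# The output is a forecast (how much consensus emerged); the "why" is buried
# in the rules and parameters - exactly the opacity Börner warns about.
print(sum(a.opinion for a in agents), "of 100 agents now hold opinion 1")
```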

Börner’s sumptuous, detailed book tackles issues of error and bias head-on, but she left me tugging at a still bigger problem, represented by those irate protest signs smothering my neighbourhood.

If, over 50 years since the maths was published, reasonably wealthy, mostly well-educated people in comfortable surroundings have remained ignorant of how traffic flows work, what are the chances that the rest of us, industrious and preoccupied as we are, will ever really understand, or trust, all the many other models which increasingly dictate our civic life?

Börner argues that modelling data can counteract misinformation, tribalism, authoritarianism, demonization, and magical thinking.

I can’t for the life of me see how. Albert Einstein said, “Everything should be made as simple as possible, but no simpler.” What happens when a model reaches such complexity, only an expert can really understand it, or when even the expert can’t be entirely sure why the forecast is saying what it’s saying?

We have enough difficulty understanding climate forecasts, let alone explaining them. To apply these technologies to the civic realm raises a host of problems that have nothing to do with the technology, and everything to do with whether anyone will be listening.

“Intelligence is the wrong metaphor for what we’ve built”

Travelling From Apple to Anomaly, Trevor Paglen’s installation at the Barbican’s Curve gallery in London, for New Scientist, 9 October 2019

A couple of days before the opening of Trevor Paglen’s latest photographic installation, From “Apple” to “Anomaly”, a related project by the artist found itself splashed all over the papers.

ImageNet Roulette is an online collaboration with artificial intelligence researcher Kate Crawford at New York University. The website invites you to provide an image of your face. An algorithm will then compare your face against a database called ImageNet and assign you to one or two of its 21,000 categories.

ImageNet has become one of the most influential visual data sets in the fields of deep learning and AI. Its creators at Stanford, Princeton and other US universities harvested more than 14 million photographs from photo upload sites and other internet sources, then had them manually categorised by some 25,000 workers on Amazon’s crowdsourcing labour site Mechanical Turk. ImageNet is widely used as a training data set for image-based AI systems and is the secret sauce within many key applications, from phone filters to medical imaging, biometrics and autonomous cars.

According to ImageNet Roulette, I look like a “political scientist” and a “historian”. Both descriptions are sort-of-accurate and highly flattering. I was impressed. Mind you, I’m a white man. We are all over the internet, and the neural net had plenty of “my sort” to go on.

Spare a thought for Guardian journalist Julia Carrie Wong, however. According to ImageNet Roulette she was a “gook” and a “slant-eye”. In its attempt to identify Wong’s “sort”, ImageNet Roulette had innocently turned up some racist labels.

From “Apple” to “Anomaly” also takes ImageNet to task. Paglen took a selection of 35,000 photos from ImageNet’s archive, printed them out and stuck them to the wall of the Curve gallery at the Barbican in London in a 50-metre-long collage.

The entry point is images labelled “apple” – a category that, unsurprisingly, yields mostly pictures of apples – but the piece then works through increasingly abstract and controversial categories such as “sister” and “racist”. (Among the “racists” are Roger Moore and Barack Obama; my guess is that being over-represented in a data set carries its own set of risks.) Paglen explains: “We can all look at an apple and call it by its name. An apple is an apple. But what about a noun like ‘sister’, which is a relational concept? What might seem like a simple idea – categorising objects or naming pictures – quickly becomes a process of judgement.”

The final category in the show is “anomaly”. There is, of course, no such thing as an anomaly in nature. Anomalies are simply things that don’t conform to the classification systems we set up.

Halfway along the vast, gallery-spanning collage of photographs, the slew of predominantly natural and environmental images peters out, replaced by human faces. Discrete labels here and there indicate which of ImageNet’s categories are being illustrated. At one point of transition, the group labelled “bottom feeder” consists entirely of headshots of media figures – there isn’t one aquatic creature in evidence.

Scanning From “Apple” to “Anomaly” gives gallery-goers many such unexpected, disconcerting insights into the way language parcels up the world. Sometimes, these threaten to undermine the piece itself. Passing seamlessly from “android” to “minibar”, one might suppose that we are passing from category to category according to the logic of a visual algorithm. After all, a metal man and a minibar are not so dissimilar. At other times – crossing from “coffee” to “poultry”, for example – the division between categories is sharp, leaving me unsure how we moved from one to another, and whose decision it was. Was some algorithm making an obscure connection between hens and beans?

Well, no: the categories were chosen and arranged by Paglen. Only the choice of images within each category was made by a trained neural network.

This set me wondering whether the ImageNet data set wasn’t simply being used as a foil for Paglen’s sense of mischief. Why else would a cheerleader dominate the “saboteur” category? And do all “divorce lawyers” really wear red ties?

This is a problem for art built around artificial intelligence: it can be hard to tell where the algorithm ends and the artist begins. Mind you, you could say the same about the entire AI field. “A lot of the ideology around AI, and what people imagine it can do, has to do with that simple word ‘intelligence’,” says Paglen, a US artist now based in Berlin, whose interest in computer vision and surveillance culture sprang from his academic career as a geographer. “Intelligence is the wrong metaphor for what we’ve built, but it’s one we’ve inherited from the 1960s.”

Paglen fears the way the word intelligence implies some kind of superhuman agency and infallibility to what are in essence giant statistical engines. “This is terribly dangerous,” he says, “and also very convenient for people trying to raise money to build all sorts of shoddy, ill-advised applications with it.”

Asked what concerns him more, intelligent machines or the people who use them, Paglen answers: “I worry about the people who make money from them. Artificial intelligence is not about making computers smart. It’s about extracting value from data, from images, from patterns of life. The point is not seeing. The point is to make money or to amplify power.”

It is a point by no means lost on a creator of ImageNet itself, Fei-Fei Li at Stanford University in California, who, when I spoke to Paglen, was in London to celebrate ImageNet’s 10th birthday at the Photographers’ Gallery. Far from being the face of predatory surveillance capitalism, Li leads efforts to correct the malevolent biases lurking in her creation. Wong, incidentally, won’t get that racist slur again, following ImageNet’s announcement that it was removing more than half of the 1.2 million pictures of people in its collection.

Paglen is sympathetic to the challenge Li faces. “We’re not normally aware of the very narrow parameters that are built into computer vision and artificial intelligence systems,” he says. His job as artist-cum-investigative reporter is, he says, to help reveal the failures and biases and forms of politics built into such systems.

Some might feel that such work feeds an easy and unexamined public paranoia. Peter Skomoroch, former principal data scientist at LinkedIn, thinks so. He calls ImageNet Roulette junk science, and wrote on Twitter: “Intentionally building a broken demo that gives bad results for shock value reminds me of Edison’s war of the currents.”

Paglen believes, on the contrary, that we have a long way to go before we are paranoid enough about the world we are creating.

Fifty years ago it was very difficult for marketing companies to get information about what kind of television shows you watched, what kinds of drinking habits you might have or how you drove your car. Now giant companies are trying to extract value from that information. “I think,” says Paglen, “that we’re going through something akin to England and Wales’s Inclosure Acts, when what had been de facto public spaces were fenced off by the state and by capital.”

In Berlin: arctic AI, archeology, and robotic charades

Thanks (I assume) to those indefatigable Head of Zeus people, who are even now getting my anthology We Robots ready for publication, I’m invited to this year’s Berlin International Literature Festival, to take part in Automatic Writing 2.0, a special programme devoted to the literary impact of artificial intelligence.

Amidst other mischief, on Sunday 15 September at 12:30pm I’ll be reading from a new story, The Overcast.

Attack of the Vocaloids

Marrying music and mathematics for The Spectator, 3 August 2019

In 1871, the polymath and computer pioneer Charles Babbage died at his home in Marylebone. The encyclopaedias have it that a urinary tract infection got him. In truth, his final hours were spent in an agony brought on by the performances of itinerant hurdy-gurdy players parked underneath his window.

I know how he felt. My flat, too, is drowning in something not quite like music. While my teenage daughter mixes beats using programs like GarageBand and Logic Pro, her younger brother is bopping through Helix Crush and My Singing Monsters — apps that treat composition itself as a kind of e-sport.

It was ever thus: or was, once 18th-century Swiss watchmakers twigged that musical snuff-boxes might make them a few bob. And as each new mechanical innovation has emerged to ‘transform’ popular music, so the proponents of earlier technology have gnashed their teeth. This affords the rest of us a frisson of Schadenfreude.

‘We were musicians using computers,’ complained Pete Waterman, of the synthpop hit factory Stock Aitken Waterman in 2008, 20 years past his heyday. ‘Now it’s the whole story. It’s made people lazy. Technology has killed our industry.’ He was wrong, of course. Music and mechanics go together like beans on toast, the consequence of a closer-than-comfortable relation between music and mathematics. Today, a new, much more interesting kind of machine music is emerging to shape my children’s musical world, driven by non-linear algebra, statistics and generative adversarial networks — that slew of complex and specific mathematical tools we lump together under the modish (and inaccurate) label ‘artificial intelligence’.

Some now worry that artificially intelligent music-makers will take even more agency away from human players and listeners. I reckon they won’t, but I realise the burden of proof lies with me. Computers can already come up with pretty convincing melodies. Soon, argues venture capitalist Vinod Khosla, they will be analysing your brain, figuring out your harmonic likes and rhythmic dislikes, and composing songs made-to-measure. There are enough companies attempting to crack it; Popgun, Amper Music, Aiva, WaveAI, Amadeus Code, Humtap, HumOn, AI Music are all closing in on the composer-less composition.

The fear of tech taking over isn’t new. The Musicians’ Union tried to ban synths in the 1980s, anxious that string players would be put out of work. The big disruption came with the arrival of Kyoko Date. Released in 1996, she was the first seriously publicised attempt at a virtual pop idol. Humans still had to provide Date with her singing and speaking voice. But by 2004 Vocaloid software — developed by Kenmochi Hideki at the Pompeu Fabra University in Barcelona — enabled users to synthesise ‘singing’ by typing in lyrics and a melody. In 2016 Hatsune Miku, a Vocaloid-powered 16-year-old artificial girl with long, turquoise twintails, went, via hologram, on her first North American tour. It was a sell-out. Returning to her native Japan, she modelled Givenchy dresses for Vogue.

What kind of music were these idoru performing? Nothing good. While every other component of the music industry was galloping ahead into a brave new virtualised future — and into the arms of games-industry tech — the music itself seemed stuck in the early 1980s, which, significantly, was when music synthesizer builder Dave Smith first came up with MIDI.

MIDI is a way to represent musical notes in a form a computer can understand. MIDI is the reason discrete notes that fit in a grid dominate our contemporary musical experience. That maddening clockwork-regular beat that all new music obeys is a MIDI artefact: the software becomes unwieldy and glitch-prone if you dare vary the tempo of your project. MIDI is a prime example (and, for that reason, made much of by internet pioneer-turned-apostate Jaron Lanier) of how a computer can take a good idea and throw it back at you as a set of unbreakable commandments.
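
For the curious, here is what a MIDI note actually looks like on the wire; a minimal sketch of my own, not anything from the article. Pitch and loudness are squeezed into 7-bit integers, and in a MIDI file the timing between such messages is counted in discrete ticks per quarter note, which is where the grid comes from.

```python
# Two raw MIDI channel messages: "note on" and "note off".
# Status byte: 0x90 (note on) or 0x80 (note off), low nibble = channel 0-15.
# Pitch and velocity are 7-bit integers (0-127); middle C is 60.

def note_on(channel, pitch, velocity):
    return bytes([0x90 | channel, pitch & 0x7F, velocity & 0x7F])

def note_off(channel, pitch):
    return bytes([0x80 | channel, pitch & 0x7F, 0])

print(note_on(0, 60, 100).hex())   # '903c64' - middle C, fairly loud
print(note_off(0, 60).hex())       # '803c00'
```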

For all their advances, the powerful software engines wielded by the entertainment industry were, as recently as 2016, hardly more than mechanical players of musical dice games of the sort popular throughout western Europe in the 18th century.

The original games used dice randomly to generate music from precomposed elements. They came with wonderful titles, too — witness C.P.E. Bach’s A method for making six bars of double counterpoint at the octave without knowing the rules (1758). One 1792 game produced by Mozart’s publisher Nikolaus Simrock in Berlin (it may have been Mozart’s work, but we’re not sure) used dice rolls randomly to select beats, producing a potential 46 quadrillion waltzes.
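
The mechanism fits in a dozen lines of code. A sketch, with a randomly filled lookup table standing in for Simrock’s printed tables of precomposed bars:

```python
import random

BARS = 16          # a sixteen-bar waltz
OUTCOMES = 11      # two dice give totals 2..12: eleven possibilities per bar

# Placeholder table: table[bar][dice_total - 2] -> index of a precomposed bar
# (the real game supplies the numbered bars of music themselves).
table = [[random.randrange(176) for _ in range(OUTCOMES)] for _ in range(BARS)]

def roll_waltz():
    waltz = []
    for bar in range(BARS):
        total = random.randint(1, 6) + random.randint(1, 6)
        waltz.append(table[bar][total - 2])
    return waltz

print(roll_waltz())                # one waltz, as a list of bar numbers
print(OUTCOMES ** BARS)            # 11**16, roughly 4.6e16: the "46 quadrillion"
```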

All these games relied on that unassailable, but frequently disregarded, truth: that all music is algorithmic. If music is recognisable as music, then it exhibits a small number of formal structures and aspects that appear in every culture — repetition, expansion, hierarchical nesting, the production of self-similar relations. It’s as Igor Stravinsky said: ‘Musical form is close to mathematics — not perhaps to mathematics itself, but certainly to something like mathematical thinking and relationship.’

As both a musician and a mathematician, Marcus du Sautoy, whose book The Creativity Code was published this year, stands to lose a lot if a new breed of ‘artificially intelligent’ machines live up to their name and start doing his mathematical and musical thinking for him. But the reality of artificial creativity, he has found, is rather more nuanced.

One project that especially engages du Sautoy’s interest is Continuator by François Pachet, a composer, computer scientist and, as of 2017, director of the Spotify Creator Technology Research Lab. Continuator is a musical instrument that learns and interactively plays with musicians in real time. Du Sautoy has seen the system in action: ‘One musician said, I recognise that world, that is my world, but the machine’s doing things that I’ve never done before and I never realised were part of my sound world until now.’

The ability of machine intelligences to reveal what we didn’t know we knew is one of the strangest and most exciting developments du Sautoy detects in AI. ‘I compare it to crouching in the corner of a room because that’s where the light is,’ he explains. ‘That’s where we are on our own. But the room we inhabit is huge, and AI might actually help to illuminate parts of it that haven’t been explored before.’

Du Sautoy dismisses the idea that this new kind of collaborative music will be ‘mechanical’. Behaving mechanically, he points out, isn’t the exclusive preserve of machines. ‘People start behaving like machines when they get stuck in particular ways of doing things. My hope is that the AI might actually stop us behaving like machines, by showing us new areas to explore.’

Du Sautoy is further encouraged by how those much-hyped ‘AIs’ actually work. And let’s be clear: they do not expand our horizons by thinking better than we do. Nor, in fact, do they think at all. They churn.

‘One of the troubles with machine-learning is that you need huge swaths of data,’ he explains. ‘Machine image recognition is hugely impressive, because there are a lot of images on the internet to learn from. The digital environment is full of cats; consequently, machines have got really good at spotting cats. So one thing which might protect great art is the paucity of data. Thanks to his interminable chorales, Bach provides a toe-hold for machine imitators. But there may simply not be enough Bartok or Brahms or Beethoven for them to learn on.’

There is, of course, the possibility that one day the machines will start learning from each other. Channelling Marshall McLuhan, the curator Hans Ulrich Obrist has argued that art is an early-warning system for the moment true machine consciousness arises (if it ever does arise).

Du Sautoy agrees. ‘I think it will be in the world of art, rather than in the world of technology, that we’ll see machines first express themselves in a way that is original and interesting,’ he says. ‘When a machine acquires an internal world, it’ll have something to say for itself. Then music is going to be a very important way for us to understand what’s going on in there.’