Apocalypse Now Lite

Watching Gareth Edwards’s The Creator for New Scientist, 4 October 2023

A man loses his wife in the war with the robots. The machines didn’t kill her; human military ineptitude did. She was pregnant with his child. The man (played by John David Washington, whose heart-on-sleeve performance can’t quite pull this film out of the fire) has nothing to live for, until it turns out that his wife is alive and working with the robots to build a weapon. The weapon turns out to be a robot child (an irresistible performance by 7-year-old Madeleine Yuna Voyles) who possesses the ability to control machines at a distance. Man and weapon go in search of the man’s wife; they’re a family in wartime, trying to reconnect, and their reconnection will end the war and change everything.

The Creator’s great strength is its futuristic south-east Asian setting. (You know a film has problems when the reviewer launches straight in with the set design.) Police drones like mosquitos rumble overhead. Mantis-headed robots in red robes ring temple bells to warn of American air attack.

The Creator is Apocalypse Now Lite: the American aggressors have been traumatised by the nuking of Los Angeles — an atrocity they blame on their own AI. They’ve hurled their own robots into the garbage compactor (literally — a chilling up-scaled retread of that Star Wars scene). But South East Asia has had the temerity to fall in love with AI technology. They’re happy to be out-evolved! The way a unified, Blade-Runner-esque “New Asia” sees it, LA was an accident a long way away; people replace people all the time; and a robot is a person.

Hence: war. Hence: rural villages annihilated under blue laser light. Hence: missiles launched from space against temple complexes in mountain fastnesses. Hence: river towns reduced to matchwood under withering small-arms fire.

If nothing else, it’s spectacular.

The Creator is not so much a stand-alone sf blockbuster as a game of science fiction cinema bingo. Enormous battle tanks, as large as the villages they crush? think Avatar. A very-low-orbit space station, large enough to be visible in the daytime? think Oblivion. Child with special powers? think Stranger Things. The Creator is a science fiction movie assembled from the tropes of other science fiction movies. If it is not as bankrupt as Ridley Scott’s Alien prequels Prometheus and Covenant (now those were bad movies), it’s because we’ve not seen south-east Asia cyborgised before (though readers of sf have been inhabiting such futures for over thirty years) and also because director Gareth Edwards once again proves that he can pull warm human performances from actors lumbered with any amount of gear, sweating away on the busiest, most cluttered and complex set.

This is not nothing. Nor, alas, is it enough.

As a film school graduate, Gareth Edwards won a short sci-fi film contest in London and got a once-in-a-lifetime chance to make a low-budget feature. Monsters (2010) managed to be a character piece, a love story and a monster movie all in one. On the back of it he got a shot at a Star Wars spin-off in 2014, which hijacked the entire franchise (everyone loved Rogue One, and its TV spin-off Andor is much admired; Disney’s own efforts at canon have mostly flopped).

The Creator should have been Edwards’s Star Wars. Instead, something horrible has happened in the editing. Vital lines are delivered in scenes so truncated that it’s as though the actors are explaining the film directly to the audience. Every few minutes, tears run down Washington’s face, Voyles’s chin trembles, and we have no idea, none, what brought them to their latest crescendo — and ooh look, that goofy running bomb! That reminds me of Sky Captain and the World of Tomorrow…

The Creator is a fine spectacle. What we needed was a film that had something to say.

New flavours of intelligence

Reading about AI and governance for New Scientist, 13 September 2023

A sorcerer’s apprentice decides to use magic to help clean his master’s castle. The broom he enchants works well, dousing the floors with pails full of water. When the work is finished, the apprentice tries to stop the broom. Then, he tries to smash the broom. But the broom simply splits and regrows, working twice as hard as before, four times as hard, eight times as hard… until the rooms are awash and the apprentice all but drowns.

I wonder if Johann Wolfgang von Goethe’s 1797 poem sprang to mind as Mustafa Suleyman (co-founder of AI pioneers DeepMind, now CEO of Inflection AI) composed his new book, The Coming Wave? Or perhaps the shade of Robert Oppenheimer darkened Suleyman’s descriptions of artificial intelligence, and his own not insignificant role in its rise? “Decades after their invention,” he muses, “the architects of the atomic bomb could no more stop a nuclear war than Henry Ford could stop a car accident.”

Suleyman and his peers, having launched artificially intelligent systems upon the world, are right to tremble. At one point Suleyman compares AI to “an evolutionary burst like the Cambrian explosion, the most intense eruption of new species in the Earth’s history.”

The Coming Wave is mostly about the destabilising effects of new technologies. It describes a wildly asymmetric world where a single quantum computer can render the world’s entire encryption infrastructure redundant, and an AI mapping new drugs can be repurposed to look for toxins at the press of a return key.

Extreme futures beckon: would you prefer subjection under an authoritarian surveillance state, or radical self-reliance in a world where “an array of assistants… when asked to create a school, a hospital, or an army, can make it happen in a realistic timeframe”?

The predatory city states dominating this latter, neo-Renaissance future may seem attractive to some. Suleyman is not so sure: “Renaissance would be great,” he writes; “unceasing war with tomorrow’s military technology, not so much.”

A third future possibility is infocalypse, “where the information ecosystem grounding knowledge, trust, and social cohesion… falls apart.”

We’ll come back to this.

As we navigate between these futures, we should stay focused on current challenges. “I’ve gone to countless meetings trying to raise questions about synthetic media and misinformation, or privacy, or lethal autonomous weapons,” Suleyman complains, “and instead spent the time answering esoteric questions from otherwise intelligent people about consciousness, the Singularity, and other matters irrelevant to our world right now.”

Historian David Runciman makes an analogous point in The Handover, an impressive (and impressively concise) history of the limited liability company and the modern nation state. The emergence of both “artificial agents” at the end of the 18th century was, Runciman argues, “the first Singularity”, when we tied our individual fates to two distinct but compatible autonomous computational systems.

“These bodies and institutions have a lot more in common with robots than we might think,” argues Runciman. Our political systems are already radically artificial and autonomous, and if we fail to appreciate this, we won’t understand what to do, or what to fear, when they acquire new flavours of intelligence.

Long-lived, sustainable, dynamic states — ones with a healthy balance between political power and civil society — won’t keel over under the onslaught of helpful AI, Runciman predicts. They’ll embrace it, and grow increasingly automated and disconnected from human affairs. How will we ever escape this burgeoning machine utopia?

Well, human freedom may still be a force to reckon with, according to Igor Tulchinsky. Writing with Christopher Mason in The Age of Prediction, Tulchinsky explores why the more predictable world ushered in by AI may not necessarily turn out to be a safer one. Humans evolved to take risks, and weird incentives emerge whenever predictability increases and risk appears to decline.

Tulchinsky, a quant who analyses the data flows in financial markets, and Mason, a geneticist who maps dynamics across human and microbial genomes, make odd bedfellows. Mason, reasonably enough, welcomes any advance that makes medicine more reliable. Tulchinsky fears lest perfect prediction in the markets renders humans as docile and demoralised as cattle. The authors’ spirited dialogue illuminates their detailed survey of what predictive technologies actually do, in theatres from warfare to recruitment, policing to politics.

Let’s say Tulchinsky and Mason are right, and that individual free will survives governance by all-seeing machines. It does not follow at all that human societies will survive their paternalistic attentions.

This was the unexpected sting in the tail delivered by Edward Geist in Deterrence under Uncertainty, a heavyweight but unexpectedly gripping examination of AI’s role in nuclear warfare.

Geist, steeped in the history and tradecraft of deception, reckons the smartest agent — be it meat or machine — can be rendered self-destructively stupid by an elegant bit of subterfuge. Fakery is so cheap, easy and effective that Geist envisions a future in which artificially intelligent “fog-of-war machines” create a world that favours neither belligerents nor conciliators, but deceivers: “those who seek to confound and mislead their rivals.”

In Geist’s hands, Suleyman’s “infocalypse” becomes a weapon, far cleaner and cheaper than any mere bomb. Imagine future wars fought entirely through mind games. In this world of shifting appearances, littered with bloody accidents and mutual misconstruals, people are persuaded that their adversary does not want to hurt them. Rather than living in fear of retaliation, they come to realise the adversary’s values are, and always have been, better than their own.
Depending on your interests, your politics, and your sensitivity to disinformation, you may well suspect that this particular infocalyptic future is already upon us.

And, says Geist, at his most Machiavellian (he is the most difficult of the writers here; also the most enjoyable): “would it not be much more preferable for one’s adversaries to decide one had been right all along, and welcome one’s triumph?”

 

The Art of Conjecturing

Reading Katy Börner’s Atlas of Forecasts: Modeling and mapping desirable futures for New Scientist, 18 August 2021

My leafy, fairly affluent corner of south London has a traffic congestion problem, and to solve it, there’s a plan to close certain roads. You can imagine the furore: the trunk of every kerbside tree sports a protest sign. How can shutting off roads improve traffic flows?

The German mathematician Dietrich Braess answered this one back in 1968, with a graph that kept track of travel times and densities for each road link, and distinguished between flows that are optimal for all cars, and flows optimised for each individual car.

On a Paradox of Traffic Planning is a fine example of how a mathematical model predicts and resolves a real-world problem.
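Braess’s result is easy to verify with toy numbers. Here is a minimal sketch in Python using the standard textbook network (the figures are illustrative, not drawn from Braess’s 1968 paper): 4,000 drivers choose between two routes, and opening a free shortcut leaves every one of them worse off.

```python
# Classic Braess network: 4,000 drivers travel from S to E.
# Link S->A takes T/100 minutes when T drivers use it; link A->E takes a flat 45.
# Link S->B takes a flat 45; link B->E takes T/100.
N = 4000

# Without the shortcut, drivers split evenly between the two symmetric routes.
per_route = N // 2
time_without = per_route / 100 + 45   # 20 + 45 = 65 minutes each

# Add a zero-cost shortcut A->B. S->A (at most 40 min) now always beats S->B (45),
# and B->E always beats A->E, so at equilibrium everyone takes S->A->B->E.
time_with = N / 100 + 0 + N / 100     # 40 + 0 + 40 = 80 minutes each

print(time_without)  # 65.0
print(time_with)     # 80.0
```

Closing the shortcut returns the network to the 65-minute equilibrium — which is precisely the logic behind closing roads to improve congestion.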

This and over 1,300 other models, maps and forecasts feature in the references to Katy Börner’s latest atlas, which is the third to be derived from Indiana University’s travelling exhibit Places & Spaces: Mapping Science.

Atlas of Science: Visualizing What We Know (2010) revealed the power of maps in science; Atlas of Knowledge: Anyone Can Map (2015) focused on visualisation. In her third and final foray, Börner is out to show how models, maps and forecasts inform decision-making in education, science, technology, and policymaking. It’s a well-structured, heavyweight argument, supported by descriptions of over 300 model applications.

Some entries, like Bernard H. Porter’s Map of Physics of 1939, earn their place purely through their beauty and the insights they offer. Mostly, though, Börner chooses models that were applied in practice and made a positive difference.

Her historical range is impressive. We begin at equations (did you know Newton’s law of universal gravitation has been applied to human migration patterns and international trade?) and move through the centuries, tipping a wink to Jacob Bernoulli’s “The Art of Conjecturing” of 1713 (which introduced probability theory) and James Clerk Maxwell’s 1868 paper “On Governors” (an early gesture at cybernetics) until we arrive at our current era of massive computation and ever-more complex model building.

It’s here that interesting questions start to surface. To forecast the behaviour of complex systems, especially those which contain a human component, many current researchers reach for something called “agent-based modeling” (ABM) in which discrete autonomous agents interact with each other and with their common (digitally modelled) environment.

Heady stuff, no doubt. But, says Börner, “ABMs in general have very few analytical tools by which they can be studied, and often no backward sensitivity analysis can be performed because of the large number of parameters and dynamical rules involved.”

In other words, an ABM offers the researcher an exquisitely detailed forecast, but no clear way of knowing why the model has drawn the conclusions it has — a risky state of affairs, given that all its data is ultimately provided by eccentric, foible-ridden human beings.

Börner’s sumptuous, detailed book tackles issues of error and bias head-on, but she left me tugging at a still bigger problem, represented by those irate protest signs smothering my neighbourhood.

If, more than 50 years after the maths was published, reasonably wealthy, mostly well-educated people in comfortable surroundings have remained ignorant of how traffic flows work, what are the chances that the rest of us, industrious and preoccupied as we are, will ever really understand, or trust, all the many other models which increasingly dictate our civic life?

Börner argues that modelling data can counteract misinformation, tribalism, authoritarianism, demonisation, and magical thinking.

I can’t for the life of me see how. Albert Einstein said, “Everything should be made as simple as possible, but no simpler.” What happens when a model reaches such complexity that only an expert can really understand it, or when even the expert can’t be entirely sure why the forecast is saying what it’s saying?

We have enough difficulty understanding climate forecasts, let alone explaining them. To apply these technologies to the civic realm raises a host of problems that are nothing to do with the technology, and everything to do with whether anyone will be listening.

“Intelligence is the wrong metaphor for what we’ve built”

Travelling From Apple to Anomaly, Trevor Paglen’s installation at the Barbican’s Curve gallery in London, for New Scientist, 9 October 2019

A COUPLE of days before the opening of Trevor Paglen’s latest photographic installation, From “Apple” to “Anomaly”, a related project by the artist found itself splashed all over the papers.

ImageNet Roulette is an online collaboration with artificial intelligence researcher Kate Crawford at New York University. The website invites you to provide an image of your face. An algorithm will then compare your face against a database called ImageNet and assign you to one or two of its 21,000 categories.

ImageNet has become one of the most influential visual data sets in the fields of deep learning and AI. Its creators at Stanford, Princeton and other US universities harvested more than 14 million photographs from photo upload sites and other internet sources, then had them manually categorised by some 25,000 workers on Amazon’s crowdsourcing labour site Mechanical Turk. ImageNet is widely used as a training data set for image-based AI systems and is the secret sauce within many key applications, from phone filters to medical imaging, biometrics and autonomous cars.

According to ImageNet Roulette, I look like a “political scientist” and a “historian”. Both descriptions are sort-of-accurate and highly flattering. I was impressed. Mind you, I’m a white man. We are all over the internet, and the neural net had plenty of “my sort” to go on.

Spare a thought for Guardian journalist Julia Carrie Wong, however. According to ImageNet Roulette she was a “gook” and a “slant-eye”. In its attempt to identify Wong’s “sort”, ImageNet Roulette had innocently turned up some racist labels.

From “Apple” to “Anomaly” also takes ImageNet to task. Paglen took a selection of 35,000 photos from ImageNet’s archive, printed them out and stuck them to the wall of the Curve gallery at the Barbican in London in a 50-metre-long collage.

The entry point is images labelled “apple” – a category that, unsurprisingly, yields mostly pictures of apples – but the piece then works through increasingly abstract and controversial categories such as “sister” and “racist”. (Among the “racists” are Roger Moore and Barack Obama; my guess is that being over-represented in a data set carries its own set of risks.) Paglen explains: “We can all look at an apple and call it by its name. An apple is an apple. But what about a noun like ‘sister’, which is a relational concept? What might seem like a simple idea – categorising objects or naming pictures – quickly becomes a process of judgement.”

The final category in the show is “anomaly”. There is, of course, no such thing as an anomaly in nature. Anomalies are simply things that don’t conform to the classification systems we set up.

Halfway along the vast, gallery-spanning collage of photographs, the slew of predominantly natural and environmental images peters out, replaced by human faces. Discreet labels here and there indicate which of ImageNet’s categories are being illustrated. At one point of transition, the group labelled “bottom feeder” consists entirely of headshots of media figures – there isn’t one aquatic creature in evidence.

Scanning From “Apple” to “Anomaly” gives gallery-goers many such unexpected, disconcerting insights into the way language parcels up the world. Sometimes, these threaten to undermine the piece itself. Passing seamlessly from “android” to “minibar”, one might suppose that we are passing from category to category according to the logic of a visual algorithm. After all, a metal man and a minibar are not so dissimilar. At other times – crossing from “coffee” to “poultry”, for example – the division between categories is sharp, leaving me unsure how we moved from one to another, and whose decision it was. Was some algorithm making an obscure connection between hens and beans?

Well, no: the categories were chosen and arranged by Paglen. Only the choice of images within each category was made by a trained neural network.

This set me wondering whether the ImageNet data set wasn’t simply being used as a foil for Paglen’s sense of mischief. Why else would a cheerleader dominate the “saboteur” category? And do all “divorce lawyers” really wear red ties?

This is a problem for art built around artificial intelligence: it can be hard to tell where the algorithm ends and the artist begins. Mind you, you could say the same about the entire AI field. “A lot of the ideology around AI, and what people imagine it can do, has to do with that simple word ‘intelligence’,” says Paglen, a US artist now based in Berlin, whose interest in computer vision and surveillance culture sprang from his academic career as a geographer. “Intelligence is the wrong metaphor for what we’ve built, but it’s one we’ve inherited from the 1960s.”

Paglen fears the way the word intelligence implies some kind of superhuman agency and infallibility to what are in essence giant statistical engines. “This is terribly dangerous,” he says, “and also very convenient for people trying to raise money to build all sorts of shoddy, ill-advised applications with it.”

Asked what concerns him more, intelligent machines or the people who use them, Paglen answers: “I worry about the people who make money from them. Artificial intelligence is not about making computers smart. It’s about extracting value from data, from images, from patterns of life. The point is not seeing. The point is to make money or to amplify power.”

It is a point by no means lost on a creator of ImageNet itself, Fei-Fei Li at Stanford University in California, who, when I spoke to Paglen, was in London to celebrate ImageNet’s 10th birthday at the Photographers’ Gallery. Far from being the face of predatory surveillance capitalism, Li leads efforts to correct the malevolent biases lurking in her creation. Wong, incidentally, won’t get that racist slur again, following ImageNet’s announcement that it was removing more than half of the 1.2 million pictures of people in its collection.

Paglen is sympathetic to the challenge Li faces. “We’re not normally aware of the very narrow parameters that are built into computer vision and artificial intelligence systems,” he says. His job as artist-cum-investigative reporter is, he says, to help reveal the failures and biases and forms of politics built into such systems.

Some might feel that such work feeds an easy and unexamined public paranoia. Peter Skomoroch, former principal data scientist at LinkedIn, thinks so. He calls ImageNet Roulette junk science, and wrote on Twitter: “Intentionally building a broken demo that gives bad results for shock value reminds me of Edison’s war of the currents.”

Paglen believes, on the contrary, that we have a long way to go before we are paranoid enough about the world we are creating.

Fifty years ago it was very difficult for marketing companies to get information about what kind of television shows you watched, what kinds of drinking habits you might have or how you drove your car. Now giant companies are trying to extract value from that information. “I think,” says Paglen, “that we’re going through something akin to England and Wales’s Inclosure Acts, when what had been de facto public spaces were fenced off by the state and by capital.”

In Berlin: arctic AI, archeology, and robotic charades

Thanks (I assume) to those indefatigable Head of Zeus people, who are even now getting my anthology We Robots ready for publication, I’m invited to this year’s Berlin International Literature Festival, to take part in Automatic Writing 2.0, a special programme devoted to the literary impact of artificial intelligence.

Amidst other mischief, on Sunday 15 September at 12:30pm I’ll be reading from a new story, The Overcast.

Attack of the Vocaloids

Marrying music and mathematics for The Spectator, 3 August 2019

In 1871, the polymath and computer pioneer Charles Babbage died at his home in Marylebone. The encyclopaedias have it that a urinary tract infection got him. In truth, his final hours were spent in an agony brought on by the performances of itinerant hurdy-gurdy players parked underneath his window.

I know how he felt. My flat, too, is drowning in something not quite like music. While my teenage daughter mixes beats using programs like GarageBand and Logic Pro, her younger brother is bopping through Helix Crush and My Singing Monsters — apps that treat composition itself as a kind of e-sport.

It was ever thus: or was, once 18th-century Swiss watchmakers twigged that musical snuff-boxes might make them a few bob. And as each new mechanical innovation has emerged to ‘transform’ popular music, so the proponents of earlier technology have gnashed their teeth. This affords the rest of us a frisson of Schadenfreude.

‘We were musicians using computers,’ complained Pete Waterman, of the synthpop hit factory Stock Aitken Waterman in 2008, 20 years past his heyday. ‘Now it’s the whole story. It’s made people lazy. Technology has killed our industry.’ He was wrong, of course. Music and mechanics go together like beans on toast, the consequence of a closer-than-comfortable relation between music and mathematics. Today, a new, much more interesting kind of machine music is emerging to shape my children’s musical world, driven by non-linear algebra, statistics and generative adversarial networks — that slew of complex and specific mathematical tools we lump together under the modish (and inaccurate) label ‘artificial intelligence’.

Some now worry that artificially intelligent music-makers will take even more agency away from human players and listeners. I reckon they won’t, but I realise the burden of proof lies with me. Computers can already come up with pretty convincing melodies. Soon, argues venture capitalist Vinod Khosla, they will be analysing your brain, figuring out your harmonic likes and rhythmic dislikes, and composing songs made-to-measure. There are enough companies attempting to crack it; Popgun, Amper Music, Aiva, WaveAI, Amadeus Code, Humtap, HumOn, AI Music are all closing in on the composer-less composition.

The fear of tech taking over isn’t new. The Musicians’ Union tried to ban synths in the 1980s, anxious that string players would be put out of work. The big disruption came with the arrival of Kyoko Date. Released in 1996, she was the first seriously publicised attempt at a virtual pop idol. Humans still had to provide Date with her singing and speaking voice. But by 2004 Vocaloid software — developed by Kenmochi Hideki at the Pompeu Fabra University in Barcelona — enabled users to synthesise ‘singing’ by typing in lyrics and a melody. In 2016 Hatsune Miku, a Vocaloid-powered 16-year-old artificial girl with long, turquoise twintails, went, via hologram, on her first North American tour. It was a sell-out. Returning to her native Japan, she modelled Givenchy dresses for Vogue.

What kind of music were these idoru performing? Nothing good. While every other component of the music industry was galloping ahead into a brave new virtualised future — and into the arms of games-industry tech — the music itself seemed stuck in the early 1980s, which, significantly, was when music synthesiser builder Dave Smith had first come up with MIDI.

MIDI is a way to represent musical notes in a form a computer can understand. MIDI is the reason discrete notes that fit in a grid dominate our contemporary musical experience. That maddening clockwork-regular beat that all new music obeys is a MIDI artefact: the software becomes unwieldy and glitch-prone if you dare vary the tempo of your project. MIDI is a prime example (and, for that reason, made much of by internet pioneer-turned-apostate Jaron Lanier) of how a computer can take a good idea and throw it back at you as a set of unbreakable commandments.
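To make that concrete: a MIDI note is just a few bytes plus a position on a fixed tick grid. A minimal, hypothetical Python sketch (the 480-tick resolution is a common sequencer default, not a universal constant):

```python
# A MIDI note-on message is three bytes: status (0x90 = note-on, channel 0),
# pitch (60 = middle C), and velocity (64 = moderate loudness).
note_on = bytes([0x90, 60, 64])

# Timing lives on a discrete grid of "ticks per quarter note" -- the grid
# that nudges everything toward that clockwork-regular beat.
TICKS_PER_BEAT = 480
eighth_note = TICKS_PER_BEAT // 2

print(note_on.hex())  # 903c40
print(eighth_note)    # 240
```

Everything expressible in MIDI is a note of this kind at a tick of that kind; anything in between the grid lines simply has no address.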

For all their advances, the powerful software engines wielded by the entertainment industry were, as recently as 2016, hardly more than mechanical players of musical dice games of the sort popular throughout western Europe in the 18th century.

The original games used dice randomly to generate music from precomposed elements. They came with wonderful titles, too — witness C.P.E. Bach’s A method for making six bars of double counterpoint at the octave without knowing the rules (1758). One 1792 game produced by Mozart’s publisher Nikolaus Simrock in Berlin (it may have been Mozart’s work, but we’re not sure) used dice rolls randomly to select bars, producing a potential 46 quadrillion waltzes.
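The quadrillions claim is simple combinatorics: two dice give eleven possible outcomes (sums 2 to 12) for each of the waltz’s 16 bars, as the Simrock game is commonly described. A quick check in Python:

```python
import random

# Eleven possible precomposed measures (dice sums 2..12) for each of 16 bars.
variants = 11 ** 16
print(variants)  # 45949729863572161 -- roughly 46 quadrillion

# Rolling one playable waltz: a dice sum (i.e. a measure choice) per bar.
waltz = [random.randint(1, 6) + random.randint(1, 6) for _ in range(16)]
print(waltz)  # sixteen sums, each between 2 and 12
```

(Not all those waltzes sound distinct, of course — the precomposed bars were written to be interchangeable.)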

All these games relied on that unassailable, but frequently disregarded truth, that all music is algorithmic. If music is recognisable as music, then it exhibits a small number of formal structures and aspects that appear in every culture — repetition, expansion, hierarchical nesting, the production of self-similar relations. It’s as Igor Stravinsky said: ‘Musical form is close to mathematics — not perhaps to mathematics itself, but certainly to something like mathematical thinking and relationship.’

As both a musician and a mathematician, Marcus du Sautoy, whose book The Creativity Code was published this year, stands to lose a lot if a new breed of ‘artificially intelligent’ machines live up to their name and start doing his mathematical and musical thinking for him. But the reality of artificial creativity, he has found, is rather more nuanced.

One project that especially engages du Sautoy’s interest is Continuator by François Pachet, a composer, computer scientist and, as of 2017, director of the Spotify Creator Technology Research Lab. Continuator is a musical instrument that learns and interactively plays with musicians in real time. Du Sautoy has seen the system in action: ‘One musician said, I recognise that world, that is my world, but the machine’s doing things that I’ve never done before and I never realised were part of my sound world until now.’

The ability of machine intelligences to reveal what we didn’t know we knew is one of the strangest and most exciting developments du Sautoy detects in AI. ‘I compare it to crouching in the corner of a room because that’s where the light is,’ he explains. ‘That’s where we are on our own. But the room we inhabit is huge, and AI might actually help to illuminate parts of it that haven’t been explored before.’

Du Sautoy dismisses the idea that this new kind of collaborative music will be ‘mechanical’. Behaving mechanically, he points out, isn’t the exclusive preserve of machines. ‘People start behaving like machines when they get stuck in particular ways of doing things. My hope is that the AI might actually stop us behaving like machines, by showing us new areas to explore.’

Du Sautoy is further encouraged by how those much-hyped ‘AIs’ actually work. And let’s be clear: they do not expand our horizons by thinking better than we do. Nor, in fact, do they think at all. They churn.

‘One of the troubles with machine-learning is that you need huge swaths of data,’ he explains. ‘Machine image recognition is hugely impressive, because there are a lot of images on the internet to learn from. The digital environment is full of cats; consequently, machines have got really good at spotting cats. So one thing which might protect great art is the paucity of data. Thanks to his interminable chorales, Bach provides a toe-hold for machine imitators. But there may simply not be enough Bartok or Brahms or Beethoven for them to learn on.’

There is, of course, the possibility that one day the machines will start learning from each other. Channelling Marshall McLuhan, the curator Hans Ulrich Obrist has argued that art is an early-warning system for the moment true machine consciousness arises (if it ever does arise).

Du Sautoy agrees. ‘I think it will be in the world of art, rather than in the world of technology, that we’ll see machines first express themselves in a way that is original and interesting,’ he says. ‘When a machine acquires an internal world, it’ll have something to say for itself. Then music is going to be a very important way for us to understand what’s going on in there.’

In the realm of mind games

By the end of the show, I was left less impressed by artificial intelligence and more depressed that it had reduced my human worth to base matter. Had it, though? Or had it simply made me aware of how much I wanted to be base matter, shaped into being by something greater than myself? I was reminded of something that Benjamin Bratton, author of the cyber-bible The Stack, said in a recent lecture: “We seem only to be able to approach AI theologically.”

Visiting AI: More Than Human at London’s Barbican Centre for the Financial Times, 15 May 2019.

Ushering in the End Times at London’s Barbican Hall


Mark Allan / Barbican

Listening to the London Contemporary Orchestra for New Scientist, 1 November 2018

On All Hallows’ Eve this year, at London’s Barbican Hall, the London Contemporary Orchestra, under the baton of their co-artistic director Robert Ames, managed with two symphonic pieces to drown the world and set it ablaze in the space of a single evening.

Giacinto Scelsi’s portentously titled Uaxuctum: The legend of the Maya City, destroyed by the Maya people themselves for religious reasons, evoked the mysterious and violent collapse of that once thriving civilisation; the second piece of the evening, composer and climate activist John Luther Adams’s Become Ocean, looked to the future, the rise of the world’s oceans, and good riddance to the lot of us.

Lost Worlds was a typical piece of LCO programming: not content with presenting two very beautiful but undeniably challenging long-ish works, the orchestra had elected to play behind a translucent screen onto which were projected the digital meanderings of an artistically trained neural net. Twists of entoptic colour cavorted around the half-seen musicians while a well-placed spotlight, directly over Ames’s head, sent the conductor’s gestures sprawling across the screen, as though ink were being dashed over all those pretty digitally generated splotches of colour.

Everything, on paper, pointed to an evening that was trying far too hard to be avant garde. In the execution, however, the occasion was a triumph.

The idea of matching colours to sounds is not new. The painter Wassily Kandinsky struggled for years to fuse sound and image and ended up inventing abstract painting, more or less as a by-product. The composer Alexander Scriabin was so desperate to establish his reputation as the founder of a new art of colour-music that he plagiarised other people’s synaesthetic experiences in his writings and invented a clavier à lumières (“keyboard with lights”) for use in his work Prometheus: Poem of Fire. “It is not likely that Scriabin’s experiment will be repeated by other composers,” wrote a reviewer for The Nation after its premiere in New York in 1915: “moving-picture shows offer much better opportunities.” (Walt Disney proved The Nation right: Fantasia was released in 1940.)

Now, as 2018 draws to a close, artificial intelligence is being hurled at the problem. For this occasion the London-based theatrical production company Universal Assembly Unit had got hold of a recursive neural net engineered by Artrendex, a company that uses artificial intelligence to research and predict the art market. According to the concert’s programme note, it took several months to train Artrendex’s algorithm on videos of floods and fires, teaching it the aesthetics of these phenomena so that, come the evening of the performance, it would construct organic imagery in response to the music.


While never obscuring the orchestra, the light show was dramatic and powerful, sometimes evoking (for those who enjoy their Andrei Tarkovsky) the blurriness of the clouds swamping the ocean planet Solaris in the movie of that name; then at other moments weaving and flickering, not so much like flames, but more like speeded-up footage from some microbial experiment. Maybe I’ve worked at New Scientist too long, but I got the distinct and discomforting impression that I was looking, not at some dreamy visual evocation of a musical mood, but at the responses of single-celled life to desperate changes in its tiny environment.

As for the music – which was, after all, the main draw for this evening – it is fair to say that Scelsi’s Uaxuctum would not be everyone’s cup of tea. For a quick steer, recall the waily bits from 2001: A Space Odyssey. That music was by the Hungarian composer György Ligeti, who was born about two decades after Scelsi, and was — both musically and personally — a lot less weird. Scelsi was a Parisian dandy who spent years in a mental institution playing one piano note again and again, and Uaxuctum, composed in 1966, was such an incomprehensibly weird and difficult proposition that it went unperformed for 21 years and received no UK performance before this one.

John Luther Adams’s Become Ocean (2013) is an easier (and more often performed) composition – The New Yorker music critic Alex Ross called it “the loveliest apocalypse in musical history”. This evening its welling sonorities brought hearts into mouths: rarely has mounting anxiety come wrapped in so beautiful a package.

So I hope it takes nothing away from the LCO’s brave and accomplished playing to say that the visual component was the evening’s greatest triumph. The dream of “colour music” has ended in bathos and silliness for so many brilliant and ambitious musicians. Now, with the judicious application of some basic neural networking, we may at last be on the brink of fusing tone and colour into an art that’s genuinely new, and undeniably beautiful.

Pierre Huyghe: Digital canvases and mind-reading machines

Visiting UUmwelt, Pierre Huyghe’s show at London’s Serpentine Gallery, for the Financial Times, 4 October 2018

On paper, Pierre Huyghe’s new exhibition at the Serpentine Gallery in London is a rather spartan effort. Gone are the fictional characters, the films, the drawings; the collaborative manga flim-flam of No Ghost Just a Shell; the nested, we’re-not-in-Kansas-any-more fictions, meta-fictions and crypto-documentaries of Streamside Day Follies. In place of Huyghe’s usual stage blarney come five large LED screens. Each displays a picture that, as we watch, shivers through countless mutations, teetering between snapshot clarity and monumental abstraction. One display is meaty; another, vaguely nautical. A third occupies a discomforting interzone between elephant and milk bottle.

Huyghe has not abandoned all his old habits. There are smells (suggesting animal and machine worlds), sounds (derived from brain-scan data, but which sound oddly domestic: was that not a knife-drawer being tidied?) and a great many flies. Their random movements cause the five monumental screens to pause and stutter, and this is a canny move, because without that arbitrary grammar, Huyghe’s barrage of visual transformations would overwhelm us, rather than excite us. There is, in short, more going on here than meets the eye. But that, of course, is true of everywhere: the show’s title nods to the notion of “Umwelt” coined by the zoologist Jakob von Uexküll in 1909, when he proposed that the significant world of an animal was the sum of things to which it responds, the rest going by virtually unnoticed. Huyghe’s speculations about machine intelligence are bringing this story up to date.

That UUmwelt turns out to be a show of great beauty as well; that the gallery-goer emerges from this most abstruse of high-tech shows with a re-invigorated appetite for the arch-traditional business of putting paint on canvas: that the gallery-goer does all the work, yet leaves feeling exhilarated, not exploited — all this is going to require some explanation.

To begin at the beginning, then: Yukiyasu Kamitani, who works at Kyoto University in Japan, made headlines in 2012 when he fed the data from fMRI brain scans of sleeping subjects into neural networks. These computer systems eventually succeeded in capturing shadowy images of his volunteers’ dreams. Since then his lab has been teaching computers to see inside people’s heads. It’s not there yet, but there are interesting blossoms to be plucked along the way.

UUmwelt is one of these blossoms. A recursive neural net has been shown about a million pictures, alongside accompanying fMRI data gathered from a human observer. Next, the neural net has been handed some raw fMRI data, and told to recreate the picture the volunteer was looking at.

Huyghe has turned the ensuing, abstruse struggles of the Kamitani Lab’s unthinking neural net into an exhibition quite as dramatic as anything he has ever made. Only, this time, the theatrics are taking place almost entirely in our own heads. What are we looking at here? A bottle. No, an elephant, no, a Francis Bacon screaming pig, goose, skyscraper, mixer tap, steam train mole dog bat’s wing…

The closer we look, the more engaged we become, the less we are able to describe what we are seeing. (This is literally true, in fact, since visual recognition works just that little bit faster than linguistic processing.) So, as we watch these digital canvases, we are drawn into dreamlike, timeless lucidity: a state of concentration without conscious effort that sports psychologists like to call “flow”. (How the Serpentine will ever clear the gallery at the end of the day I have no idea: I for one was transfixed.)

UUmwelt, far from being a show about how machines will make artists redundant, turns out to be a machine for teaching the rest of us how to read and truly appreciate the things artists make. It exercises and strengthens that bit of us that looks beyond the normative content of images and tries to make sense of them through the study of volume, colour, light, line, and texture. Students of Mondrian, Dufy and Bacon, in particular, will lap up this show.

Remember those science-fictional devices and medicines that provide hits of concentrated education? Quantum physics in one injection! Civics in a pill! I think Huyghe may have come closer than anyone to making this silly dream a solid and compelling reality. His machines are teaching us how to read pictures, and they’re doing a good job of it, too.