“Chuck one over here, Candy Man!”

 

Watching Ad Astra for New Scientist, 18 September 2019

It is 2033. Astronaut Roy McBride (Brad Pitt) is told that his father Clifford, the decorated space explorer, may still be alive, decades after he and the crew of his last mission fell silent in orbit around Neptune.

Clifford’s Lima mission was sent to the outer edge of the heliosphere – the vast bubble carved out by the solar wind – the better to scan the galaxy’s exoplanets for intelligent life. Now the Lima station’s antimatter generator is triggering electrical storms on distant Earth, and all life in the solar system is threatened.

McBride sets off on a secret mission to Mars. Once there, he is handed a microphone. He reads out a message to his dad. When he finishes speaking, he and the sound engineers pause, as if awaiting an instant reply from Clifford, the message’s intended recipient, somewhere in orbit around Neptune. What?

Eventually a reply is received (eight hours or so later at the earliest, presumably, given that Mars and Neptune are on average more than four billion kilometres apart, and even a light-speed signal needs around four hours to cross that gap each way). No-one wants to tell McBride what his dad said except the woman responsible for the Mars base (the wonderful Ruth Negga, looking troubled here, as well she might). The truths she shares about Roy’s father convince the audience, if not Roy himself, that the authorities are quite right to fear Clifford, quite right to seek a way to neutralise him, and quite right in their efforts to park his unwitting son well out of the way.
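A back-of-envelope sketch of that delay, for the curious; the 4.3-billion-kilometre figure below is an assumed average separation, and the true distance varies as both planets move:

```python
# Rough check on the signal delay between Mars and Neptune.
# The separation used here is an assumption for illustration,
# not a figure taken from the film or the review.
AVG_MARS_NEPTUNE_KM = 4.3e9          # assumed average separation, in kilometres
SPEED_OF_LIGHT_KM_S = 299_792.458    # speed of light, km per second

one_way_hours = AVG_MARS_NEPTUNE_KM / SPEED_OF_LIGHT_KM_S / 3600
print(f"One-way signal delay: {one_way_hours:.1f} hours")        # roughly 4 hours
print(f"Earliest possible reply: {2 * one_way_hours:.1f} hours")  # roughly 8 hours
```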

But Roy, at great risk to himself, and with actions that will cost several lives, is determined on a course for Neptune, and a meeting with his dad.

Ad Astra is a psychodrama about solipsistic fathers and abandoned sons, conducted in large part through monologues and close-ups of Brad Pitt’s face. And this is as well, since Pitt’s performance is easily the most coherent and thrilling element in a film that is neither.

Not, to be fair, that Ad Astra ever aspired to be exciting in any straightforward way. Pirates and space monkeys aside (yes, you read that right), Ad Astra is a serious, slow-burn piece about our desire to explore the world, and our desire to make meaning and connection, and how these contrary imperatives tear us apart in the vastness of the cosmic vacuum.

It ought to have worked.

The fact that it’s serious should have worked: four out of five of writer-director James Gray’s previous films were nominated for Cannes Film Festival’s Palme d’Or. Ad Astra itself was inspired by a Pulitzer Prize-winning collection of poems by Tracy K. Smith, all about gazing up at the stars and grieving for her father.

The film’s visuals and sound design should have worked. It draws inspiration for its dizzying opening sequence from the well-documented space-parachuting adventures of Felix Baumgartner in 2012, adopts elsewhere the visual style and sound design of Alfonso Cuarón’s 2013 hit film Gravity, and, when we get to Mars, tips its hat to the massy, reinforced concrete interiors of Denis Villeneuve’s 2017 Blade Runner 2049. For all that, it still feels original: a fully realised world.

The incidental details ought to have worked. There’s much going on in this film to suggest that everyone is quietly, desperately attempting to stabilise their mood, so as not to fly off the handle in the cramped, dull, lifeless interiors beyond Earth. The whole off-world population is seen casually narcotising itself: “Chuck one over here, Candy Man!” Psychological evaluations are a near-daily ritual for anyone whose work brings them anywhere near an airlock, and these automated examinations (shades of Blade Runner 2049 again) seem to be welcomed, as one imagines Catholic confession would be welcomed by a hard-pressed believer.

Even the script, though a mess, might have worked. Pitt turns the dullest lines into understated character portraits with a well-judged pause and the tremor of one highly trained facial muscle. Few other cast members get a word in edgewise.

What sends Ad Astra spinning into the void is its voiceover. Gray is a proven writer and director, and he’s reduced Ad Astra’s plot to seven-or-so strange, surreal, irreducible scenes, much in the manner of his cinematic hero Stanley Kubrick. Like Kubrick, he’s kept dialogue to the barest minimum. Like Kubrick, he’s not afraid of letting a good lead actor dominate the screen. And then someone – can it really have been Gray himself? – had the bright idea to vitiate all that good work by sticking Roy McBride’s internal monologue over every plot point, like a string of Elastoplasts.

Consequently, the audience are repeatedly kicked out of the state of enchantment they need to inhabit if they’re going to see past the plot holes to the movie’s melancholy heart.

The devil of this film is that it fails so badly, even as everyone is working so conspicuously hard to make a masterpiece. “Why go on?” Roy asks in voiceover, five minutes before the credits roll. “Why keep trying?”

Why indeed?

Human/nature

Was the climate crisis inevitable? For the Financial Times, 13 September 2019

Everything living is dying out. A 2014 analysis of 3,000 species, confirmed by recent studies, reveals that half of all wild animals have been lost since 1970. The Amazon is burning, as is the Arctic.

An excess of carbon dioxide in the atmosphere, meanwhile, has not only played havoc with the climate but also reduced the nutrient value of plants by about 30 per cent since the 1950s.

And we’re running out of soil. In the US, it’s eroding 10 times faster than it’s being replaced. In China and India, the erosion is more than three times as bad. Five years ago, the UN Food and Agriculture Organization claimed we had fewer than 60 years of harvests left if soil degradation continued at its current rate.

Why have we waited until we are one generation away from Armageddon before taking such problems seriously?

A few suggestions: first, the environment is far too complicated to talk about — at least on the tangled information networks we have constructed for ourselves.

Second, we’re lazy and we’re greedy, like every other living thing on the planet — though because most of us co-operate with each other, we are arguably the least greedy and least lazy animals around.

Where we fall down is in our tendency to freeload on our future selves. “Discounting the future” is one of our worst habits, and one that in large part explains why we leave even important, life-and-death actions to the last minute.

Here’s a third reason why we’re dealing so late with climate change. It’s the weirdest, and maybe the most important of the three. It’s that we know we are going to die.

Thinking about environmental threats reminds us of our own mortality, and death is a prospect so appalling we’ll do anything — anything — to stop thinking about it.

“I used to wonder how people could stand the really demonic activity of working behind those hellish ranges in hotel kitchens, the frantic whirl of waiting on a dozen tables at one time,” wrote Ernest Becker in his Pulitzer-winning meditation The Denial of Death in 1973.

“The answer is so simple that it eludes us: the craziness of these activities is exactly that of the human condition. They are ‘right’ for us because the alternative is natural desperation.”

Psychologists inspired by Becker have run experiments to suggest it’s the terror of death that motivates consciousness and all its accomplishments. “It raised the pyramids in Egypt and razed the Twin Towers in Manhattan,” is the memorable judgment of the authors of 2015’s best-selling book The Worm at the Core.

This hardly sounds like good news. But it may offer us, if not a solution to the current crisis, at least a better, healthier and more positive way of approaching it.

No coping mechanism is infallible. We may be profoundly unwilling to contemplate our mortality, and to face up to the slow-burn, long-term threats to our existence, but that anxiety can’t ultimately be denied. Our response is to bundle it into catastrophes — in effect to construe the world in terms of crises to make everyday existence bearable.

Even positive visions of the future assume the necessity for cataclysmic change: why else do we fetishise “disruption”? “The concept of progress is to be grounded in the idea of the catastrophe,” as the German philosopher Walter Benjamin put it.

Yes, we could have addressed climate change much more easily in the 1970s, when the crisis wasn’t so urgent. But the fact is, we’re built for urgent action. A flood. A drought. A famine. We know where we are in a catastrophe. It may be that our best is yet to come.

Will our best be enough? Will we move quickly and coherently enough to save ourselves from the catastrophes attendant on massive climate change? That’s a hard question to answer.

The earliest serious attempts at modelling human futures were horrific. One commentator summed up Thomas Malthus’s famous 1798 Essay on the Principle of Population as “150 pages of excruciatingly detailed travellers’ accounts and histories . . . of bestial life, sickness, weakness, poor food, lack of ability to care for young, scant resources, famine, infanticide, war, massacre, plunder, slavery, cold, hunger, disease, epidemics, plague, and abortion.”

Malthus, an English cleric driven up the wall by positive Enlightenment thinkers such as Godwin and Condorcet, set out to remind everybody that people were animals. Like animals, their populations were bound eventually to exceed the available food supply. It didn’t matter that they dressed nicely or wrote poetry. If they overbred, they would starve.

We’ve been eluding this Malthusian trap for centuries, by bolting together one cultural innovation after another. No bread? Grow soy. No fish? Breed insects. Eventually, on a finite planet, Malthus will have his revenge — but when?

The energy thinker Vaclav Smil’s forthcoming book Growth studies the growth patterns of everything from microorganisms to mammals to entire civilisations. But the Czech-Canadian academic is chary about breaking anything as complicated as humanity down to a single metric.

“In the mid-1980s,” he recalls, “people used to ask me, when would the Chinese environment finally collapse? I was writing about this topic early on, and the point is, it was never going to collapse. Or it’s constantly collapsing, and they’re constantly fixing parts of it.”

Every major city in China has clean water and improving air quality, according to Smil. A few years ago people were choking on the smog.

“It’s the same thing with the planet,” he says. “Thirty years ago in Europe, the number-one problem wasn’t global warming, it was acid rain. Nobody mentions acid rain today because we desulphurised our coal-fired power plants and supplanted coal with natural gas. The world’s getting better and worse at the same time.”

Smil blames the cult of economics for the way we’ve been sitting on our hands while the planet heats up. “The fundamental problem is that economics has become so divorced from fundamental reality,” he says.

“We have to eat, we have to put on a shirt and shoes, our whole lives are governed by the laws that govern the flows of energy and materials. In economics, though, everything is reduced to money, which is only a very imperfect measure of those flows. Until economics returns to the physical rules of human existence, we’ll always be floating in the sky and totally detached from reality.”

Nevertheless, Smil thinks we’d be better off planning for a good life in the here and now, and this entails pulling back from our current levels of consumption.

“But we’re not that stupid,” he says, “and we may have this taken care of by people’s own decision making. As they get richer, people find that children are very expensive, and children have been disappearing everywhere. There is not a single European country now in which fertility will be above replacement level. And even India is now close to the replacement rate of 2.1 children per family.”

So are we out of the tunnel, or at the end of the line? The brutal truth is, we’ll probably never know. We’re not equipped to know. We’re too anxious, too terrified, too greedy for the sort of certainty a complex environment is simply not going to provide.

Now that we’ve spotted this catastrophe looming over our heads, it’s with us for good. No one’s ever going to be able to say that it’s truly gone away. As Benjamin tersely concluded, “That things ‘just go on’ is the catastrophe.”

All the ghosts in the machine

Reading All the Ghosts in the Machine: Illusions of immortality in the digital age by Elaine Kasket for New Scientist, 22 June 2019

Moving first-hand interviews and unnervingly honest recollections weave through psychologist Elaine Kasket’s first mainstream book, All the Ghosts in the Machine, an anatomy of mourning in the digital age. Unpicking that anatomy turns up two distinct but complementary projects.

The first offers some support and practical guidance for people (and especially family members) who are blindsided by the practical and legal absurdities generated when people die in the flesh, while leaving their digital selves very much alive.

For some, the persistence of posthumous data, on Facebook, Instagram or some other corner of the social media landscape, is a source of “inestimable comfort”. For others, it brings “wracking emotional pain”. In neither case is it clear what actions are required, either to preserve, remove or manage that data. As a result, survivors usually oversee the profiles of the dead themselves – always assuming, of course, that they know their passwords. “In an effort to keep the profile ‘alive’ and to stay connected to their dead loved one,” Kasket writes, “a bereaved individual may essentially end up impersonating them.”

It used to be the family who had privileged access to the dead, to their personal effects, writings and photographs. Families are, as a consequence, disproportionately affected by the persistent failure of digital companies to distinguish between the dead and the living.

Who has control over a dead person’s legacy? What unspoken needs are being trammelled when their treasured photographs evaporate or, conversely, when their salacious post-divorce Tinder messages are disgorged? Can an individual’s digital legacy even be recognised for what it is in a medium that can’t distinguish between life and death?

Kasket’s other project is to explore this digital uncanny from a psychoanalytical perspective. Otherwise admirable 19th-century ideals of progress, hygiene and personal improvement have conned us into imagining that mourning is a more or less understood process of “letting go”. Kasket’s account of how this idea gained currency is a finely crafted comedy of intellectual errors.

In fact, grief doesn’t come in stages, and our relationships with the dead last far longer than we like to imagine. All the Ghosts in the Machine opens with an account of the author’s attempt to rehabilitate her grandmother’s bitchy reputation by posting her love letters on Instagram.

“I took a private correspondence that was not intended for me and transformed it from its original functions. I wanted it to challenge others’ ideas, and to affect their emotions… Ladies and gentlemen of today, I present to you the deep love my grandparents held for one another in 1945, ‘True romance’, heart emoticon.”

Eventually, Kasket realised that the version of her grandmother her post had created was no more truthful than the version that had existed before. And by then, of course, it was far too late.

The digital persistence of the dead is probably a good thing in these dissociated times. A culture of continuing bonds with the dead is much to be preferred over one in which we are all expected to “get over it”. But, as Kasket observes, there is much work to do, for “the digital age has made continuing bonds easier and harder all at the same time.”

Asking for it

Reading The Metric Society: On the Quantification of the Social by Steffen Mau (Polity Press) for the Times Literary Supplement, 30 April 2019 

Imagine Steffen Mau, a macrosociologist (he plays with numbers) at Humboldt University of Berlin, writing a book about information technology’s invasion of the social space. The very tools he uses are constantly interrupting him. His bibliographic software wants him to assign a star rating to every PDF he downloads. A paper-sharing site exhorts him repeatedly to improve his citation score (rather than his knowledge). In a manner that would be funny, were his underlying point not so serious, Mau records how his tools keep getting in the way of his job.

Why does Mau use these tools at all? Is he too good for a typewriter? Of course he is: the whole history of civilisation is the story of us getting as much information as possible out of our heads and onto other media. It’s why, nigh-on 5000 years ago, the Sumerians dreamt up the abacus. Thinking is expensive. How much easier to stop thinking, and rely on data records instead!

The Metric Society is not a story of errors made, or of wrong paths taken. This is a story, superbly reduced to the chill essentials of an executive summary, of how human society is getting exactly what it’s always been asking for. The last couple of years have seen more than 100 US cities pledge to use evidence and data to improve their decision-making. In the UK, “What Works Centres”, first conceived in the 1990s, are now responsible for billions in funding, and the Alliance for Useful Evidence (with funding from ESRC, Big Lottery and Nesta) champions the use of evidence in social policy and practice. The acronyms grow more bellicose, the more obscure they become.

Mau describes the emergence of a society trapped in “data-driven perpetual stock-taking”, in which the new Juggernaut of auditability lays waste to creativity, production, and even simple efficiency. “The magic attraction of numbers and comparisons is simply irresistible,” Mau writes.

It’s understandable. Our first great system of digital abstraction, money, enabled a more efficient and less locally bound exchange of goods and services, and introduced a certain level of rational competition into the world of work.

But look where money has led us! Capital is not the point here. Neither is capitalism. The point is our relationship with information. Amazon’s algorithms are sucking all the localism out of the retail system, to the point where whole high streets have vanished — and entire communities with them. Amazon is in part powered by the fatuous metricisation of social variety through systems of scores, rankings, likes, stars and grades, which are (not coincidentally) the methods by which social media structures — from clownish Twitter to China’s Orwellian Social Credit System — turn qualitative differences into quantitative inequalities.

Mau leaves us thoroughly in the lurch. He’s a diagnostician, not a snake-oil salesman, and his bedside manner is distinctly chilly. Dazzled by data, which have relieved us of the need to dream and imagine, we fight for space on the foothills of known territory. The peaks our imaginations might have trod — as a society, and as a species — tower above us, ignored.

A series of apparently impossible events

Exploring Smoke and Mirrors at Wellcome Collection for New Scientist, 1 May 2019

According to John Nevil Maskelyne, “a bad conjurer will make a good medium any day”. He meant that, as a stage magician in 19th-century London, he had to produce successful effects night after night, while rivals who claimed their illusions were powered by the spirit world could simply blame a bad set on “unhelpful spirits”, or even on the audience’s own scepticism.

A gaffe-ridden performance in the UK by one set of spiritualists, the US Davenport Brothers, drove Maskelyne to invent his own act. With his friend, the cabinet maker George Alfred Cooke, he created an “anti-spiritualist” entertainment, at once replicating and debunking the spiritualist movement’s stock-in-trade effects.

Matthew Tompkins teases out the historical implications of Maskelyne’s story in The Spectacle of Illusion: Magic, the paranormal and the complicity of the mind (Thames & Hudson). It is a lavishly illustrated history to accompany Smoke and Mirrors, a new and intriguing exhibition at the Wellcome Collection in London.

Historical accident was partly responsible for spiritualism’s appeal. In 1895, Guglielmo Marconi sent long-wave radio signals over a distance of a couple of kilometres, and, for decades after, hardly a year passed in which some researcher didn’t announce a new type of invisible ray. The world turned out to have aspects hidden from unaided human perception. Was it so unreasonable of people to speculate about what, or who, might lurk in those hidden corners of reality? Were they really so gullible, reeling as they were from the mass killings of the first world war, when they populated these invisible realms with their dead?

In 1924, the magazine Scientific American offered $2500 to any medium who could demonstrate their powers under scientific controls. The medium Mina “Margery” Crandon decided to try her hand, but she reckoned without the efforts of one Harry “Handcuff” Houdini, who eventually exposed her as a fraud.

Yet spiritualism persisted, shading off into parapsychology, quantum speculation and any number of cults. Understanding why is more the purview of a psychologist such as Gustav Kuhn, who, as well as being a major contributor to the show, offers insight into magic and magical belief in his own new book, Experiencing the Impossible (MIT Press).

Kuhn, a member of the Magic Circle, finds Maskelyne’s “anti-spiritualist” form of stage magic alive in the hands of illusionist Derren Brown. He suggests that Brown is more of a traditional magician than he lets on, dismissing the occult even as he endorses mysterious psychological phenomena, mostly to do with “subconscious priming”, that are, at root, non-scientific.

Kuhn defines magic as “the experience of wonder that results from perceiving an apparently impossible event”. Definitions of what is impossible differ, and different illusions work for different people. You can even design magic for animals, as a torrent of YouTube videos, based largely on Finnish magician Jose Ahonen’s “Magic for Dogs”, attests.

Tricking dogs is one thing, but why do our minds fall for magic? It was the 18th-century Scottish Enlightenment philosopher David Hume who argued that there is no metaphysical glue binding events, and that we only ever infer causal relationships, be they real or illusory.

Twinned with our susceptibility to wrongly infer relationships between events in the world is our ability to fool ourselves at an even deeper level. Numerous studies, including one by researcher and former magician Jay Olson and clinician Amir Raz that sits at the exit to the Wellcome show, conclude that our feeling of free will may be an essential trick of the mind.

Inferring connections makes us confident in ourselves and our abilities, and it is this confidence, this necessary delusion about the brilliance of our cognitive abilities, that lets us function… and be tricked. Even after reading both books, I defy you to see through the illusions and wonders in store at the exhibition.

Choose-your-own adventure

Reading The Importance of Small Decisions by Michael O’Brien, R. Alexander Bentley and William Brock for New Scientist, 13 April 2019

What if you could map all kinds of human decision-making and use it to chart society’s evolution?

This is what academics Michael O’Brien, Alexander Bentley and William Brock try to do in The Importance of Small Decisions. It is an attempt to expand on a 2014 paper, “Mapping collective behavior in the big-data era”, which they published in Behavioral and Brain Sciences. While contriving to be somehow both too short and rambling, it bites off more than it can chew, nearly chokes to death on the ins and outs of group selection, and coughs up its best ideas in the last 40 pages.

Draw a graph. The horizontal axis maps decisions according to how socially influenced they are. The vertical axis tells you how clear the costs and pay-offs are for each decision. Rational choices sit in the north-western quadrant of the map. To the north-east, bearded capuchins teach each other how to break into palm nuts in a charming example of social learning. Twitter storms generated by fake news swirl about the south-east.
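By way of illustration only, here is a minimal sketch of that map; the numeric scales, quadrant labels and example decisions are my own assumptions, not the authors’:

```python
# Illustrative sketch of the book's two-axis map of decision-making.
# Scales, labels and examples are assumptions made for illustration,
# not taken from O'Brien, Bentley and Brock.

def quadrant(social_influence: float, payoff_transparency: float) -> str:
    """Place a decision on the map.

    social_influence:    0.0 (made independently) to 1.0 (driven by the crowd)
    payoff_transparency: 0.0 (costs and pay-offs opaque) to 1.0 (perfectly clear)
    """
    east = social_influence >= 0.5
    north = payoff_transparency >= 0.5
    if north and not east:
        return "north-west: independent, well-informed choice (classical rationality)"
    if north and east:
        return "north-east: social learning (capuchins cracking palm nuts)"
    if not north and not east:
        return "south-west: solitary guesswork"
    return "south-east: herd behaviour in the dark (fake-news Twitter storms)"

# Example placements; the values are guesses, purely for illustration.
print(quadrant(0.1, 0.9))   # comparing two prices on a shelf
print(quadrant(0.8, 0.8))   # copying a skill from a demonstrator
print(quadrant(0.9, 0.1))   # retweeting an unverified rumour
```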

The more choices you face, the greater the cognitive load. The authors cite economist Eric Beinhocker, who in The Origin of Wealth calculated that human choices had multiplied a hundred million-fold in the past 10,000 years. Small and insignificant decisions now consume us.

Worse, costs and pay-offs are increasingly hidden in an ocean of informational white noise, so that it is easier to follow a trend than find an expert. “Why worry about the underlying causes of global warming when we can see what tens of millions of our closest friends think?” ask the authors, building to a fine, satirical climax.

In an effort to communicate widely, the authors have, I think, left out a few too many details from their original paper. And a mid-period novel by Philip K. Dick would paint a more visceral picture of a world created by too much information. Still, there is much fun to be had reading the garrulous banter of these three extremely smart academics.

Come on, Baggy, get with the beat!

Reading The Evolving Animal Orchestra: In search of what makes us musical by Henkjan Honing for New Scientist, 6 April 2019

“The perception, if not the enjoyment, of musical cadences and of rhythm,” wrote Darwin in his 1871 book The Descent of Man, “is probably common to all animals.”

Henkjan Honing has tested this eminently reasonable idea, and in his book, The Evolving Animal Orchestra, he reports back. He details his disappointment, frustration and downright failure with such wit, humility and a love of the chase that any young person reading it will surely want to run away to become a cognitive scientist.

No culture has yet been found that doesn’t have music, and all music shares certain universal characteristics: melodies composed of seven or fewer discrete pitches; a regular beat; a limited sequence of rhythmic patterns. All this would suggest a biological basis for musicality.

A bird flies with regular beats of its wings. Animals walk with a particular rhythm. So you might expect beat perception to be present in everything that doesn’t want to falter when moving. But it isn’t. Honing describes experiments that demonstrate conclusively that we are the only primates with a sense of rhythm, possibly deriving from advanced beat perception.

Only strongly social animals, he writes, from songbirds and parrots to elephants and humans, have beat perception. What if musicality was acquired by all prosocial species through a process of convergent evolution? Like some other cognitive scientists, Honing now wonders whether language might derive from music, in a similar way to how reading uses much older neural structures that recognise contrast and sharp corners.

Honing must now test this exciting hypothesis. And if The Evolving Animal Orchestra is how he responds to disappointment, I can’t wait to see what he makes of success.

And so we wait

Thinking about Delayed Response by Jason Farman (Yale) for the Telegraph, 6 February 2019.

In the career-making 1996 comedy film Swingers, which he also wrote, Jon Favreau plays Mike, a young man in love with love, and at war with the answerphones of the world.

“Hi,” says one young woman’s machine, “this is Nikki. Leave a message,” prompting Mike to work, flub after flub, through an entire, entirely fictitious, relationship with the absent Nikki.

“This just isn’t working out,” he sighs, on about his twentieth attempt to leave a message that’s neither creepy nor incoherent. “I — I think you’re great, but, uh, I — I… Maybe we should just take some time off from each other. It’s not you, it’s me. It’s what I’m going through.”

There are a couple of lessons in this scene, and once they’re learned, there’ll be no pressing need for you to read Jason Farman’s Delayed Response. (I think you’d enjoy reading him, quite a bit, but, in the spirit of this project, let those reasons wait till the end.)

First lesson of two: “non-verbal communication never stops; non-verbal cues are always being produced whether we want them to be or not.” Those in the know may recognise here Farman’s salute to Edward T. Hall’s book The Silent Language (first published in 1959), for which Delayed Response is a useful foil. But the point — that any delay between transmission and reception is part of the message — is no mere intellectual nicety. Anyone who has had a love affair degenerate into an exchange of ever more flippant WhatsApp messages, or has waited for a prospective employer to get back to them about a job application, knows that silent time carries meaning.

Second lesson: delay can be used to manifest power. In Swingers, Mike crashes into what an elusive novelist friend of mine dubs, with gleeful malevolence, “the power of absence,” which is more or less the same power my teenage daughter wields when she “ghosts” some boy. In the words of the French sociologist Pierre Bourdieu, “Waiting is one of the privileged ways of experiencing the effect of power, and the link between time and power.” We’re none of us immune; we’re all in thrall to what Farman calls the “waiting economy”, and as our civics crumble (don’t pretend you haven’t noticed) the hucksters driving that economy get more and more brazen. (Consider, as an example, the growing discrepancy in UK delivery times between public and private postal services.)

Delays carry meanings. We cannot control them with any finesse; but we can use them as blunt weapons on each other.

What’s left for Farman to say?

Farman’s account of wait times is certainly exhaustive, running the full gamut of history, from contemporary Japanese smartphone messaging apps to Aboriginal message sticks, part of a communication culture stretching back up to 50,000 years. (To give you some idea how venerable that tradition is, consider that papyrus dates from around 2900 BC.) He covers online wait times so short as to be barely perceptible, and delays so long that they may be used to calculate the distance between planets. His examples are sometimes otherworldly (literally so in the case of the New Horizons mission to Pluto), sometimes unnervingly prosaic: he recounts the Battle of Fredericksburg in the American Civil War as a piling up of familiar and ordinary delays, conjuring up a picture of war-as-bureaucracy that is truly mortifying.

Farman acknowledges how much more quickly we send and receive messages these days — but his is no paean to technological progress. The dismal fact is: the instantaneous provision of information degrades our ability to interpret it. As long ago as 1966 the neurobiologist James L McGaugh reported that as the time increases between learning and testing, memory retention actually improves. And yet the purveyors of new media continue to equate speed with wisdom, promising that ever-better worlds will emerge from ever-more-efficient media. Facebook’s Mark Zuckerberg took this to an extreme typical of him in April 2017 when he announced that “We’re building further out beyond augmented reality, and that includes work around direct brain interfaces that one day will let you communicate using only your mind, although that stuff is pretty far out.”

This kind of hucksterism is harmless in itself, but it doesn’t come out of nothing. The thing we should be really afraid of is the creeping bureaucratisation of human experience. I remember, three years before Zuckerberg slugged back his own Kool-Aid, I sat listening to UCL neuroscientist Nilli Lavie lecturing about attention. Lavie was clearly a person of good will and good sense, but what exactly did she mean by her claim that wandering attention loses the US economy around two billion dollars a year? Were our minds to be perfectly focused, all year round, would that usher in some sort of actuarial New Jerusalem? Or would it merely extinguish all dreaming? Without a space for minds to wander in, where would a new idea – any new idea – actually come from?

This, of course, is the political flim-flam implicit in crisis thinking. So long as we are occupied with urgent problems, we are unable to articulate nuanced and far-reaching political ideas. “Waiting, ultimately, is essential for imagining that which does not yet exist and innovating on the knowledge we encounter,” Farman writes, to which I’m inclined to add the obvious point: Progressives are shrill and failing because their chosen media — Twitter and the like — deprive them of any register other than crisis-triggered outrage.

We may dream up a dystopia in which the populace is narcotised into bovine contentment by the instantaneous supply of undigested information, but as Farman makes clear, this isn’t going to happen. The anxiety generated by delay doesn’t disappear with quicker response times; it simply gets redistributed and reshaped. People under 34 years of age check their phones an average of 150 times a day, a burden entirely alien to soldiers waiting for Death and the postman at Fredericksburg. Farman writes: “Though the mythologies of the digital age continue to argue that we are eliminating waiting from daily life, we are actually putting it right at the centre of how we connect with one another.”

This has a major, though rarely articulated consequence for us: that anxiety balloons to fill the vacuum left by a vanished emotion: one we once considered pleasurable and positive. I refer, of course, to anticipation. Anticipation is no longer something we relish. This is because, in our world of immediate satisfactions, we’re simply not getting enough exposure to it. Waiting has ceased to be the way we measure projected pleasure. Now it’s merely an index of our frustration.

Farman is very good on this subject, and this is why Delayed Response is worth reading. (There: I told you we’d get around to this.) The book’s longueurs, and Farman’s persnickety academical style, pale beside his main point, very well expressed, that “the meaning of life isn’t deferred until that thing we hope for arrives; instead, in the moment of waiting, meaning is located in our ability to recognise the ways that such hopes define us.”

Making abstract life

Talking to the design engineer Yamanaka Shunji for New Scientist, 23 January 2019

Five years ago, desktop 3D printers were poised to change the world. A couple of things got in the way. The print resolution wasn’t very good. Who wants to drink from a tessellated cup?

More important, it turned out that none of us could design our way out of a wet paper bag.

Japanese designer Yamanaka Shunji calls forth one-piece walking machines from vinyl-powder printers the way the virtuoso Phyllis Chen conjures concert programmes from toy pianos. There’s so much evident genius at work, you marvel that either has time for such silliness.

There’s method here, of course: Yamanaka’s X-Design programme at Keio University turns out objects bigger than the drums in which they’re sintered, by printing them in folded form. It’s a technique lifted from space-station design, though starry-eyed Western journalists, obsessed with Japanese design, tend to reach for origami metaphors.

Yamanaka’s international touring show, which is stopping off at Japan House in London until mid-March, knows which cultural buttons to press. The tables on which his machine prototypes are displayed are steel sheets, rolled to a curve and strung under tension between floor and ceiling, so visitors find themselves walking among what appear to be unfolded paper scrolls. If anything can seduce you into buying a £100 sake cup when you exit the gift shop, it’s this elegant, transfixing show.

“We often make robots for their own sake,” says Yamanaka, blithely, “but usefulness is also important for me. I’m always switching between these two ways of thinking as I work on a design.”

The beauty of his work is evident from the first. Its purpose, and its significance, take a little unpacking.

Rami, for example: it’s a below-the-knee running prosthesis developed for the athlete Takakura Saki, who represented Japan during the 2012 Paralympics. Working from right to left, one observes how a rather clunky running blade mutated into a generative, organic dream of a limb, before being reined back into a new and practical form. The engineering is rigorous, but the inspiration was aesthetic: “We hoped the harmony between human and object could be improved by re-designing the thing to be more physically attractive.”

Think about that a second. It’s an odd thing to say. It suggests that an artistic judgement can spur on and inform an engineering advance. And so it does, in Yamanaka’s practice, again and again.

Yamanaka is an engineer who spent much of his time at university drawing manga, and cut his teeth on car design at Nissan. He wants to make something clear, though: “Engineering and art don’t flow into each other. The methodologies of art and science are very different, as different as objectivity and subjectivity. They are fundamental attitudes. The trick, in design, is to change your attitude, from moment to moment.” Under Yamanaka’s tutelage, you learn to switch gears, not grind them.

Eventually Yamanaka lost interest in giving structure and design to existing technology. “I felt if one could directly nurture technological seeds, more imaginative products could be created.” It was the first step on a path toward designing for robot-human interaction.

[Image: the Prototyping in Tokyo exhibition at Japan House, London, 16 January to 17 March 2019, showing work by Professor Yamanaka Shunji. Photograph © Jeremie Souteyrat]

Yamanaka – so punctilious, so polite – begins to relax as he contemplates the work of his peers. Engineers are always developing robots that are realistic in a linear way that associates life with things, he says; they are obsessed with being more and more “real”. Consequently, he adds, a lot of their work is “horrible. They’re making zombies!”

Artists have already established a much better approach, he explains: quite simply, artists know how to sketch. They know how to reduce, and abstract. “From ancient times, art has been about the right line, the right gesture. Abstraction gets at reality, not by mimicking it, but by purifying it. By spotting and exploring what’s essential.”

Yamanaka’s robots don’t copy particular animals or people, but emerge from close observation of how living things move and behave. He is fascinated by how even unliving objects sometimes seem to transmit the presence of life or intelligence. “We have a sensitivity for what’s living and what’s not,” he observes. “We’re always searching for an element of living behaviour. If it moves, and especially if it responds to touch, we immediately suspect it has some kind of intellect. As a designer I’m interested in the elements of that assumption.”

So it is, inevitably, that the most unassuming machine turns out to hold the key to the whole exhibition. Apostroph is the fruit of a collaboration with Manfred Hild, at Sony’s Computer Science Laboratories in Paris. It’s a hinged body made up of several curving frames, suggesting a gentle logarithmic spiral.

Each joint contains a motor which is programmed to resist external force. Leave it alone, and it will respond to gravity. It will try to stand. Sometimes it expands into a broad, bridge-like arch; at other times it slides one part of itself through another, curls up and rolls away.

As an engineer, you always follow a line of logic, says Yamanaka. You think in a linear way. It’s a valuable way of proceeding, but unsuited to exploration. Armed with fragile, good-enough 3D-printed prototypes, Yamanaka has found a way to do without blueprints, responding to the models he makes as an artist would.

In this, he’s both playing to his strengths as a frustrated manga illustrator, and preparing his students for a future in which the old industrial procedures no longer apply. “Blueprints are like messages which ensure the designer and manufacturer are on the same page,” he explains. “If, however, the final material could be manipulated in real time, then there would be no need to translate ideas into blueprints.”

[Image: Rami, an additively manufactured running-specific prosthetic. Photograph © Kato Yasushi]

It’s a seductive spiel, but I can’t help but ask what all these elegant but mostly impractical forms are, well, for.

 

Yamanaka’s answer is that they’re to make the future bearable. “I think the perception of subtle lifelike behaviour is key to communication in a future full of intelligent machines,” he says. “Right now we address robots directly, guiding their operations. But in the future, with so many intelligent objects in our life, we’ll not have the time or the patience or even the ability to be so precise. Body language and unconscious communication will be far more important. So designing a lifelike element into our machines is far more important than just tinkering with their shape.”

By now we’ve left the gallery and are standing before Flagella, a mechanical mobile made for Yamanaka’s 2009 exhibition Bones, held in Tokyo Midtown. Flagella is powered by a motor with three units that repeatedly rotate and counter-rotate, its movements supple and smooth like an anemone. It’s hard to believe the entire machine is made from hard materials.

There’s a child standing in front of it. His parents are presumably off somewhere agonising over sake cups, dinky tea pots, bowls that cost a month’s rent. As we watch, the boy begins to dance, riffing off the automaton’s moves, trying to find gestures to match the weavings of the machine.

“This one is of no practical purpose whatsoever,” Yamanaka smiles. But he doesn’t really think that. And now, neither do I.

Praying to the World Machine

In late spring this year, the Barbican Centre in London will explore the promise and perils of artificial intelligence in a festival of films, workshops, concerts, talks and exhibitions. Even before the show opens, however, I have a bone to pick: what on earth induced the organisers to call their show AI: More than human?

More than human? What are we being sold here? What are we being asked to assume, about the technology and about ourselves?

Language is at the heart of the problem. In his 2006 book, The Emotion Machine, computer scientist Marvin Minsky deplored (although even he couldn’t altogether avoid) the use of “suitcase words”: his phrase for words conveying specialist technical detail through simple metaphors. Think what we are doing when we say metal alloys “remember” their shape, or that a search engine offers “intelligent” answers to a query.

Without metaphors and the human tendency to personify, we would never be able to converse, let alone explore technical subjects, but the price we pay for communication is a credulity when it comes to modelling how the world actually works. No wonder we are outraged when AI doesn’t behave intelligently. But it isn’t the program playing us false, rather the name we gave it.

Then there is the problem outlined by Benjamin Bratton, director of the Center for Design and Geopolitics at the University of California, San Diego, and author of cyber bible The Stack. Speaking at Dubai’s Belief in AI symposium last year, he said we use suitcase words from religion when we talk about AI, because we simply don’t know what AI is yet.

For how long, he asked, should we go along with the prevailing hype and indulge the idea that artificial intelligence resembles (never mind surpasses) human intelligence? Might this warp or spoil a promising technology?

The Dubai symposium, organised by Kenric McDowell and Ben Vickers, interrogated these questions well. McDowell leads the Artists and Machine Intelligence programme at Google Research, while Vickers has overseen experiments in neural-network art at the Serpentine Gallery in London. Conversations, talks and screenings explored what they called a “monumental shift in how societies construct the everyday”, as we increasingly hand over our decision-making to non-humans.

Some of this territory is familiar. Ramon Amaro, a design engineer at Goldsmiths, University of London, drew the obvious moral from the story of researcher Joy Buolamwini, whose facial-recognition art project, the Aspire Mirror, refused to recognise her because of her black skin.

The point is not simple racism. The truth is even more disturbing: machines are nowhere near clever enough to handle the huge spread of the normal distributions along which virtually all human characteristics and behaviours lie. The tendency to exclude is embedded in the mathematics of these machines, and no patching can fix it.

Yuk Hui, a philosopher who studied computer engineering and philosophy at the University of Hong Kong, broadened the lesson. Rational, disinterested thinking machines are simply impossible to build. The problem is not technical but formal, because thinking always has a purpose: without a goal, it is too expensive a process to arise spontaneously.

The more machines emulate real brains, argued Hui, the more they will evolve – from autonomic response to brute urge to emotion. The implication is clear. When we give these recursive neural networks access to the internet, we are setting wild animals loose.

Although the speakers were well-informed, Belief in AI was never intended to be a technical conference, and so ran the risk of all such speculative endeavours – drowning in hyperbole. Artists using neural networks in their practice are painfully aware of this. One artist absent from the conference, but cited by several speakers, was Turkish-born Memo Akten, based at Somerset House in London.

His neural networks make predictions on live webcam input, using previously seen images to make sense of new ones. In one experiment, a scene including a dishcloth is converted into a Turneresque animation by a recursive neural network trained on seascapes. The temptation to say this network is “interpreting” the view, and “creating” art from it, is well nigh irresistible. It drives Akten crazy. Earlier this year in a public forum he threatened to strangle a kitten whenever anyone in the audience personified AI, by talking about “the AI”, for instance.

It was left to novelist Rana Dasgupta to really put the frighteners on us as he coolly unpicked the state of automated late capitalism. Today, capital and rental income are the true indices of political participation, just as they were before the industrial revolution. Wage rises? Improved working conditions? Aspiration? All so last year. Automation has made their obliteration possible, by reducing the costs of manufacture to virtually nothing.

Dasgupta’s vision of lives spent in subjection to a World Machine – fertile, terrifying, inhuman, unethical, and not in the least interested in us – was a suitcase of sorts, too, containing a lot of hype and no small amount of theology. It was also impossible to dismiss.

Cultural institutions dabbling in the AI pond should note the obvious moral. When we design something we decide to call an artificial intelligence, we commit ourselves to a linguistic path we shouldn’t pursue. To put it more simply: we must be careful what we wish for.