New flavours of intelligence

Reading about AI and governance for New Scientist, 13 September 2023

A sorcerer’s apprentice decides to use magic to help clean his master’s castle. The broom he enchants works well, dousing the floors with pails full of water. When the work is finished, the apprentice tries to stop the broom. Then, he tries to smash the broom. But the broom simply splits and regrows, working twice as hard as before, four times as hard, eight times as hard… until the rooms are awash and the apprentice all but drowns.

I wonder if Johann Wolfgang von Goethe’s 1797 poem sprang to mind as Mustafa Suleyman (co-founder of AI pioneers DeepMind, now CEO of Inflection AI) composed his new book, The Coming Wave? Or perhaps the shade of Robert Oppenheimer darkened Suleyman’s descriptions of artificial intelligence, and his own not insignificant role in its rise? “Decades after their invention,” he muses, “the architects of the atomic bomb could no more stop a nuclear war than Henry Ford could stop a car accident.”

Suleyman and his peers, having launched artificially intelligent systems upon the world, are right to tremble. At one point Suleyman compares AI to “an evolutionary burst like the Cambrian explosion, the most intense eruption of new species in the Earth’s history.”

The Coming Wave is mostly about the destabilising effects of new technologies. It describes a wildly asymmetric world where a single quantum computer can render the world’s entire encryption infrastructure redundant, and an AI mapping new drugs can be repurposed to look for toxins at the press of a return key.

Extreme futures beckon: would you prefer subjection under an authoritarian surveillance state, or radical self-reliance in a world where “an array of assistants… when asked to create a school, a hospital, or an army, can make it happen in a realistic timeframe”?

The predatory city states dominating this latter, neo-Renaissance future may seem attractive to some. Suleyman is not so sure: “Renaissance would be great,” he writes; “unceasing war with tomorrow’s military technology, not so much.”

A third future possibility is infocalypse, “where the information ecosystem grounding knowledge, trust, and social cohesion… falls apart.”

We’ll come back to this.

As we navigate between these futures, we should stay focused on current challenges. “I’ve gone to countless meetings trying to raise questions about synthetic media and misinformation, or privacy, or lethal autonomous weapons,” Suleyman complains, “and instead spent the time answering esoteric questions from otherwise intelligent people about consciousness, the Singularity, and other matters irrelevant to our world right now.”

Historian David Runciman makes an analogous point in The Handover, an impressive (and impressively concise) history of the limited liability company and the modern nation state. The emergence of both these “artificial agents” at the end of the 18th century was, Runciman argues, “the first Singularity”, when we tied our individual fates to two distinct but compatible autonomous computational systems.

“These bodies and institutions have a lot more in common with robots than we might think,” argues Runciman. Our political systems are already radically artificial and autonomous, and if we fail to appreciate this, we won’t understand what to do, or what to fear, when they acquire new flavours of intelligence.

Long-lived, sustainable, dynamic states — ones with a healthy balance between political power and civil society — won’t keel over under the onslaught of helpful AI, Runciman predicts. They’ll embrace it, and grow increasingly automated and disconnected from human affairs. How will we ever escape this burgeoning machine utopia?

Well, human freedom may still be a force to reckon with, according to Igor Tulchinsky. Writing with Christopher Mason in The Age of Prediction, Tulchinsky explores why the more predictable world ushered in by AI may not necessarily turn out to be a safer one. Humans evolved to take risks, and weird incentives emerge whenever predictability increases and risk appears to decline.

Tulchinsky, a quant who analyses the data flows in financial markets, and Mason, a geneticist who maps dynamics across human and microbial genomes, make odd bedfellows. Mason, reasonably enough, welcomes any advance that makes medicine more reliable. Tulchinsky fears lest perfect prediction in the markets render humans as docile and demoralised as cattle. The authors’ spirited dialogue illuminates their detailed survey of what predictive technologies actually do, in theatres from warfare to recruitment, policing to politics.

Let’s say Tulchinsky and Mason are right, and that individual free will survives governance by all-seeing machines. It does not follow at all that human societies will survive their paternalistic attentions.

This was the unexpected sting in the tail delivered by Edward Geist in Deterrence under Uncertainty, a heavyweight but unexpectedly gripping examination of AI’s role in nuclear warfare.

Geist, steeped in the history and tradecraft of deception, reckons the smartest agent — be it meat or machine — can be rendered self-destructively stupid by an elegant bit of subterfuge. Fakery is so cheap, easy and effective that Geist envisions a future in which artificially intelligent “fog-of-war machines” create a world that favours neither belligerents nor conciliators, but deceivers: “those who seek to confound and mislead their rivals.”

In Geist’s hands, Suleyman’s “infocalypse” becomes a weapon, far cleaner and cheaper than any mere bomb. Imagine future wars fought entirely through mind games. In this world of shifting appearances, littered with bloody accidents and mutual misconstruals, people are persuaded that their adversary does not want to hurt them. Rather than living in fear of retaliation, they come to realise the adversary’s values are, and always have been, better than their own.
Depending on your interests, your politics, and your sensitivity to disinformation, you may well suspect that this particular infocalyptic future is already upon us.

And, says Geist, at his most Machiavellian (he is the most difficult of the writers here; also the most enjoyable): “would it not be much more preferable for one’s adversaries to decide one had been right all along, and welcome one’s triumph?”


A lawyer scenting blood

Reading Unwired by Gaia Bernstein for New Statesman, 15 May 2023

In 2005, the journal Obesity Research published a study that, had we but known it, told us everything we needed to know about our coming addiction to digital devices.

The paper, “Bottomless Bowls: Why Visual Cues of Portion Size May Influence Intake” was about soup. Researchers led by Brian Wansink of Cornell University invited volunteers to lunch. One group ate as much soup as they wanted from regular bowls. The other ate from bowls that were bolted to the table and refilled automatically from below. Deprived of the “stopping signal” of an empty bowl, this latter group ate 73 per cent more than the others — and had no idea that they had over-eaten.

It’s a tale that must haunt the dreams of Aza Raskin, the man who invented, then publicly regretted, “infinite scroll”. That’s the way mobile phone apps (from Facebook to Instagram, Twitter to Snapchat) provide endless lists of fresh content to the user, regardless of how much content has already been consumed.

Gaia Bernstein, a law professor at Seton Hall, includes infinite scroll in her book’s catalogue of addicting smart-device features. But her book is just as much about what these devices don’t do. For instance, in his 2022 book Stolen Focus, Johann Hari wonders why Facebook never tells you which of your friends are nearby and up for a coffee. Well, the answer’s obvious enough: because lonely people, self-medicating with increasing quantities of social media, are Facebook’s way of making money.

What do we mean when we say that our mobile phones and tablets and other smart devices are addicting?

The idea of behavioural addiction was enshrined in DSM-5, the manual of mental disorders issued by the American Psychiatric Association, in 2013. DSM-5 is a bloated beast, and yet its flaky-sounding “Behavioral Addictions” — which, on the face of it, could make a mental disorder of everything we like to do — have proved remarkably robust, as medicine reveals how addictions, compulsions and enthusiasms share the same neurological pathways. You can addict humans (and not just humans) to pretty much anything. All you need to do is weaponise the environment.

And the environment, according to Bernstein’s spare, functional and frightening account, is most certainly weaponised. Teenagers, says Bernstein, spend barely a third of the time partying that they used to in the 1980s, and the number of teens who get together with their friends halved between 2000 and 2015. If ever there was a time to market a service to lonely people by making them more lonely, it’s now.

For those of us who want to sue GAMA (Google, Amazon, Meta, Apple) for our children’s lost childhood, galloping anxiety, poor impulse control, obesity, insomnia and raised suicide risk, the challenge is to demonstrate that it’s screentime that’s done all this damage to how they feel, and how they behave. And that, in an era of helicopter-parenting, is hard to do. danah boyd’s 2014 book It’s Complicated shows how difficult it’s going to be to separate the harms inflicted by little Johnny’s iPhone from all the benefits little Johnny enjoys. To hear boyd tell it, teenagers “obsessed” with social media are simply trying to recreate, for themselves and each other, a social space denied them by anxious parents, hostile authorities, and a mass media bent on exaggerating every conceivable out-of-doors danger.

The Covid pandemic has only exacerbated the stay-at-home, see-no-one trend among young people. Children’s average time online doubled from three to six hours during lockdown. It used to be that four per cent of children spent more than eight hours a day in front of a smart screen. Now over a quarter of them do.

Nor have we merely inherited this dismal state of affairs; we’ve positively encouraged it, stuffing our schools with technological geegaws in the fond and (as it turns out) wildly naive belief that I.T. will improve and equalise classroom performance. (It doesn’t, which is why Silicon Valley higher-ups typically send their children to Waldorf schools, which use chalk, right up until the eighth grade.)

Bernstein, who regularly peppers an otherwise quite dry account with some eye-popping personal testimony, recalls meeting one mum whose son was set to studying history through a Roblox game mode called Assassin’s Creed Odyssey (set in ancient Greece). “Since then, whenever she asks him to get off Roblox, he insists it is homework.”

Bernstein believes there’s more to all this than a series of unfortunate events. She thinks the makers of smart devices knew exactly what they were doing, as surely as the tobacco companies knew that the cigarettes they manufactured caused cancer.

Bernstein reckons we’re at a legal tipping point: this is her playbook for making GAMA pay for addicting us to glass.

Here’s what we already know about how companies respond to being caught out in massive wrong-doing.

First, they ignore the problem. (In 2018 an internal Facebook presentation warned: “Our algorithm exploits the human brain’s attraction to divisiveness… If left unchecked [it would feed users] more and more divisive content to gain user attention & increase time on the platform.” Mark Zuckerberg responded by asking his people “not to bring something like that to him again”.)

Then they deny there’s a problem. Then they go to war with the science, disputing critical studies and producing their own. Then, they fend off public criticism — and place responsibility on the consumer — by offering targeted solutions. (At least the filter tips added to cigarettes were easy to use. Most “parental controls” on smart devices are so cumbersome and inaccessible as to be unusable.) Finally, they offer to create a system of self-regulation — by which time, Bernstein reckons, you will have won, so long as you have proven that the people you’re going after intended, all along, to addict their customers.

You might, naively, imagine that this matter rests upon the science. It doesn’t, and Bernstein’s account of the screentime science wars is quite weak — a shallow confection built largely of single studies.

The scientific evidence is stronger than Bernstein makes it sound, but there’s still a problem: it’ll take a generation to consolidate. There are other, better ways to get at the truth in a timely manner; for instance, statistics, which will tell you that we have the largest ever recorded epidemic of teenage mental health problems, whose rising curves correlate with terrifying neatness with the launch of various social media platforms.

Bernstein is optimistic: “Justifying legal interventions,” she says, “is easier when the goal is to correct a loss of autonomy”, and this after all, is the main charge she’s laying at GAMA’s door: that these companies have created devices that rob us of our will, leaving us ever more civically and psychologically inept, the more we’re glued to their products.

Even better (at least from the point of view of a lawyer scenting blood), we’re talking about children. “Minors are the Achilles heel,” Bernstein announces repeatedly, and with something like glee. Remember how the image of children breathing in their parents’ second-hand smoke broke big tobacco? Well, just extend the analogy: here we have a playground full of kids taking free drags of Capstans and Players No. 6.

Unwired is not, and does not aspire to be, a comprehensive account of the screen-addiction phenomenon. It exists to be used: an agenda for social change through legal action. It is a knife, not a brush. But it’ll be of much more than academic value to those of us whose parenting years were overshadowed by feelings of guilt, frustration and anxiety, as we fought our hopeless battles, and lost our children to TikTok and Fortnite.

The mind unlocked

Reading The Battle for Your Brain by Nita Farahany for New Scientist, 19 April 2023

Iranian-American ethicist and lawyer Nita Farahany is no stranger to neurological intervention. She has sought relief from her chronic migraines in “triptans, anti-seizure drugs, antidepressants, brain enhancers, and brain diminishers. I’ve had neurotoxins injected into my head, my temples, my neck, and my shoulders; undergone electrical stimulation, transcranial direct current stimulation, MRIs, EEGs, fMRIs, and more.”

Few know better than Farahany what neurotechnology can do for people’s betterment, and this lends weight to her sombre and troubling account of a field whose speed of expansion alone should give us pause.

Companies like Myontec, Athos, Delsys and Noraxon already offer electromyography-generated insights to athletes and sports therapists. Control Bionics sells NeuroNode, a wearable EMG device for patients with degenerative neurological disorders, enabling them to control a computer, tablet, or motorised device. Neurable promises “the mind unlocked” with its “smart headphones for smarter focus.” And that’s before we even turn to the fast-growing interest in implantable devices; Synchron, Blackrock Neurotech and Elon Musk’s Neuralink all have prototypes in advanced stages of development.

Set aside the legitimate medical applications for a moment; Farahany is concerned that neurotech applications that used to let us play video games, meditate, or improve our focus have opened the way to a future of brain transparency “in which scientists, doctors, governments, and companies may peer into our brains and minds at will.”

Think it can’t be done? Think again. In 2017 a research team led by UC Berkeley computer scientist Dawn Song reported an experiment in which gamers used a neural interface to control a video game. As they played, the researchers inserted subliminal images into the game and watched for unconscious recognition signals. This game of neurological Battleships netted them one player’s credit card PIN code — and their home address.

Now Massachusetts-based Brainwave Science is selling a technology called iCognative, which is said to extract information from people’s brains: suspects are shown pictures related to crimes and cannot help but betray recognition of whatever they happen to recognise, a murder weapon, for example. Emirati authorities have already successfully prosecuted two cases using this technology.

This so-called “brain fingerprinting” technique is as popular with governments (Bangladesh, India, Singapore, Australia) as it is derided by many scientists.

More worrying are the efforts of companies, in the post-Covid era, to use neurotech in their continuing campaign to control the home-working environment. So-called “bossware” programmes already take regular screenshots of employees’ work, monitor their keystrokes and web usage, and photograph them at (or not at) their desks. San Francisco bioinformatics company Emotiv now offers to help manage your employees’ attention with its MN8 earbuds. These can indeed be used to listen to music or participate in conference calls; but with just two electrodes, one in each ear, Emotiv claims they can also record employees’ emotional and cognitive functions in real time.

It’ll come as no surprise if neurotech becomes a requirement in modern workplaces: no earbuds, no job. This sort of thing has happened many times already.

“As soon as [factory] workers get used to the new system their pay is cut to the former level,” complained Vladimir Lenin in 1912. “The capitalist attains an enormous profit, for the workers toil four times as hard as before and wear down their nerves and muscles four times as fast as before.”

Six years later, he approved funding for a Taylorist research institute. Say what you like about industrial capitalism, its logic is ungainsayable.

Farahany has no quick fixes to offer for this latest technological assault on the mind — “the one place of solace to which we could safely and privately retreat”. Her book left me wondering what to be more afraid of: the devices themselves, or the glee with which powerful institutions seize upon them.

Unoaku lives alone

Watching Mika Rottenberg and Mahyad Tousi’s Remote for New Scientist, 26 October 2022

From her high-rise in a future Kuala Lumpur, where goods flow freely, drone-propelled, while people stay trapped in their apartments, Unoaku (in a brilliant, almost voiceless performance by Okwui Okpokwasili) ekes out her little life. There are herbs on her windowsill, and vegetables growing in hydroponic cabinets built into her wall. If she’s feeling lazy, a drone will deliver her a meal that she can simply drop, box and all, into boiling water. Unoaku’s is a world of edible packaging and smart architecture, living rugs (she spritzes them each day) and profound loneliness. Unoaku lives alone — and so does everybody else.

Though Remote was filmed during the Covid-19 lockdowns, it would be a mistake to consider this just another “lockdown movie”. Unoaku’s world is by no stretch a world in crisis, still less a dystopia. Her vibrantly decorated apartment (I want her wallpaper and so will you) is more refuge than prison, its walls moving to accommodate their occupant, giving Unoaku at least the illusion of space. Had it not been for Covid, we would probably be viewing this woman’s life as a relatively positive metaphor for what it would be like to embark on a long space journey. One imagines Lunar or Martian settlers of the future settling for much less.

Hers is, however, a little life: reduced to self-care, to hours spent gesturing at a blank wall (she’s an architect, working in VR), and to evenings sprawled in front of Eun-ji and Soju, a Korean dog-grooming show (Soju is the terrier, Eun-ji (Joony Kim) its ebullient owner).

Then things start to go very slightly wrong. Unoaku’s pan is returned dirty from the cleaning service. Eun-ji turns up drunk to her live show. Unoaku notices that the goofy clock on Eun-ji’s wall has started to run backwards. When she points this out on the chat platform running beside the programme, she triggers a stream of contempt from other viewers.

Unoaku is far more fragile than we thought. Now, when she leans out of her window, bashing her cooking pan with a wooden spoon, celebrating — well, something; maybe just the fact of being alive and being able to hear other human beings — she is left shaking, her face wet with tears.

Soon other women are contacting her. They too have been watching Eun-ji and Soju. They too can see the clock going backwards on the dog-groomer’s wall. Bit by bit, a kind of community emerges.

Commissioned by the arts non-profit ArtAngel in the UK and a consortium of international galleries, Remote is that rare thing, an “art movie”. It belongs to a genre that became economically unviable with the advent of streaming and has been largely forgotten. (“Where are today’s Peter Greenaways and Derek Jarmans?” is a question that may not even compute for some readers, though these figures towered over the “arts & ents” pages of decades past.)

Director Mika Rottenberg, an artist working in upstate New York, is best known for her short, cryptic, funny video works like Sneeze (2012), in which well-dressed men with throbbing noses sneeze out live rabbits, steaks and lightbulbs. Her co-director Mahyad Tousi has a more mainstream screen background: he was the executive producer of CBS primetime comedy United States of Al and is currently writing a sci-fi adaptation of The Tales from a Thousand and One Nights.

One can’t expect this pair to revive the art movie overnight, of course, but Remote offers up an excellent argument for making the attempt. Like a modern Japanese or Korean short story, Remote explores the tiny bounds of an ordinary-seeming urban life, hemmed in by technology and consumption, and it surprises a world of deep feeling bubbling just beneath the surface.

How to prevent the future

Reading Gerd Gigerenzer’s How to Stay Smart in a Smart World for the Times, 26 February 2022

Some writers are like Moses. They see further than everybody else, have a clear sense of direction, and are natural leaders besides. These geniuses write books that show us, clearly and simply, what to do if we want to make a better world.

Then there are books like this one — more likeable, and more honest — in which the author stumbles upon a bottomless hole, sees his society approaching it, and spends 250-odd pages scampering about the edge of the hole yelling at the top of his lungs — though he knows, and we know, that society is a machine without brakes, and all this shouting comes far, far too late.

Gerd Gigerenzer is a German psychologist who has spent his career studying how the human mind comprehends and assesses risk. We wouldn’t have lasted even this long as a species if we didn’t negotiate day-to-day risks with elegance and efficiency. We know, too, that evolution will have forced us to formulate the quickest, cheapest, most economical strategies for solving our problems. We call these strategies “heuristics”.

Heuristics are rules of thumb, developed by extemporising upon past experiences. They rely on our apprehension of, and constant engagement in, the world beyond our heads. We can write down these strategies; share them; even formalise them in a few lines of light-weight computer code.

Here’s an example from Gigerenzer’s own work: Is there more than one person in that speeding vehicle? Is it slowing down as ordered? Is the occupant posing any additional threat?

Abiding by the rules of engagement set by this tiny decision tree reduces civilian casualties at military checkpoints by more than sixty per cent.
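Gigerenzer’s wider point, made above, is that such heuristics can indeed be written down and formalised in a few lines of light-weight code. Purely by way of illustration, here is a minimal Python sketch of that checkpoint rule as a fast-and-frugal decision tree; the three questions are the ones quoted above, but the exit actions are placeholder labels of my own, not the actual rules of engagement Gigerenzer studied.

```python
# Illustrative sketch of a fast-and-frugal tree for the checkpoint example.
# Each question is asked in order, and each can settle the decision on its own;
# the action labels are placeholders, not real rules of engagement.

def checkpoint_decision(more_than_one_occupant: bool,
                        slowing_down_as_ordered: bool,
                        additional_threat: bool) -> str:
    if not more_than_one_occupant:
        return "hold fire"                     # lone occupant: treated here as non-hostile
    if slowing_down_as_ordered:
        return "hold fire"                     # compliance: keep weapons down
    if not additional_threat:
        return "use non-lethal warning measures"
    return "take defensive action"             # only this branch escalates

# Example: a crowded vehicle that is not slowing down but poses no other threat.
print(checkpoint_decision(True, False, False))
```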

We can apply heuristics to every circumstance we are likely to encounter, regardless of the amount of data available. The complex algorithms that power machine learning, on the other hand, “work best in well-defined, stable situations where large amounts of data are available”.

What happens if we decide to hurl 200,000 years of heuristics down the toilet, and kneel instead at the altar of occult computation and incomprehensibly big data?

Nothing good, says Gigerenzer.

How to Stay Smart is a number of books in one, none of which, on its own, is entirely satisfactory.

It is a digital detox manual, telling us how our social media are currently weaponised, designed to erode our cognition (but we can fill whole shelves with such books).

It punctures many a rhetorical bubble around much-vaunted “artificial intelligence”, pointing out how easy it is to, say, have a young man of colour denied bail by proprietary risk-assessment software. (In some notorious cases the software had been trained on, and so was liable to perpetuate, historical injustices.) Or would you prefer to force an autonomous car to crash by wearing a certain kind of T-shirt? (Simple, easily generated pixel patterns cause whole classes of networks to make bizarre inferential errors about the movement of surrounding objects.) This is enlightening stuff, or it would be, were the stories not quite so old.

One very valuable section explains why forecasts derived from large data sets become less reliable, the more data they are given. In the real world, problems are unbounded; the amount of data relevant to any problem is infinite. This is why past information is a poor guide to future performance, and why the future always wins. Filling a system with even more data about what used to happen will only bake in the false assumptions that are already in your system. Gigerenzer goes on to show how vested interests hide this awkward fact behind some highly specious definitions of what a forecast is.

But the most impassioned and successful of these books-within-a-book is the one that exposes the hunger for autocratic power, the political naivety, and the commercial chicanery that lie behind the rise of “AI”. (Healthcare AI is a particular bugbear: the story of how the Dutch Cancer Society was suckered into funding big data research, at the expense of cancer prevention campaigns that were shown to work, is especially upsetting).

Threaded through this diverse material is an argument Gigerenzer maybe should have made at the beginning: that we are entering a new patriarchal age, in which we are obliged to defer, neither to spiritual authority, nor to the glitter of wealth, but to unliving, unconscious, unconscionable systems that direct human action by aping human wisdom just well enough to convince us, but not nearly well enough to deliver happiness or social justice.

Gigerenzer does his best to educate and energise us against this future. He explains the historical accidents that led us to muddle cognition with computation in the first place. He tells us what actually goes on, computationally speaking, behind the chromed wall of machine-learning blarney. He explains why, no matter how often we swipe right, we never get a decent date; he explains how to spot fake news; and he suggests how we might claw our minds free of our mobile phones.

But it’s a hopeless effort, and the book’s most powerful passages explain exactly why it is hopeless.

“To improve the performance of AI,” Gigerenzer explains, “one needs to make the physical environment more stable and people’s behaviour more predictable.”

In China, the surveillance this entails comes wrapped in Confucian motley: under its social credit score system, sincerity, harmony and wealth creation trump free speech. In the West the self-same system, stripped of any ethic, is well advanced thanks to the efforts of the credit-scoring industry. One company, Acxiom, claims to have collected data from 700 million people worldwide, and up to 3000 data points for each individual (and quite a few are wrong).

That this bumper data harvest is an encouragement to autocratic governance hardly needs rehearsing, or so you would think.

And yet, in a 2021 study of 3,446 digital natives, 96 per cent “do not know how to check the trustworthiness of sites and posts.” I think Gigerenzer is pulling his punches here. What if, as seems more likely, 96 per cent of digital natives can’t be bothered to check the trustworthiness of sites and posts?

Asked by the author in a 2019 study how much they would be willing to spend each month on ad-free social media — that is, social media not weaponised against the user — 75 per cent of respondents said they would not pay a cent.

Have we become so trivial, selfish, short-sighted and penny-pinching that we deserve our coming subjection? Have we always been servile at heart, for all our talk of rights and freedoms; desperate for some grown-up to come tug at our leash, and bring us to heel?

You may very well think so. Gigerenzer could not possibly comment. He does, though, remark that operant conditioning (the kind of learning explored in the 1940s by behaviourist B F Skinner, that occurs through rewards and punishments) has never enjoyed such political currency, and that “Skinner’s dream of a society where the behaviour of each member is strictly controlled by reward has become reality.”

How to Stay Smart in a Smart World is an optimistic title indeed for a book that maps, with passion and precision, a hole down which we are already plummeting.

“A perfect storm of cognitive degradation”

Reading Johann Hari’s Stolen Focus: Why you can’t pay attention for the Telegraph, 2 January 2022

Drop a frog into boiling water, and it will leap from the pot. Drop it into tepid water, brought slowly to the boil, and the frog will happily let itself be cooked to death.

Just because this story is nonsense, doesn’t mean it’s not true — true of people, I mean, and their tendency to acquiesce to poorer conditions, just so long as these conditions are introduced slowly enough. (Remind yourself of this next time you check out your own groceries at the supermarket.)

Stolen Focus is about how our environment is set up to fracture our attention. It starts with our inability to set the notifications correctly on our mobile phones, and ends with climate change. Johann Hari thinks a huge number of pressing problems are fundamentally related, and that the human mind is on the receiving end of what amounts to a denial-of-service attack. One of Hari’s many interviewees is Earl Miller from MIT, who talks about “a perfect storm of cognitive degradation, as a result of distraction”; to which Hari adds the following, devastating gloss: “We are becoming less rational, less intelligent, less focused.”

To make such a large argument stick, though, Hari must ape the wicked problem he’s addressing: he must bring the reader to a slow boil.

Stolen Focus begins with an extended grumble about how we don’t read as many books as we used to, or buy as many newspapers, and how we are becoming increasingly enslaved to our digital devices. Why we should listen to Hari in particular, admittedly a latecomer to the “smartphones bad, books good” campaign, is not immediately apparent. His account of his own months-long digital detox — idly beachcombing the shores of Provincetown at the northern tip of Cape Cod, War and Peace tucked snugly into his satchel — is positively maddening.

What keeps the reader engaged are the hints (very well justified, it turns out) that Hari is deliberately winding us up.

He knows perfectly well that most of us have more or less lost the right to silence and privacy — that there will be no Cape Cod for you and me, in our financial precarity.

He also knows, from bitter experience, that digital detoxes don’t work. He presents himself as hardly less of a workaholic news-freak than he was before taking off to Massachusetts.

The first half of Stolen Focus got me to sort out my phone’s notification centre, and that’s not nothing; but it is, in the greater scheme of Hari’s project, hardly more than a parody of the by now very familiar “digital diet book” — the sort of book that, as Hari eventually points out, can no more address the problems filling this book than a diet book can address epidemic obesity.

Many of the things we need to do to recover our attention and focus “are so obvious they are banal,” Hari writes: “slow down, do one thing at a time, sleep more… Why can’t we do the obvious things that would improve our attention? What forces are stopping us?”

So, having had his fun with us, Hari begins to sketch in the high sides of the pot in which he finds us being coddled.

The whole of the digital economy is powered by breaks in our attention. The finest minds in the digital business are being paid to create ever-more-addicting experiences. According to former Google engineer Tristan Harris, “we shape more than eleven billion interruptions to people’s lives every day.” Aza Raskin, co-founder of the Center for Humane Technology, calls the big tech companies “the biggest perpetrators of non-mindfulness in the world.”

Social media is particularly insidious, promoting outrage among its users because outrage is wildly more addictive than real news. Social media also promotes loneliness. Why? Because lonely people will self-medicate with still more social media. (That’s why Facebook never tells you which of your friends are nearby and up for a coffee: Facebook can’t make money from that.)

We respond to the anger and fear a digital diet instils with hypervigilance, which wrecks our attention even further and damages our memory to boot. If we have children, we’ll keep them trapped at home “for their own safety”, though our outdoor spaces are safer than they have ever been. And when that carceral upbringing shatters our children’s attention (as it surely will), we stuff them with drugs, treating what is essentially an environmental problem. And on and on.

And on. The problem is not that Stolen Focus is unfocused, but that it is relentless: an unfeasibly well-supported undergraduate rant that swells — as the hands of the clock above the bar turn round and the beers slide down — to encompass virtually every ill on the planet, from rubbish parenting to climate change.

“If the ozone layer was threatened today,” writes Hari, “the scientists warning about it would find themselves being shouted down by bigoted viral stories claiming the threat was all invented by the billionaire George Soros, or that there’s no such thing as the ozone layer anyway, or that the holes were really being made by Jewish space lasers.”

The public campaign Hari wants Stolen Focus to kick-start (there’s an appendix; there’s a weblink; there’s a newsletter) involves, among other things, a citizen’s wage, outdoor play, limits on light pollution, public ownership of social media, changes in the food supply, and a four-day week. I find it hard to disagree with any of it, but at the same time I can’t rid myself of the image of how, spiritually refreshed by War and Peace, consumed in just a few sittings in a Provincetown coffee shop, Hari must (to quote Stephen Leacock) have “flung himself from the room, flung himself upon his horse and rode madly off in all directions”.

If you read just one book about how the modern world is driving us crazy, read this one. But why would you read just one?

An inanimate object worshipped for its supposed magical powers

Watching iHuman, directed by Tonje Hessen Schei, for New Scientist, 6 January 2021

Tonje Hessen Schei is a Norwegian documentary maker who has won numerous awards for her explorations of humans, machines and the environment. In 2010 she made Play Again, exploring digital media addiction among children. In 2014 she won awards for Drone, about the CIA’s secret role in drone warfare.

Now, with iHuman, Schei tackles — well, what, exactly? iHuman is a weird, portmanteau diatribe against computation — specifically, that branch of it that allows machines to learn about learning. Artificial general intelligence, in other words.

Incisive in parts, often overzealous, and wholly lacking in scepticism, iHuman is an apocalyptic vision of humanity already in thrall to the thinking machine, put together from intellectual celebrity soundbites, and illustrated with a lot of upside-down drone footage and digital mirror effects, so that the whole film resembles nothing so much as a particularly lengthy and drug-fuelled opening credits sequence to the crime drama Bosch.

That’s not to say that Schei is necessarily wrong, or that our Faustian tinkering hasn’t doomed us to a regimented future as a kind of especially sentient cattle. The film opens with that quotation from Stephen Hawking, about how “Success in creating AI might be the biggest success in human history. Unfortunately, it might also be the last.” If that statement seems rather heated to you, go visit Xinjiang, China, where a population of 13 million Turkic Muslims (Uyghurs and others) are living under AI surveillance and predictive policing.

Nor are the film’s speculations particularly wrong-headed. It’s hard, for example, to fault the line of reasoning that leads Robert Work, former US under-secretary of defense, to fear autonomous killing machines, since “an authoritarian regime will have less problem delegating authority to a machine to make lethal decisions.”

iHuman’s great strength is its commitment to the bleak idea that it only takes one bad actor to weaponise artificial general intelligence before everyone else has to follow suit in their own defence, killing, spying and brainwashing whole populations as they go.

The great weakness of iHuman lies in its attempt to throw everything into the argument: social media addiction, prejudice bubbles, election manipulation, deep fakes, automation of cognitive tasks, facial recognition, social credit scores, autonomous killing machines…

Of all the threats Schei identifies, the one conspicuously missing is hype. For instance, we still await convincing evidence that Cambridge Analytica’s social media snake oil can influence the outcome of elections. And researchers still cannot replicate psychologist Michal Kosinski’s claim that his algorithms can determine a person’s sexuality and even their political leanings from their physiology.

Much of the current furore around AI looks jolly small and silly once you remember that the major funding model for AI development is advertising. Almost every millennial claim about how our feelings and opinions can be shaped by social media is a retread of claims made in the 1910s for the billboard and the radio. All new media are terrifyingly powerful. And all new media age very quickly indeed.

So there I was, hiding behind the sofa and watching iHuman between slitted fingers (the score is terrifying, and artist Theodor Groeneboom’s animations of what the internet sees when it looks in the mirror are the stuff of nightmares), when it occurred to me to look up the word “fetish”. To refresh your memory, a fetish is an inanimate object worshipped for its supposed magical powers or because it is considered to be inhabited by a spirit.

iHuman is a profoundly fetishistic film, worshipping at the altar of a god it has itself manufactured, and never more unctuously than when it lingers on the athletic form of AI guru Jürgen Schmidhuber (never trust a man in white Levi’s) as he complacently imagines a post-human future. Nowhere is there mention of the work being done to normalise, domesticate, and defang our latest creations.

How can we possibly stand up to our new robot overlords?

Try politics, would be my humble suggestion.