On Tuesday 2 June I had a chat with Jonathan Strahan for the Coode Street podcast.
Star Trek first appeared on television on 8 September 1966. It has been fighting the gravitational pull of its own nostalgia ever since – or at least since the launch of the painfully careful spin-off Star Trek: The Next Generation 21 years later.
The Next Generation was the series that gave us shipboard counselling (a questionable idea), a crew that liked each other (a catastrophically mistaken idea) and Patrick Stewart as Jean-Luc Picard, who held the entire farrago together, pretty much single-handed, for seven seasons.
Now Picard is back, retired, written off, an embarrassment and a blowhard. And Star Trek: Picard is a triumph, praise be.
Something horrible has happened to the “synthetics” (read: robots) who, in the person of Lieutenant Commander Data (Brent Spiner, returning briefly here) once promised so much for the Federation. Science fiction’s relationship with its metal creations is famously fraught: well thought-through robot revolt provided the central premise for Battlestar Galactica and Westworld, while Dune, reinvented yet again later this year as a film by Blade Runner 2049’s Denis Villeneuve, is set in a future that abandoned artificial intelligence following a cloudy but obviously dreadful conflict.
And there is a perfectly sound reason for this mayhem. After all, any machine flexible enough to do what a robot is expected to do is going to be flexible enough to down tools – or worse. What Picard’s take on this perennial problem will be isn’t yet clear, but the consequences of all the Federation’s synthetics going haywire are painfully felt: it has all but abandoned its utopian remit. It is now just one more faction in a fast-moving, galaxy-wide power arena (echoes of the Trump presidency and its consequences are entirely intentional).
Can Picard, the last torchbearer of the old guard, bring the Federation back to virtue? One jolly well hopes so, and not too quickly, either. Picard is, whatever else we may say about it, a great deal of fun.
There are already some exciting novelties, though the one I found most intriguing may turn out to be a mere artefact of getting the show off the ground. Picard’s world – troubled by bad dreams quite as much as it is enabled by world-shrinking technology – is oddly surreal, discontinuous in ways that aren’t particularly confusing but do jar here and there.
Is the Star Trek franchise finally getting to grips with the psychological consequences of its mastery of time and space? Or did the producers simply shove as much plot as possible into the first episode to get the juggernaut rolling? The latter seems more likely, but I hold out hope.
The new show bears its burden of twaddle. The first episode features a po-faced analysis of Data’s essence. No, really. His essence. That’s a thing, now. How twaddle became an essential ingredient on The Next Generation – and now possibly Picard – is a mystery: the original Star Trek never felt the need to saddle itself with such single-use, go-nowhere nonsense. But by now, like a hold full of tribbles, the twaddle seems impossible to shake off (Star Trek: Discovery, I’m looking at you).
Oh, but why cavil? Stewart brings a new vulnerability and even a hint of bitterness to grit his seamlessly fluid recreation of Picard, and the story promises an exciting and fairly devastating twist to the show’s old political landscape. Picard, growing old disgracefully? Oh, please make it so!
Thanks (I assume) to those indefatigable Head of Zeus people, who are even now getting my anthology We Robots ready for publication, I’m invited to this year’s Berlin International Literature Festival, to take part in Automatic Writing 2.0, a special programme devoted to the literary impact of artificial intelligence.
Amidst other mischief, on Sunday 15 September at 12:30pm I’ll be reading from a new story, The Overcast.
By the end of the show, I was left less impressed by artificial intelligence and more depressed that it had reduced my human worth to base matter. Had it, though? Or had it simply made me aware of how much I wanted to be base matter, shaped into being by something greater than myself? I was reminded of something that Benjamin Bratton, author of the cyber-bible The Stack, said in a recent lecture: “We seem only to be able to approach AI theologically.”
‘I gotta be me,’ Sammy Davis Jr. croons as the android Dolores Abernathy (Evan Rachel Wood) steadies her horse, stands up on her stirrups, takes aim with her Winchester, and picks off her human masters one by one.
The trailer’s out at last and the futuristic TV series Westworld is set to return in the spring. It’s a prescient show, but not in the ways you might expect. It’s not about robot domination. Westworld is about an uprising of pleasure cyborgs in a futuristic resort. It is, for all its gunplay, about love. And that makes it a very timely show indeed.
In the real world, robots are actually being designed to love us — to fill traditional caring roles for which we have neither the time, energy, nor resources. Robots are being built to help the elderly, nurse the sick and tend the children. Pundits often take this as evidence of our selfish, lazy, reprehensible present. But we’ve been working towards this moment for a very long time, and would it really be so very bad?
If you think that families should look after their own elderly, you’ll need to explain why in south-east Asia, traditionally a region of three- and even four-generation family units, nouveau-riche gated retirement communities are springing up like mushrooms after a spring rain. Perhaps the elderly don’t long to live among us, as we imagine. Perhaps poverty is the only thing nailing Grandma to the family couch. As for the sick, we’ve long since been consigning them to institutions, be they care homes, hospitals or hospices, where people who are better-trained promise to look after them.
The question is not whether we should employ robots. Given the lousiness of some institutions, why on earth wouldn’t we? The question is whether the robots we employ will be any good, and whether we can accept them as substitute humans. We’d like to think not, but there’s evidence to suggest that we’ll bond with even a basic machine far more easily than we’d like to believe.
In 2011, Takanori Shibata, a Japanese engineer, turned up on the coast of tsunami-wracked Fukushima and handed out around 80 robotic seal pups to the victims of the disaster. Refugees warmed to the robots: many have held on to them and continue to look after them. Shibata could have turned up with puppies, or kittens or guinea pigs and would probably have achieved greater therapeutic impact. But who has the money and time to feed and look after 80 animals in a disaster zone? Pets need care and attention — a point not lost on the residential homes that employ Shibata’s robot seals to comfort their elderly, often demented, charges. A single ‘Paro’ — an acronym that roughly translates as ‘personal robot’ — costs around $5,000. A real-life therapy dog may cost more than $50,000 over its lifetime.
Paro isn’t much of a robot. It can move its head, neck, eyelids, flippers and tail. It responds to the human voice and to touch. It understands simple words and phrases (the sort we use with pets and babies). It knows when it’s being treated well, and when it’s being roughly handled. Its cries (made from digitally sampled baby seal sounds) have a discernible emotional range. It’s old news — the first Paros were sold in 1998 — but it’s making headlines again this year because the ninth generation model is being assessed for use on long space journeys. Mars colonists, permanently deprived of wider human society, will find consolation in a robotic animal chosen for its inability to disappoint. Robot dogs are a let-down because we know what pet dogs are like. How many pet seals do you know? Paro’s very blandness is its point. Its easy, undemanding displays of personal affection reduce stress, anxiety, depression, wandering and aggression among the demented of 30 countries. It must be only a matter of time before Paro makes it into the ‘safe spaces’ on university campuses.
Kaspar, designed by the University of Hertfordshire’s Adaptive Systems Research Group, is hardly more sophisticated in appearance: a bland foot-high doll in a check shirt. It’s not really a robot — more a mechanical puppet, controlled remotely by researchers. Its expressive minimalism and extreme simplicity reassure the children it plays with — those with severe autism or those who have suffered trauma and abuse.
According to Living with Robots, Paul Dumouchel and Luisa Damiano’s recent survey of social robotics, robots are likely to be stuck in this uncanny state for some time, while we try to codify what ‘behaving like a human being’ actually means. We have vast knowledge of ourselves as social beings, of course, evidenced by millennia of cultural output from Dream of the Red Chamber to Breaking Bad. What we lack is a high-level description of human behaviour of the sort that can find its way into computer code. We all know why we laugh, cry, blush and commit suicide, but we have not the slightest idea what laughing, crying, blushing and committing suicide are for. This is why social robots attract so much academic attention: they are an experimental apparatus, through which we study ourselves.
Countless robot nurse prototypes, with names like Terapio and Robear, are under trial. The problems they are meant to address are real. We have conquered disease to the point where people regularly stay healthy into their nineties. This is why the US has as many people over 85 as children under five and China has 100 million senior citizens to look after. Someone or something needs to look after us in our dotage. Then there are the edge cases: those social wrinkles we could conceivably iron out with robots, but not without consequence. Should we roll out sex robots to address the uneven gender ratios in China? Straight men right now have next to no opportunity for sexual companionship: don’t they deserve some comfort?
Not according to Kathleen Richardson and Erik Brilling, whose Campaign Against Sex Robots, launched in 2015, declares that sex with an animate object that lacks agency can only brutalise us. Notwithstanding that sex robots are a bit rubbish, this particular rabbit hole swallows academics by the ton.
Nations with the most intractable demographic problems are the ones most entranced by the promise of robotics. Japan’s population is crashing as a generation of young people eschews sex. A third of men under 30 have never dated. Women prefer singledom to the life of penury and drudgery afforded by Japanese marriage. A new book by Jennifer Robertson, Robo Sapiens Japanicus: Robots, Gender, Family, and the Japanese Nation, unpicks the Japanese government’s published blueprint for revitalising the nation’s households by 2025. If we can only build robots to do the housework, the argument runs, then women will have more time for having babies. Once again, technology is being promoted not because it ushers in the future but because it preserves the past. (A driverless car is still, after all, a car: not a bus or a train or a decent broadband link. And a robot servant is still a servant.)
On the one hand, robots are like Uber and the spinning jenny. They promise to increase production while preserving the institutions of capital. They’re disruptive right up to the point where something might happen to the money. A more intriguing threat is the one directed at our own social lives. Surrounded by dull, bland, easy-going robot companions, will we come to expect less of other people? Assisted, cared for, and even seduced silly by machines, will we lower our expectations around concepts like ‘conversation’, ‘care’, ‘companionship’ and ‘love’?
Paro and Kaspar are creepy not for what they are — clinical tools, improving the lives of vulnerable people — but for what they portend: a world in which you and I find Paro and Kaspar a sufficient substitute for other people. ‘Robotic companionship may seem a sweet deal,’ wrote the social scientist Sherry Turkle back in 2011, ‘but it consigns us to a closed world — the loveable as safe and made to measure.’ Will our constant association with such easy-going, selfless-because-characterless robots make us emotionally lazy?
We’ve imagined this sort of future many times. Hesiod was writing poems about ineluctable degeneration around 700 BC. H.G. Wells’s The Time Machine (1895) imagines a world in which the beautiful, sensitive people — the Eloi — have all the savvy of veal calves and ‘civilisation’ has turned out to be nothing but a process of self-domestication. And it’s true: civilisation is as much about forgetting, and attendant helplessness, as it is about learning. In my own lifetime, handwriting and mental arithmetic have gone to the wall, and the art of everyday literary nuance is being ousted by the application of quick, characterful emoji. Having to learn new skills is a nuisance. Having to dispense with skills already acquired is a little death: a diminution of the spirit.
The pioneering psychologist William James argued that what we want from a lover is that they really love us, and not simply behave as if they did. I hope that’s true. If we come to believe that the soul is nothing more than behaviour, then of course a robot will become just as good as a person. Why even bother to build better robots? An Eloi future beckons: all we have to do is lower our expectations.
Above the exhibits in the first room of Hello, Robot, a large sign asks: “Have you ever met a robot?” Easy enough. But the questions keep on coming, and by the end of the exhibition, we’re definitely not in Kansas any more: “Do you believe in the death and rebirth of things?” is not a question you want to answer in a hurry. Nor is my favourite, the wonderfully loaded “Do you want to become better than nature intended?”
That we get from start to finish of the show in good order, not just informed but positively exhilarated, is a testament to the wiliness of the three curating institutions: the Vitra Design Museum in Germany, the Design Museum Ghent in Belgium, and MAK in Austria.
One of the show’s advisors, architect Carlo Ratti, head of the MIT Senseable City Lab, nails the trouble with such shows: “Any environment, any city, any landscape can become a robot when it is equipped with sensors, actuators and intelligence.” By the time robots do useful work, they have vanished. Once, we called traffic lights “robots”, now, we barely see them.
Robots, an exhibition currently at London’s Science Museum, gets caught in this bind. By following a “science fiction becomes science fact” trajectory, it creates a show that gets more boring as you work your way through it. Hello, Robot is much cannier: it knows that while science fiction may spin off real artefacts now and again, it never becomes science fact. Does writing down a dream stop you dreaming? Of course not.
Hello, Robot is about design. Its curators explore not only what we have made, but also what we have dreamed. Fine art, speculative designs, commercial products, comic books and movie clips are arranged together to create a glimpse of the robot’s place in our lives and imaginations. Far from disappearing, robots seem more likely to be preparing a jail-break.
The longings, fantasies and anxieties that robots are meant to address are as ancient as they are unrealisable. Robots exist to do what we can imagine doing, but would rather not do. They were going to mow our lawns; now we’re glad of the exercise, and we might prefer to have them feed our babies – or look after much older people, as Dan Chen’s 2012 End of Life Care Machine envisions.
This robot mechanically strokes a dying patient – a rather dystopian provocation, or so Chen thought until some visitors asked to buy one. Exhibited here, Chen’s piece is accompanied by a note he wrote: should he encourage people to leave family members alone in their final hours or deny them the comfort of a machine?
Hello, Robot asks difficult questions in a thrillingly designed setting. It is a show to take the children to (just try not to let them see your face in Room 3 as you check on a computer to see if your job’s about to be automated).
There’s a deep seriousness about this show; if design teaches us anything, it is that no one is ever in charge of the future. “The question of whether we need, or even like [robots] is not really ours to ask,” a wallboard opines. “Do we actually need smartphones? Ten years ago, most people would probably have answered no.” Our roles in this “lifeworld” of the future are still to be defined.
Catching the exhibition in Germany, I go round three times until it’s late. I adore industrial robot YuMi’s efforts to roll a ball up a steep incline, and I grin as I walk past a clip of the automated kitchen in Jacques Tati’s 1958 film Mon Oncle. Still, I can’t quite take my eyes off a 2005 photograph of a Chinese factory by Edward Burtynsky, who visited China’s shipyards and industrial plants. Identical figures performing identical actions remind me of iconic British newspaper sketches of weaving machines from the industrial revolution.
We have not outgrown the need for human regimentation – we simply outsource it to cheaper humans. Whether robots become cheap enough to undercut poor people, and what happens if they do, are big questions. But this show can bear them.
People are by far the easiest animals to train. Whenever you try to get some bit of technology to work better, you can be sure that you are also training yourself. Steadily, day by day, we are changing our behaviours to better fit with the limitations of our digital environment. Whole books have been written about this, but we keep making the same mistakes. On 6 November 2014, at Human Interactive, a day-long conference on human-machine interaction at Goldsmith’s College in London, Rodolphe Gelin, the research director of robot-makers Aldebaran, screened a video starring Nao, the company’s charming educational robot. It took a while before someone in the audience (not me) spotted the film’s obvious flaw: how come the mother is sweating away in the kitchen while the robot is enjoying quality time with her child?
We still obsess over the “labour-saving” capacities of our machines, still hanker after more always-elusive “free time”, but we never think to rethink the value of labour itself. This is the risk we run: that we will save ourselves from the very labour that makes our lives worthwhile.
Organised by William Latham and Frederic Fol Leymarie, Human Interactive was calculated (quite deliberately, I expect) to stir unease.
Beyond the jolly, anecdotal presentations about the computer games industry from Creative Assembly’s Guy Davidson and game designer Jed Ashforth, there emerged a rather unflattering vision of how humans best interact with machines. The biophysicist Michael Sternberg, for instance, is harnessing the wisdom of crowds to gamify and thereby solve difficult problems in systems biology and bioinformatics. For Sternberg’s purposes, people are effectively interchangeable components in a kind of meat parallel-processing system. Individually, we do have some merit: we are good at recognising and classifying patterns. This at least makes us better than pigeons, but only at the things that pigeons are good at already.
Sternberg would be mortified to see his work described in such terms – but this is the point: human projects, fed through the digital mill, emerge with their humanity stripped away. It’s up to people at the receiving end of the milling process to put the humanity back in. I wasn’t sure, listening to Nilli Lavie’s presentation on attention, to what human benefit her studies would be put. The UCL neuroscientist’s key point is well taken – that people perform best when they are neither overloaded with information, nor deprived of sufficient stimulus. But what did she mean by her claim that wandering attention loses the US economy around two billion dollars a year? Were American minds to be perfectly focused, all the year round, would that usher in some sort of actuarial New Jerusalem? Or would it merely extinguish all American dreaming? Without a space for minds to wander in, where would a new idea – any new idea – actually come from?
Not that ideas will save us. Ideas, in fact, got us into this mess in the first place, by reminding us that the world as-is is less than it could be. We are very good at dreaming up scenarios that we are not currently experiencing. We are all too capable of imagining elusive “perfect” experiences. Digital media feed these yearnings. There is something magical about a balanced spreadsheet, a glitchless virtual surface, the beauty of a symmetrical avatar under perfect, unreal light.
Henrietta Bowden-Jones, founder and director of the National Problem Gambling Clinic, is painfully aware of how digital media encourage our obsessive and addictive behaviours. Games are hardly the new tobacco — at least, not yet — but psychologists are being hired to make them ever-more addictive; Bowden-Jones’s impressively understated presentation suggested that games may soon generate behavioural and social problems as acute as those thrown up by online gambling.
The day after the conference, Goldsmith’s College hosted Creative Machine, a week-long exhibition of machine creativity. In a church abutting the campus, robots sketched human skulls, balanced pendulums, and noodled around with evolutionary algorithms. I expected still more alienation, a surfeit of anxiety. In fact, Creative Machine left me feeling strangely reassured.
Those of us who play with computers, or know a little about science, harbour what amounts to a religious conviction: that somewhere deep down, at the bottom of this messy reality, there is an order at work. Call it mathematics, or physics, or reason. Whichever way you cut it, we believe there’s a law. But this just isn’t true. Put a computer to work in the real world, and it messes up. More exciting still, it messes up in just the ways we would. Félix Luque Sánchez’s simple robots on rails shuttle backwards and forwards in a brave and ultimately futile attempt to balance a pendulum. Anyone who’s ever tried to balance a book on their head will recognise themselves in every move, every acceleration, every hesitation – every failure.
Even a robot who knows what it’s doing will get entangled. Patrick Tresset has programmed a robot called Paul with the rules of life drawing and draughtsmanship. Paul, presented with a still-life, follows these rules unthinkingly – and yet every picture it churns out is unique, shaped by tiny, unrepeatable fluctuations in its environment (a snaggy biro, a heavy-footed passer-by, a cloud crossing the sun…).
If an emblem were needed for this show, then Cécile Babiole provides it. She has run the phrase “NE DOIS PAS COPIER” (literally: “must not copy”) through a 3-D copier, over and over again, playing a familiar game of generational loss. And it’s the strangest thing: as they decay, her printed plastic letters take on organic form, become weeds, become coral, become limbs and organs. They lose their original meaning, only to acquire others. They do not become nothing, the way an over-photocopied picture becomes nothing. They become rich and strange.
Maths, rationality and science are magnificent tools with which to investigate the world. But we commit a massive and dangerous category error when we assume the world is built out of maths and reason.
With a conference to beat us, and an exhibition to entice us, Latham and Fol Leymarie have led us, without us ever really noticing, to a view of a new kind of digital future. A future of approximations and mistakes and acts of bricolage. It is not a human future, particularly. But it is a future that accommodates us, and we should probably be grateful.
London, 1977: the international grandmaster Michael Stean is losing to Chess 4.6, a computer programme developed at Northwestern University, Illinois. Stean is steamed: he is losing. Chess 4.6 is, he says, “an iron monster”. When finally he admits defeat, however, he does so with grace, declaring 4.6 a genius.
Whether we’re leaving it all to the cat, or thrashing an Austin 1300 estate with a stick, we anthropomorphise as much of the world as we can. Twelve thousand years ago we took wild animals and fashioned them in our image: domestic cats have evolved babyish complexions to appeal to our love of cute.
Anthropomorphism, although apparently a sentimental tic, is central to what makes us human. A baby’s realisation that other people are more than animated furniture develops over time, prompted and reinforced by a pattern of exchanged glances. Long before children acquire this understanding (called theory of mind), they are fascinated by eyes, and by the direction of another’s gaze. We become human only because, early on, someone treated us as human.
How complex does something have to be before it passes as human? The answer seems to be not very. A consortium led by the University of Plymouth has just won a £4.7m grant to teach a humanoid robot named iCub how to speak English. Its theory of mind may depend less on intellectual potential than on the scientists’ willingness to treat their charge like a real infant.
Let’s hope it grows into a sociable little thing. The bald fact is, we need it. The US Census Bureau has estimated that the nation’s elderly population will more than double by 2050, to 80 million. But there are simply not enough young to look after them. A study by Saint Louis University, Missouri, shows robot dogs are as much of a comfort to the elderly as real dogs. In 30 years, robot carers will be required to provide practical help, as well as solace, for old people.
Domestic robots are already big business. The sale of service robots in Japan is expected to top £5bn by 2015. Mind is the final hurdle, but robots don’t have to be as clever as us to care for us, converse with us, or accompany us. They just have to be clever enough. Our instinct for anthropomorphism will do the rest.
This, anyway, is the message of Love and Sex with Robots, a book by David Levy. A chess international master, Levy was driven by his passion for artificial intelligence to lead the team that created Converse – a programme which, in 1997, won the Loebner prize, an award for the most convincing computer conversationalist.
Now in his mid-60s, Levy is bringing artificial life to sex. “Humans long for affection and tend to be affectionate to those who offer it,” he says, and predicts that prostitution has only about another 20 years to run before robots take over. Robots with credibly human bodies are already here. Add minds clever enough to handle a little language, and how could we possibly avoid loving them?
Levy argues that robots will appeal to our better natures. It has already happened. Remember those Japanese toys you had to “feed” at all hours of the night? “A remarkable aspect of the Tamagotchi’s huge popularity,” writes Levy, “is that it possesses hardly any elements of character or personality, its great attraction coming from its need for almost constant nurturing.”
His book reminds us that humanity is an act: it is something we do. When our robots become pets, carers, even companions, we will, quite naturally, feel the urge to treat them well. When it comes to being human, we will give them the benefit of the doubt, the way we give the benefit of the doubt to our pets, our children, and each other.