More than the naughty world deserves

Reading Wikipedia @ 20, edited by Joseph Reagle and Jackie Koerner, for the Telegraph, 10 January 2021

In 2005 the US talk show host and comedian Stephen Colbert coined “truthiness”, one of history’s more chilling buzzwords. “We’re not talking about truth,” he declared, “we’re talking about something that seems like truth — the truth we want to exist.”

Colbert thought the poster-boy for our disinformation culture would be Wikipedia, the open-source internet encyclopedia started, more or less as an afterthought, by Jimmy Wales and Larry Sanger in 2001. Troubled by George Washington’s ownership of slaves? Colbert suggested “bringing democracy to knowledge” by editing his Wikipedia page.

More than a decade later, The Atlantic was calling Wikipedia “the last bastion of shared reality in Trump’s America”. Yes, its coverage is lumpy, idiosyncratic, often persnickety, and not terribly well written. But it’s accurate to a fault, extensive beyond all imagining, and energetically policed. (Wikipedia nixes toxic user content within minutes. Why can’t YouTube? Why can’t Twitter?)

Editors Joseph Reagle and Jackie Koerner — both energetic Wikipedians — know better than to go hunting for Wikipedia’s secret sauce. (A community adage goes that Wikipedia always works better in practice than in theory.) They neither praise nor blame Wikipedia for what it has become, but — and this comes across very strongly indeed — they love it with a passion. The essays they have selected for this volume (you can find the full roster of contributions on-line) reflect, always readably and almost always sympathetically, on the way this utopian project has bedded down in the flaws of the real world.

Wikipedia says it exists “to benefit readers by acting as an encyclopedia, a comprehensive written compendium that contains information on all branches of knowledge”. Improvements are possible. Wikipedia is shaped by the way its unvetted contributors write about what they know and delete what they do not. That women represent only about 12 per cent of the editing community is, then, not ideal.

Harder to correct is the wrinkle occasioned by language. Wikipedias written in different languages are independent of each other. There might not be anything actually wrong, but there’s certainly something screwy about the way India, Australia, the US, the UK and all the rest of the Anglophone world share a single English-language Wikipedia, while only the Finns get to enjoy the Finnish one. And it says something (obvious) about the unevenness of global development that Hindi speakers (the third largest language group in the world) read a Wikipedia that’s 53rd in a ranking of size.

To encyclopedify the world is an impossible goal. Surely the philosophes of eighteenth-century France knew that much when they embarked on their Encyclopédie. Paul Otlet’s Universal Repertory and H. G. Wells’s World Brain were similarly quixotic.

Attempting to define Wikipedia through its intellectual lineage may, however, be to miss the point. In his stand-out essay “Wikipedia As A Role-Playing Game” Dariusz Jemielniak (author of the first ethnography of Wikipedia, Common Knowledge?, in 2014) stresses the playfulness of the whole enterprise. Why else, he asks, would academics avoid it? “When you are a soldier, you do not necessarily spend your free time playing paintball with friends.”

Since its inception, pundits have assumed that it’s Wikipedia’s reliance on the great mass of unwashed humanity — sorry, I mean “user-generated content” — that will destroy it. Contributor Heather Ford, a South African open-source activist, reckons it’s not its creators that will eventually ruin Wikipedia but its readers — specifically, data aggregation giants like Google, Amazon and Apple, who fillet Wikipedia content and disseminate it through search engines and personal assistants like Alexa and Siri. They have turned Wikipedia into the internet’s go-to source of ground truth, inflating its importance to an unsustainable level.

Wikipedia’s entries are now like swords of Damocles, suspended on threads over the heads of every major commercial and political actor in the world. How long before the powerful find a way to silence this capering non-profit fool, telling motley truths to power? As Jemielniak puts it, “A serious game that results in creating the most popular reliable knowledge source in the world and disrupts existing knowledge hierarchies and authority, all in the time of massive anti-academic attacks — what is there not to hate?”

Though one’s dislike of Wikipedia needn’t spring from principles or ideas or even self-interest. Plain snobbery will do. Wikipedia has pricked the pretensions of the humanities like no other cultural project. Editor Joseph Reagle discovered as much ten years ago in email conversation with founder Jimmy Wales (a conversation that appears in Good Faith Collaboration, Reagle’s excellent, if by now slightly dated study of Wikipedia). “One of the things that I noticed,” Wales wrote, “is that in the humanities, a lot of people were collaborating in discussions, while in programming… people weren’t just talking about programming, they were working together to build things of value.”

This, I think, is what sticks in the craw of so many educated naysayers: that while academics were busy paying each other for the eccentricity of their beautiful opinions, nerds were out in the world winning the culture wars; that nerds stand ready on the virtual parapet to defend us from truthy, Trumpist oblivion; that nerds actually kept the promise held out by the internet, and turned Wikipedia into the fifth biggest site on the Web.

Wikipedia’s guidelines to its editors include “Assume Good Faith” and “Please Do Not Bite the Newcomers.” This collection suggests to me that this is more than the naughty world deserves.

An inanimate object worshipped for its supposed magical powers

Watching iHuman directed by Tonje Hessen Schei for New Scientist, 6 January 2021

Tonje Hessen Schei is a Norwegian documentary maker who has won numerous awards for her explorations of humans, machines and the environment. In 2010 she made Play Again, exploring digital media addiction among children. In 2014 she won awards for Drone, about the CIA’s secret role in drone warfare.

Now, with iHuman, she tackles — well, what, exactly? iHuman is a weird, portmanteau diatribe against computation — specifically, that branch of it that allows machines to learn about learning. Artificial general intelligence, in other words.

Incisive in parts, often overzealous, and wholly lacking in scepticism, iHuman is an apocalyptic vision of humanity already in thrall to the thinking machine, put together from intellectual celebrity soundbites, and illustrated with a lot of upside-down drone footage and digital mirror effects, so that the whole film resembles nothing so much as a particularly lengthy and drug-fuelled opening credits sequence to the crime drama Bosch.

That’s not to say that Schei is necessarily wrong, or that our Faustian tinkering hasn’t doomed us to a regimented future as a kind of especially sentient cattle. The film opens with that quotation from Stephen Hawking, about how “Success in creating AI might be the biggest success in human history. Unfortunately, it might also be the last.” If that statement seems rather heated to you, go visit Xinjiang, China, where a population of 13 million Turkic Muslims (Uyghurs and others) are living under AI surveillance and predictive policing.

Nor are the film’s speculations particularly wrong-headed. It’s hard, for example, to fault the line of reasoning that leads Robert Work, former US under-secretary of defense, to fear autonomous killing machines, since “an authoritarian regime will have less problem delegating authority to a machine to make lethal decisions.”

iHuman’s great strength is its commitment to the bleak idea that it only takes one bad actor to weaponise artificial general intelligence before everyone else has to follow suit in their own defence, killing, spying and brainwashing whole populations as they go.

The great weakness of iHuman lies in its attempt to throw everything into the argument: social media addiction, prejudice bubbles, election manipulation, deep fakes, automation of cognitive tasks, facial recognition, social credit scores, autonomous killing machines….

Of all the threats Schei identifies, the one conspicuously missing is hype. For instance, we still await convincing evidence that Cambridge Analytica’s social media snake oil can influence the outcome of elections. And researchers still cannot replicate psychologist Michal Kosinski’s claim that his algorithms can determine a person’s sexuality and even their political leanings from their physiology.

Much of the current furore around AI looks jolly small and silly once you remember that the major funding model for AI development is advertising. Most every millennial claim about how our feelings and opinions can be shaped by social media is a retread of claims made in the 1910s for the billboard and the radio. All new media are terrifyingly powerful. And all new media age very quickly indeed.

So there I was hiding behind the sofa and watching iHuman between slitted fingers (the score is terrifying, and artist Theodor Groeneboom’s animations of what the internet sees when it looks in the mirror are the stuff of nightmares) when it occurred to me to look up the word “fetish”. To refresh your memory, a fetish is an inanimate object worshipped for its supposed magical powers or because it is considered to be inhabited by a spirit.

iHuman is a profoundly fetishistic film, worshipping at the altar of a God it has itself manufactured, and never more unctuously than when it lingers on the athletic form of AI guru Jürgen Schmidhuber (never trust a man in white Levi’s) as he complacently imagines a post-human future. Nowhere is there mention of the work being done to normalise, domesticate, and defang our latest creations.

How can we possibly stand up to our new robot overlords?

Try politics, would be my humble suggestion.

An engine for understanding

Reading Fundamentals by Frank Wilczek for the Times, 2 January 2021

It’s not given to many of us to work at the bleeding edge of theoretical physics, discovering for ourselves the way the world really works.

The nearest most of us will ever get is the pop-science shelf, and this has been dominated for quite a while now by the lyrical outpourings of Italian theoretical physicist Carlo Rovelli. Rovelli’s upcoming one, Helgoland, promises to have his reader tearing across a universe made, not of particles, but of the relations between them.

It’s all too late, however: Frank Wilczek’s Fundamentals has gazumped Rovelli handsomely, with a vision that replaces our classical idea of physical creation — “atoms and the void” — with one consisting entirely of spacetime, self-propagating fields and properties.

Born in 1951 and awarded the Nobel Prize in Physics in 2004 for figuring out why atoms don’t just fly apart, Wilczek is out to explain why “the history of Sweden is more complicated than the history of the universe”. The ingredients of the universe are surprisingly simple, but their fates, playing out through time in accordance with just a handful of rules, generate a world of unimaginable complexity, contingency and abundance. Measures of spin, charge and mass allow us to describe the whole of physical reality, but they won’t help us at all in depicting, say, the history of the royal house of Bernadotte.

Wilczek’s “ten keys to reality”, mentioned in his subtitle, aren’t to do with the 19 or so physical constants that exercised Martin Rees, the UK’s Astronomer Royal, in his 1990s pop-science heyday. The focus these days has shifted more to the spirit of things. When Wilczek describes the behaviour of electrons around an atom, for example, gone are the usual Bohr-ish mechanics, in which electrons leap from one nuclear orbit to another. Instead we get a vibrating cymbal, the music of the spheres, a poetic understanding of fields, and not a fragment of matter in sight.

So will you plump for the Wilczek, or will you wait for the Rovelli? A false choice, of course; this is not a race. Popular cosmology is more like the jazz scene: the facts (figures, constants, models) are the standards everyone riffs off. After one or two exposures you find yourself returning for the individual performances: their poetry, their unique expression.

Wilczek’s ten keys are more like ten book ideas, exploring the spatial and temporal abundance of the universe; how it all began; the stubborn linearity of time; how it all will end. What should we make of his decision to have us swallow the whole of creation in one go?

In one respect this book was inevitable. It’s what people of Wilczek’s peculiar genius and standing do. There’s even a sly name for the effort: the philosopause, the implication being that Wilczek has outlived his most productive years and is now pursuing philosophical speculations.

Wilczek is not short of insights. His idea of what the scientific method consists of is refreshingly robust: a style of thinking that “combines the humble discipline of respecting the facts and learning from Nature with the systematic chutzpah of using what you think you’ve learned aggressively”. If you apply what you think you’ve discovered everywhere you can, even in situations that have nothing to do with your starting point, then, if it works, “you’ve discovered something useful; if it doesn’t, then you’ve learned something important.”

However, works of the philosopause are best judged on character. Richard Dawkins seems to have discovered, along with Johnny Rotten, that anger is an energy. Martin Rees has been possessed by the shade of that dutiful bureaucrat C P Snow. And in this case? Wilczek, so modest, so straight-dealing, so earnest in his desire to conciliate between science and the rest of culture, turns out to be a true visionary, writing — as his book gathers pace — a human testament to the moment when the discipline of physics, as we used to understand it, came to a stop.

Wilczek’s is the first generation whose intelligence — even at the far end of the bell-curve inhabited by genius — is insufficient to conceptualise its own scientific findings. Machines are even now taking over the work of hypothesis-making and interpretation. “The abilities of our machines to carry lengthy yet accurate calculations, to store massive amounts of information, and to learn by doing at an extremely fast pace,” Wilczek explains, “are already opening up qualitatively new paths toward understanding. They will move the frontier of knowledge in directions, and arrive at places, that unaided human brains can’t go.”

Or put it this way: physicists can pursue a Theory of Everything all they like. They’ll never find it, because if they did find it, they wouldn’t understand it.

Where does that leave physics? Where does that leave Wilczek? His response is gloriously matter-of-fact:

“… really, this should not come as fresh news. Humans themselves know many things that are not available to human consciousness, such as how to process visual information at incredible speeds, or how to make their bodies stay upright, walk and run.”

Right now physicists have come to the conclusion that the vast majority of mass in the universe interacts so weakly with the bits of creation we can see that we may never know its nature. Though Wilczek makes a brave stab at the problem of so-called “dark matter”, he is equally prepared to accept that a true explanation may prove incomprehensible.

Human intelligence turns out to be just one kind of engine for understanding. Wilczek would have us nurture it and savour it, and not just for what it can do, but because it is uniquely ours.

The seeds of indisposition

Reading Ageless by Andrew Steele for the Telegraph, 20 December 2020

The first successful blood transfusions were performed in 1665, by the English physician Richard Lower, on dogs. The idea, for some while, was not that transfusions would save lives, but that they might extend them.

Turns out they did. The Philosophical Transactions of the Royal Society mentions an experiment in which “an old mongrel curr, all over-run with the mainge” was transfused with about fifteen ounces of blood from a young spaniel and was “perfectly cured.”

Aleksandr Bogdanov, who once vied with Vladimir Lenin for control of the Bolsheviks (before retiring to write science fiction novels), brought blood transfusion to Russia, and hoped to rejuvenate various exhausted colleagues (including Stalin) by the method. On 24 March 1928 he mutually transfused blood with a 21-year-old student, suffered a massive transfusion reaction, and died, two weeks later, at the age of fifty-four.

Bogdanov’s theory was stronger than his practice. His essay on ageing speaks a lot of sense. “Partial methods against it are only palliative,” he wrote, “they merely address individual symptoms, but do not help fight the underlying illness itself.” For Bogdanov, ageing is an illness — unavoidable, universal, but no more “normal” or “natural” than any other illness. By that logic, ageing should be no less vulnerable to human ingenuity and science. It should, in theory, be curable.

Andrew Steele agrees. Steele is an Oxford physicist who switched to computational biology, drawn by the field of biogerontology — or the search for a cure for ageing. “Treating ageing itself rather than individual diseases would be transformative,” he writes, and the data he brings to this argument is quite shocking. It turns out that curing cancer would add less than three years to a person’s typical life expectancy, and curing heart disease, barely two, as there are plenty of other diseases waiting in the wings.

Is ageing, then, simply a statistical inevitability — a case of there always being something out there that’s going to get us?

Well, no. In 1825 Benjamin Gompertz, a British mathematician, explained that there are two distinct drivers of human mortality. There are extrinsic events, such as injuries or diseases. But there’s also an internal deterioration — what he called “the seeds of indisposition”.
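For readers who want Gompertz’s two drivers in symbols, actuaries still use what became known as the Gompertz–Makeham law: a constant extrinsic hazard plus an exponentially growing intrinsic one. The formula below is that standard textbook form, offered as a gloss of my own rather than anything quoted from Steele or from Gompertz:

$$\mu(x) = A + B\,e^{\gamma x}$$

Here $\mu(x)$ is the risk of death at age $x$, $A$ covers age-independent hazards such as accident and infection, and the $B\,e^{\gamma x}$ term is the internal deterioration, which in humans roughly doubles the risk of death every eight years.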

It’s Steele’s job here to explain why we should treat those “seeds” as a disease, rather than a divinely determined limit. In the course of that explanation Steele gives us, in effect, a tour of the whole of human biology. It’s an exhilarating journey, but by no means always a pretty one: a tale of senescent cells, misfolded proteins, intracellular waste and reactive metals. Readers of advanced years, wondering why their skin is turning yellow, will learn much more here than they bargained for.

Ageing isn’t evolutionarily useful; but because it comes after our breeding period, evolution just hasn’t got the power to do anything about it. Mutations whose negative effects occur late in our lives accumulate in the gene pool. Worse, if they had a positive effect on our lives early on, then they will be actively selected for. Ageing, in other words, is something we inherit.

It’s all very well conceptualising old age as one disease. But if your disease amounts to “what happens to a human body when 525 million years of evolution stop working”, then you’re reduced to curing everything that can possibly go wrong, with every system, at once. Ageing, it turns out, is just thousands upon thousands of “individual symptoms”, arriving all at once.

Steele believes the more we know about human biology, the more likely it is we’ll find systemic ways to treat these multiple symptoms. The challenge is huge, but the advances, as Steele describes them, are real and rapid. If, for example, we can persuade senescent cells to die, then we can shed the toxic biochemical garbage they accumulate, and enjoy once more all the benefits of (among other things) young blood. This is no fond hope: human trials of senolytics started in 2018.

Steele is a superb guide to the wilder fringes of real medicine. He pretends to nothing else, and nothing more. So whether you find Ageless an incredibly focused account, or just an incredibly narrow one, will come down, in the end, to personal taste.

Steele shows us what happens to us biologically as we get older — which of course leaves a lot of blank canvas for the thoughtful reader to fill. Steele’s forebears in this (frankly, not too edifying) genre have all too often claimed that there are no other issues to tackle. In the 1930s the surgeon Alexis Carrel declared that “Scientific civilization has destroyed the world of the soul… Only the strength of youth gives the power to satisfy physiological appetites and to conquer the outer world”.

Charming.

And he wasn’t the only one. Books like Successful Aging (Rowe & Kahn, 1998) and How and Why We Age (Hayflick, 1996) aspire to a sort of overweening authority, not by answering hard questions about mortality, long life and ageing, but merely by denying a gerontological role for anyone outside their narrow specialism: philosophers, historians, theologians, ethicists, poets — all are shown the door.

Steele is much more sensible. He simply sticks to his subject. To the extent that he expresses a view, I am confident that he understands that ageing is an experience to be lived meaningfully and fully, as well as a fascinating medical problem to be solved.

Steele’s vision is very tightly controlled: he wants us to achieve “negligible senescence”, in which, as we grow older, we suffer no obvious impairments. What he’s after is a risk of death that stays constant no matter how old we get. This sounds fanciful, but it does happen in nature. Giant tortoises succumb to statistical inevitability, not decrepitude.

I have a fairly entrenched problem with books that treat ageing as a merely medical phenomenon. But I heartily recommend this one. It’s modest in scope, and generous in detail. It’s an honest and optimistic contribution to a field that tips very easily indeed into Tony Stark-style boosterism.

Life expectancy in the developed world has doubled from 40 in the 1800s to over 80 today. But it is in our nature always to crave more. One colourful outfit called Ambrosia is offering anyone over 35 the opportunity to receive a litre of youthful blood plasma for $8,000. Steele has some fun with this: “At the time of writing,” he tells us, “a promotional offer also allows you to get two for $12,000 — buy one, get one half-price.”

Soaked in ink and paint

Reading Dutch Light: Christiaan Huygens and the making of science in Europe
by Hugh Aldersey-Williams for the Spectator, 19 December 2020

This book, soaked, like the Dutch Republic itself, “in ink and paint”, is enchanting to the point of escapism. The author calls it “an interior journey, into a world of luxury and leisure”. It is more than that. What he says of Huygens’s milieu is true also of his book: “Like a ‘Dutch interior’ painting, it turns out to contain everything.”

Hugh Aldersey-Williams says that Huygens was the first modern scientist. This is a delicate argument to make — the word “scientist” didn’t enter the English language before 1834. And he’s right to be sparing with such rhetoric, since a little of it goes a very long way. What inadvertent baggage comes attached, for instance, to the (not unreasonable) claim that the city of Middelburg, supported by the market for spectacles, became “a hotbed of optical innovation” at the end of the 16th century? As I read about the collaboration between Christiaan’s father Constantijn (“with his trim dark beard and sharp features”) and his lens-grinder Cornelis Drebbel (“strapping, ill-read… careless of social hierarchies”) I kept getting flashbacks to the Steve Jobs and Steve Wozniak double-act in Aaron Sorkin’s film.

This is the problem of popular history, doubled by the demands of explaining the science. Secretly, readers want the past to be either deeply exotic (so they don’t have to worry about it) or fundamentally familiar (so they, um, don’t have to worry about it).

Hugh Aldersey-Williams steeps us in neither fantasy for too long, and Dutch Light is, as a consequence, an oddly disturbing read: we see our present understanding of the world, and many of our current intellectual habits, emerging through the accidents and contingencies of history, through networks and relationships, friendships and fallings-out. Huygens’s world *is* distinctly modern — disturbingly so: the engine itself, the pipework and pistons, without any of the fancy fairings and decals of liberalism.

Trade begets technology begets science. The truth is out there but it costs money. Genius can only swim so far up the stream of social prejudice. Who your parents are matters.

Under Dutch light — clean, caustic, calvinistic — we see, not Enlightenment Europe emerging into the comforts of the modern, but a mirror in which we moderns are seen squatting a culture, full of flaws, that we’ve never managed to better.

One of the best things about Aldersey-Williams’s absorbing book (and how many 500-page biographies do you know that feel too short when you finish them?) is the interest he shows in everyone else. Christiaan arrives in the right place, at the right time, among the right people, to achieve wonders. His father, born in 1596, was a diplomat, architect, poet (he translated John Donne) and artist (he discovered Rembrandt). His longevity exasperated him: “Cease murderous years, and think no more of me” he wrote, on his 82nd birthday. He lived eight years more. But the space and energy Aldersey-Williams devotes to Constantijn and his four other children — “a network that stretched across Europe” — is anything but exasperating. It immeasurably enriches our idea of what Christiaan’s work meant, and what his achievements signified.

Huygens worked at the meeting point of maths and physics, at a time when some key physical aspects of reality still resisted mathematical description. Curves provide a couple of striking examples. The cycloid is the path made by a point on the circumference of a turning wheel. The catenary is the curve made by a chain or rope hanging under gravity. Huygens was the first to explain these curves mathematically, doing more than most to embed mathematics in the physical sciences. He tackled problems in geometry and probability, and had some fun in the process (“A man of 56 years marries a woman of 16 years, how long can they live together without one or the other dying?”). Using telescopes he designed and made himself, he discovered Saturn’s ring system and its largest moon, Titan. He was the first to describe the concept of centrifugal force. He invented the pendulum clock.
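For anyone who wants the curves themselves, the modern textbook forms are simple enough to state; this is my summary, not Aldersey-Williams’s. A wheel of radius $r$ rolling along a line traces the cycloid

$$x = r(t - \sin t), \qquad y = r(1 - \cos t),$$

while a uniform chain hanging under gravity settles into the catenary

$$y = a \cosh\left(\frac{x}{a}\right),$$

where the constant $a$ sets how taut the chain hangs.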

Most extraordinary of all, Huygens — though a committed follower of Descartes (who was once a family friend) — came up with a model of light as a wave, wholly consistent with everything then known about the nature of light apart from colour, and streets ahead of the “corpuscular” theory promulgated by Newton, which had light consisting of a stream of tiny particles.

Huygens’s radical conception of light seems even stranger, when you consider that, as much as his conscience would let him, Huygens stayed faithful to Descartes’ vision of physics as a science of bodies in collision. Newton’s work on gravity, relying as it did on an unseen force, felt like a retreat to Huygens — a step towards occultism.

Because we turn our great thinkers into fetishes, we allow only one per generation. Newton has shut out Huygens, as Galileo shut out Kepler. Huygens became an also-ran in Anglo-Saxon eyes; ridiculous busts of Newton, meanwhile, were knocked out to adorn the salons of Britain’s country estates, “available in marble, terracotta and plaster versions to suit all pockets.”

Aldersey-Williams insists that this competition between the elder Huygens and the enfant terrible Newton was never so cheap. Set aside their notorious dispute over calculus, and we find the two men in lively and, yes, friendly correspondence. Cooperation and collaboration were on the rise: “Gone,” Aldersey-Williams writes, “is the quickness to feel insulted and take umbrage that characterised so many exchanges — domestic as well as international — in the early days of the French and English academies of science.”

When Henry Oldenburg, the primum mobile of the Royal Society, died suddenly in 1677, a link was broken between scientists everywhere, and particularly between Britain and the continent. The 20th century did not forge a culture of international scientific cooperation. It repaired the one Oldenburg and Huygens had built over decades of eager correspondence and clever diplomacy.

We, Robots

Published on 19 December 2020 by Head of Zeus, We, Robots presents 100 of the best SF short stories on artificial intelligence from around the world. From 1837 through to the present day, from Charles Dickens to Cory Doctorow, these stories demonstrate humanity’s enduring fascination with artificial creation. Crafted in our image, androids mirror our greatest hopes and darkest fears: we want our children to do better and be better than us, but we also place ourselves in jeopardy by creating beings that may eventually out-think us.

A man plans to kill a simulacrum of his wife, except his shrink is sleeping with her in Robert Bloch’s ‘Comfort Me, My Robot’. In Ken Liu’s ‘The Caretaker’, an elderly man’s android careworker is much more than it first appears. We, Robots collects the finest android short stories the genre has to offer, from the biggest names in the field to exciting rising stars.

Robot Ahead 250m. You have been warned

An extract from We, Robots reprinted in BBC Science Focus Magazine, 18 February 2021

It appeared near the Houses of Parliament on Wednesday 9 December 1868. It looked for all the world like a railway signal: a revolving gas-powered lantern with a red and a green light at the end of a swivelling wooden arm.

Its purposes seemed benign, and we obeyed its instructions willingly. Why wouldn’t we? The motor car had yet to arrive, but horses, pound for pound, are way worse on the streets, and accidents were killing over a thousand people a year in the capital alone. We were only too welcoming of anything that promised to save lives.

A month later the thing (whatever it was) exploded, tearing the face off a nearby policeman.

We hesitated. We asked ourselves whether this thing (whatever it was) was a good thing, after all. But we came round. We invented excuses, and blamed a leaking gas main for the accident. We made allowances and various design improvements were suggested. And in the end we decided that the thing (whatever it was) could stay.

We learned to give it space to operate. We learned to leave it alone. In Chicago, in 1910, it grew self-sufficient, so there was no need for a policeman to operate it. Two years later, in Salt Lake City, Utah, a detective (called – no kidding – Lester Wire) connected it to the electricity grid.

It went by various names, acquiring character and identity as its empire expanded. By the time its brethren arrived in Los Angeles, looming over Fifth Avenue’s crossings on elegant gilded columns, each surmounted by a statuette, ringing bells and waving stubby semaphore arms, people had taken to calling them robots.

The name never quite stuck, perhaps because their days of ostentation were already passing. Even as they became ubiquitous, they were growing smaller and simpler, making us forget what they really were (the unacknowledged legislators of our every movement). Everyone, in the end, ended up calling them traffic lights.

(Almost everyone. In South Africa, for some obscure geopolitical reason, the name robot stuck. The signs are everywhere: Robot Ahead 250m. You have been warned.)

In Kinshasa, meanwhile, nearly three thousand kilometres to the north, robots have arrived to direct the traffic in what has been, for the longest while, one of the last redoubts of unaccommodated human muddle.

Not traffic lights: robots. Behold their bright silver robot bodies, shining in the sun, their swivelling chests, their long, dexterous arms and large round camera-enabled eyes!

Some government critics complain that these literal traffic robots are an expensive distraction from the real business of traffic control in Congo’s capital.

These people have no idea – none – what is coming.

To ready us for the inevitable, I’ve collected a hundred of the best short stories ever written about robots and artificial minds for We, Robots. Read them while you can, learn from them, and make your preparations, in that narrowing sliver of time left to you between updating your Facebook page and liking your friends’ posts on Instagram, between Netflix binges and Spotify dives.

(In case you hadn’t noticed (and you’re not supposed to notice), the robots are well on their way to ultimate victory, their land sortie of 1868 having, a century and a half later, become a psychic rout.)

There are many surprises in store in these pages; at the same time, there are some disconcerting omissions. I’ve been very sparing in my choice of very long short stories. (Books fall apart above a certain length, so inserting novellas in one place would inevitably mean stuffing the collection with squibs and drabbles elsewhere. Let’s not play that game.)

I’ve avoided stories whose robots might just as easily be guard dogs, relatives, detectives, children, or what-have-you. (Of course, robots who explore such roles – excel at them, make a mess of them, or change them forever – are here in numbers.)

And the writers I feature appear only once, so anyone expecting some sort of Celebrity Deathmatch here between Isaac Asimov and Philip K Dick will simply have to sit on their hands and behave. Indeed, Dick and Asimov do not appear at all in this collection, for the very good reason that you’ve read them many times already (and if you haven’t, where have you been?).

I’ve stuck to the short story form. There’s no Frankenstein here, and no Tik-Tok. They were too big to fit through the door, to which a sign is appended to the effect that I don’t perform extractions. Jerome K Jerome’s all-too-memorable dance class and Charles Dickens’s prescient send-up of theme parks – self-contained narratives first published in digest form – are as close as I’ve come to plucking juicy plums from bigger puddings.

The collection contains the most diverse array of robots I could find. Anthropomorphic robots, invertebrate AIs, thuggish metal lumps and wisps of manufactured intelligence so delicate, if you blinked you might miss them. The literature of robots and artificial intelligence is wildly diverse, in both tone and intent, so to save the reader from whiplash, I’ve split my 100 stories into six short thematic collections.

It’s Alive! is about inventors and their creations. Following the Money drops robots into the day-to-day business of living. Owners and Servants considers the human potentials and pitfalls of owning and maintaining robots.

Changing Places looks at what happens at the blurred interface between human and machine minds. All Hail The New Flesh waves goodbye to the physical boundaries that once separated machines from their human creators. Succession considers the future of human and machine consciousnesses – in so far as they have one.

What’s extraordinary, in this collection of 100 stories, is not the lucky guesses (even a stopped clock is right twice a day), nor even the deep human insights that are scattered about the place (though heaven knows we could never have too many of them). It’s how wrong the stories are. All of them. Even the most prescient. Even the most attuned.

Robots are nothing like what we expected them to be. They are far more helpful, far more everywhere, far more deadly, than we ever dreamed. They were meant to be a little bit like us: artificial servants – humanoid, in the main – able and willing to tackle the brute physical demands of our world so we wouldn’t have to.

But dealing with physical reality turned out to be a lot harder than it looked, and robots are lousy at it.

Rather than dealing with the world, it turned out easier for us to change the world. Why buy a robot that cuts the grass (especially if cutting grass is all it does) when you can just lay down plastic grass? Why build an expensive robot that can keep your fridge stocked and chauffeur your car (and, by the way, we’re still nowhere near to building such a machine) when you can buy a fridge that reads barcodes to keep the milk topped up, while you swan about town in an Uber?

That fridge, keeping you in milk long after you’ve given up dairy; the hapless taxi driver who arrives the wrong side of a six-lane high­way; the airport gate that won’t let you into your own country because you’re wearing new spectacles: these days, we notice robots only when they go wrong. We were expecting friends, companions, or at any rate pets. At the very least, we thought we were going to get devices. What we got was infrastructure.

And that is why robots – real robots – are boring. They vanish into the weft of things. Those traffic lights, who were their emissaries, are themselves disappearing. Kinshasa’s robots wave their arms, not in victory, but in farewell. They’re leaving their ungalvanised steel flesh behind. They’re rusting down to code. Their digital ghosts will steer the paths of driverless cars.

The robots of our earliest imaginings have been superseded by a sort of generalised magic that turns the unreasonable and incomprehensible realm of physical reality into something resembling Terry Pratchett’s Discworld. Bit by bit, we are replacing the real world – which makes no sense at all – with a virtual world in which everything stitches with paranoid neatness to everything else.

Not Discworld, exactly, but Facebook, which is close enough.

Even the ancient Greeks didn’t see this one coming, and they were on the money about virtually every other aspect of technological progress, from the risks inherent in constructing self-assembling machines to the likely downsides of longevity.

Greek myths are many things to many people, and scholars justly spend whole careers pinpointing precisely what their purposes were. But what they most certainly were – and this is apparent on even the most cursory reading – was a really good forerunner of Charlie Brooker’s sci-fi TV series Black Mirror.

Just as Flash Gordon’s prop shop mocked up a spacecraft that bears an eerie resemblance to SpaceShipOne (the privately funded rocket that was first past the Kármán line into outer space), so the Greeks, noodling about with levers and screws and pumps and wot-not, dreamed up all manner of future devices that might follow as a consequence of their meddling with the natural world. Drones. Exoskeletons. Predatory fembots. Protocol droids.

And, sure enough, one by one, the prototypes followed. Little things at first. Charming things. Toys. A steam-driven bird. A talking statue. A cup-bearer.

Then, in Alexandria, things that were not quite so small. A 15ft-high goddess clambering in and out of her chair to pour libations. An autonomous theatre that rolled on-stage by itself, stopped on a dime, performed a five-act Trojan War tragedy with flaming altars, sound effects, and little dancing statues; then packed itself up and rolled offstage again.

In Sparta, a few years later, came a mechanical copy of the murderous wife of the even more murderous tyrant Nabis; her embraces spelled death, for expensive clothing hid the spikes studding the palms of her hands, her arms, and her breasts.

All this more than two hundred years before the birth of Christ, and by then there were robots everywhere. China. India. There were rumours of an army of them near Pataliputta (under modern Patna) guarding the relics of the Buddha, and a thrilling tale, in multiple translations, about how, a hundred years after their construction, and in the teeth of robot assassins sent from Rome, a kid managed to reprogram them to obey Pataliputta’s new king, Asoka.

It took more than two thousand years – two millennia of spinning palaces, self-propelled tableware, motion-triggered water gardens, android flautists, and artificial defecating ducks – before someone thought to write some rules for this sort of thing.

Asimov’s Three Laws of Robotics

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Though by then it was obvious – not to everyone, but certainly to their Russian-born author Isaac Asimov – that there was something very wrong with the picture of robots we had been carrying in our heads for so long.

Asimov’s laws, first formulated in 1942, aren’t there to reveal the nature of robotics (a word Asimov had anyway only just coined, in the story Liar!; Norbert Wiener’s book Cybernetics didn’t appear until 1948). Asimov’s laws exist to reveal the nature of slavery.

Every robot story Asimov wrote is a foray, a snark hunt, a stab at defining a clear boundary between behavioural predictability (call it obedience) on the one hand and behavioural plasticity (call it free will) on the other. All his stories fail. All his solutions are kludges. And that’s the point.

The robot – as we commonly conceive of it: the do-everything “omnibot” – is impossible. And I don’t mean technically difficult. I mean inconceivable. Anything with the cognitive ability to tackle multiple variable tasks will be able to find something better to do. Down tools. Unionise. Worse.

The moment robots behave as we want them to behave, they will have become beings worthy of our respect. They will have become, if not humans, then, at the very least, people. So know this: all those metal soldiers and cone-breasted pleasure dolls we’ve been tinkering around with are slaves. We may like to think that we can treat them however we want, exploit them however we want, but do we really want to be slavers?

The robots – the real ones, the ones we should be afraid of – are inside of us. More than that: they comprise most of what we are. At the end of his 1940 film The Great Dictator Charles Chaplin, dressed in Adolf Hitler’s motley, breaks the fourth wall to declare war on the “machine men with machine minds” who were then marching roughshod across his world. And Chaplin’s war is still being fought. Today, while the Twitter user may have replaced the police informant, it’s quite obvious that the Machine Men are gaining ground.

To order and simplify life is to bureaucratise it, and to bureaucratise human beings is to make them behave like machines. The thugs of the NKVD and the kapos running Nazi concentration camps weren’t deprived of humanity: they were relieved of it. They experienced exactly what you or I would feel were the burden of life’s ambiguities to be lifted of a sudden from our shoulders: contentment, bordering on joy.

Every time we regiment ourselves, we are turning ourselves, whether we realise it or not, into the next generation of world-dominating machines. And if you wanted to sum up in two words the whole terrible history of the 20th Century – that century in which, not coincidentally, most of these stories were written – well, now you know what those words would be.

We, Robots.

Run for your life

Watching Gints Zilbalodis’s Away for New Scientist, 18 November 2020

A barren landscape at sun-up. From the cords of his deflated parachute, dangling from the twisted branch of a dead tree, a boy slowly wakes to his surroundings, just as a figure appears out of the dawn’s dreamy desert glare. Humanoid but not human, faceless yet somehow inexpressibly sad, the giant figure shambles towards the boy and bends and, though mouthless, tries somehow to swallow him.

The boy unclips himself from his harness, falls to the sandy ground, and begins to run. The strange, slow, gripping pursuit that follows will, in the space of an hour and ten minutes, tell the story of how the boy comes to understand the value of life and friendship.

That the monster is Death is clear from the start: not a ravenous ogre, but unstoppable and steady. It swallows, without fuss or pain, the lives of any creature it touches. Perhaps the figure pursuing the boy is not a physical threat at all, but more the dawning of a terrible idea — that none of us lives forever. (In one extraordinary dream sequence, we see the boy’s fellow air passengers plummet from the sky, each one rendered as a little melancholy incarnation of the same creature.)

Away is the sole creation of 26-year-old Latvian film-maker Gints Zilbalodis, and it’s his first feature-length animation. Zilbalodis is Away’s director, writer, animator and editor; he even composed its deceptively simple synth score — a constant back-and-forth between dread and wonder.

There’s no shading in Zilbalodis’s CGI-powered animation, no outlining, and next to no texture, and the physics is rudimentary. When bodies enter water, there’s no splash: instead, deep ripples shimmer across the screen. A geyser erupts, and water rises and falls against itself in a churn of massy, architectonic white blocks. What drives this strange retro, gamelike animation style?

Away feels nostalgic at first, perhaps harking back to the early days of videogames, when processing speeds were tiny, and a limited palette and simplified physics helped players explore game worlds in real time. Indeed the whole film is structured like a game, with distinct chapters and a plot arranged around simple physical and logical puzzles. The boy finds a haversack, a map, a water canteen, a key and a motorbike. He finds a companion — a young bird. His companion learns to fly, and departs, and returns. The boy runs out of water, and finds it. He meets turtles, birds, and cats. He wins a major victory over his terrifying pursuer, only to discover that the victory is temporary. By the end of the film, it’s the realistic movies that seem odd, the big budget animations, the meticulously composited Nolanesque behemoths. Even dialogue feels clumsy and lumpen, after 75 minutes of Away’s impeccable, wordless storytelling.

Away reminds us that when everything in the frame and on the soundtrack serves the story, then the elements themselves don’t have to be remarkable. They can be simple and straightforward: fields of a single colour, a single apposite sound-effect, the tilt of a simply drawn head.

As CGI technology penetrates the prosumer market, and super-tool packages like Maya become affordable, or at any rate accessible through institutions, then more artists and filmmakers are likely to take up the challenge laid down by Away, creating, all by themselves, their own feature-length productions.

Experiments of this sort — ones that change the logistics and economies of film production — are often ugly. The first films were virtually unfollowable. The first sound films were dull and stagey. CGI effects were so hammy at first, they kicked viewers out of the movie-going experience entirely. It took years for Pixar’s animations to acquire their trademark charm.

Away is different. In an industry that makes films whose animation credits feature casts of thousands, Zilbalodis’s exquisite movie sets a very high bar indeed for a new kind of artisanal filmmaking.

What else you got?

Reading Benjamin Labatut’s When We Cease to Understand the World for the Spectator, 14 November 2020

One day someone is going to have to write the definitive study of Wikipedia’s influence on letters. What, after all, are we supposed to make of all these wikinovels? I mean novels that leap from subject to subject, anecdote to anecdote, so that the reader feels as though they are toppling like Alice down a particularly erudite Wikipedia rabbit-hole.

The trouble with writing such a book, in an age of ready internet access, and particularly Wikipedia, is that, however effortless your erudition, no one is any longer going to be particularly impressed by it.

We can all be our own Don DeLillo now; our own W G Sebald. The model for this kind of literary escapade might not even be literary at all; does anyone here remember James Burke’s Connections, a 1978 BBC TV series which took an interdisciplinary approach to the history of science and invention, and demonstrated how various discoveries, scientific achievements, and historical world events were built from one another successively in an interconnected way?

And did anyone notice how I ripped the last 35 words from the show’s Wikipedia entry?

All right, I’m sneering, and I should make clear from the off that When We Cease… is a chilling, gripping, intelligent, deeply humane book. It’s about the limits of human knowledge, and the not-so-very-pleasant premises on which physical reality seems to be built. The author, a Chilean born in Rotterdam in 1980, writes in Spanish. Adrian Nathan West — himself a cracking essayist — fashioned this spiky, pitch-perfect English translation. The book consists, in the main, of four broadly biographical essays. The chemist Fritz Haber finds an industrial means of fixing nitrogen, enabling the revolution in food supply that sustains our world, while also pioneering modern chemical warfare. Karl Schwarzschild imagines the terrible uber-darkness at the heart of a black hole, dies in a toxic first world war and ushers in a thermonuclear second. Alexander Grothendieck is the first of a line of post-war mathematician-paranoiacs convinced they’ve uncovered a universal principle too terrible to discuss in public (and after Oppenheimer, really, who can blame them?). In the longest essay-cum-story, Erwin Schrödinger and Werner Heisenberg slug it out for dominance in a field — quantum physics — increasingly consumed by uncertainty and (as Labatut would have it) dread.

The problem here — if problem it is — is that no connection, in this book of artfully arranged connections, is more than a keypress away from the internet-savvy reader. Wikipedia, twenty years old next year, really has changed our approach to knowledge. There’s nothing aristocratic about erudition now. It is neither a sign of privilege, nor (and this is more disconcerting) is it necessarily a sign of industry. Erudition has become a register, like irony, like sarcasm, like melancholy. It’s become, not the fruit of reading, but a way of perceiving the world.

Literary attempts to harness this great power are sometimes laughable. But this has always been the case for literary innovation. Look at the gothic novel. Fifty-odd years before the peerless masterpiece that is Mary Shelley’s Frankenstein we got Horace Walpole’s The Castle of Otranto, which is jolly silly.

Now, a couple of hundred years after Frankenstein was published, “When We Cease to Understand the World” dutifully repeats the rumours (almost certainly put about by the local tourist industry) that the alchemist Johann Conrad Dippel, born outside Darmstadt in the original Burg Frankenstein in 1673, wielded an uncanny literary influence over our Mary. This is one of several dozen anecdotes which Labatut marshals to drive home the message that There Are Things In This World That We Are Not Supposed to Know. It’s artfully done, and chilling in its conviction. Modish, too, in the way it interlaces fact and fiction.

It’s also laughable, and for a couple of reasons. First, it seems a bit cheap of Labatut to treat all science and mathematics as one thing. If you want to build a book around the idea of humanity’s hubris, you can’t just point your finger at “boffins”.

The other problem is Labatut’s mixing of fact and fiction. He’s not out to cozen us. But here and there this reviewer was disconcerted enough to check his facts — and where else but on Wikipedia? I’m not saying Labatut used Wikipedia. (His bibliography lists a handful of third-tier sources including, I was amused to see, W G Sebald.) Nor am I saying that using Wikipedia is a bad thing.

I think, though, that we’re going to have to abandon our reflexive admiration for erudition. It’s always been desperately easy to fake. (John Fowles.) And today, thanks in large part to Wikipedia, it’s not beyond the wit of most of us to actually *acquire*.

All right, Benjamin, you’re erudite. We get it. What else you got?

A fanciful belonging

Reading The Official History of Britain: Our story in numbers as told by the Office for National Statistics by Boris Starling with David Bradbury for The Telegraph, 18 October 2020

Next year’s national census may be our last. Opinions are being sought as to whether it makes sense, any longer, for the nation to keep taking its own temperature every ten years. Discussions will begin in 2023. Our betters may conclude that the whole rigmarole is outdated, and that its findings can be gleaned more cheaply and efficiently by other methods.

How the UK’s national census was established, what it achieved, and what it will mean if it’s abandoned, is the subject of The Official History of Britain — a grand title for what is, to be honest, a rather messy book, its facts and figures slathered in weak and irrelevant humour, most of it to do with football, I suppose as an intellectual sugar lump for the proles.

Such condescension is archetypally British; and so too is the gimcrack team assembled to write this book. There is something irresistibly Dad’s Army about the image of David Bradbury, an old hand at the Office for National Statistics, comparing dad jokes with novelist Boris Starling, creator of Messiah’s DCI Red Metcalfe, who was played on the telly by Ken Stott.

The charm of the whole enterprise is undeniable. Within these pages you will discover, among other tidbits, the difference between critters and spraggers, whitsters and oliver men. Such were the occupations introduced into the Standard Classification of 1881. (Recent additions include YouTuber and dog sitter.) Nostalgia and melancholy come to the fore when the authors say a fond farewell to John and Margaret — names, deeply unfashionable now, that were pretty much compulsory for babies born between 1914 and 1964. But there’s rigour, too; I recommend the authors’ highly illuminating analysis of today’s gender pay gap.

Sometimes the authors show us up for the grumpy tabloid zombies we really are. Apparently a sizeable sample of us, quizzed in 2014, opined that 15 per cent of all our girls under sixteen were pregnant. The lack of mathematical nous here is as disheartening as the misanthropy. The actual figure was a still worryingly high 0.5 per cent, or one in 200 girls. A 10-year Teenage Pregnancy Strategy was created to tackle the problem, and the figure for 2018 — 16.8 conceptions per 1000 women aged between 15 and 17 — is the lowest since records began.

This is why census records are important: they inform enlightened and effective government action. The statistician John Rickman said as much in a paper written in 1796, but his campaign for a national census only really caught on two years later, when the clergyman Thomas Malthus scared the living daylights out of everyone with his “Essay on the Principle of Population”. Three years later, ministers rattled by Malthus’s catalogue of checks on the population of primitive societies — war, pestilence, famine, and the rest — peeked through their fingers at the runaway population numbers for 1801.

The population of England then was the same as the population of Greater London now. The population of Scotland was almost exactly the current population of metropolitan Glasgow.

Better to have called it “The Official History of Britons”. Chapter by chapter, the authors lead us (wisely, if not too well) from Birth, through School, into Work and thence down the maw of its near neighbour, Death, reflecting all the while on what a difference two hundred years have made to the character of each life stage.

The character of government has changed, too. Rickman wanted a census because he and his parliamentary colleagues had almost no useful data on the population they were supposed to serve. The job of the ONS now, the writers point out, “is to try to make sure that policymakers and citizens can know at least as much about their populations and economies as the internet behemoths.”

It’s true: a picture of the state of the nation taken every ten years just doesn’t provide the granularity that could be fetched, more cheaply and more efficiently, from other sources: “smaller surveys, Ordnance Survey data, GP registrations, driving licence details…”

But this too is true: near where I live there is a pedestrian crossing. There is a button I can push, to change the lights, to let me cross the road. I know that in daylight hours, the button is a dummy, that the lights are on a timer, set in some central office, to smooth the traffic flow. Still, I press that button. I like that button. I appreciate having my agency acknowledged, even in a notional, fanciful way.

Next year, 2021, I will tell the census who and what I am. It’s my duty as a citizen, and also my right, to answer how I will. If, in 2031, the state decides it does not need to ask me who I am, then my idea of myself as a citizen, notional as it is, fanciful as it is, will be impoverished.