Making time for mistakes

Reading In the Long Run: The future as a political idea by Jonathan White for the Financial Times, 2 February 2024

If you believe there really is no time for political mistakes on some crucial issue — climate change, say, or the threat of nuclear annihilation — then why should you accept a leader you did not vote for, or endorse an election result you disagree with? Jonathan White, a political sociologist at the London School of Economics, has written a short book about a coming crisis that democratic politics, he argues, cannot possibly accommodate: the world’s most technologically advanced democracies are losing their faith in the future.

This is not a new thought. In her 2007 book The Shock Doctrine, Naomi Klein predicted that governments geared to crisis management would turn ever more dictatorial as their citizens grew ever more distracted and malleable. In In the Long Run, White is less alarmist but more pessimistic, showing how liberal democracy blossoms, matures, and ultimately shrivels through the way it imagines its own future. Can it survive in a world where high-school students are saying things like ‘I don’t understand why I should be in school if the world is burning’?

A broken constitution, an electorate that’s ignorant or misguided, institutions that are moribund and full of the same old faces, year after year — these are not nearly the serious problems for democracy they appear to be, says White: none of them undermines the ideal, so long as we believe that there’s a process of self-correction going on.

Democracy is predicated on an idea of improvability. It is, says White, “a future-oriented form, always necessarily unfinished”. The health of a democracy lies not in what it thinks of itself now, but in what hopes it has for its future. A few pages on France’s Third Republic — a democratic experiment that, from the latter part of the 19th century to the first decades of the 20th, lurched through countless crises and 103 separate cabinets to become the parliamentary triumph of its age — would have made a wonderful digression here, but this is not White’s method. In the Long Run relies more on pithy argument than on historical colour, offering us an exhilarating if sometimes dizzyingly abstract historical fly-through of the democratic experiment.

Democracy arose as an idea in the Enlightenment, via the evolution of literary Utopias. White pays special attention to Louis-Sébastien Mercier’s 1771 novel The Year 2440: A Dream if Ever There Was One, for dreaming up institutions that are not just someone’s good idea, but actual extensions of the people’s will.

Operating increasingly industrialised democracies over the course of the 19th century created levels of technocratic management that inevitably got in the way of the popular will. When that process came to a crisis in the early years of the 20th century, much of Europe faced a choice between command-and-control totalitarianism and berserk fascist populism.

And then fascism, in its determination to remain responsive and intuitive to the people’s will, evolved into Nazism, “an ideology that was always seeking to shrug itself off,” White remarks; “an -ism that could affirm nothing stable, even about itself”. Its disastrous legacy spurred post-war efforts to constrain the future once more, “subordinating politics to economics in the name of stability.” With this insightful flourish, the reader is sent reeling into the maw of the Cold War decades, which turned politics into a science and turned our tomorrows into classifiable resources and tools of competitive advantage.

White writes well about 20th-century ideologies and their endlessly postponed utopias. The blandishments of Stalin and Mao and other socialist dictators hardly need glossing. Mind you, capitalism itself is just as anchored in the notion of jam tomorrow: what else but a faith in the infinitely improvable future could have us replacing our perfectly serviceable smartphones, year after year after year?

And so to the present: has runaway consumerism now brought us to the brink of annihilation, as the Greta Thunbergs of this world claim? For White’s purposes here, the truth of this claim matters less than its effect. Given climate change, spiralling inequality, and the spectres of AI-driven obsolescence, worsening pandemics and even nuclear annihilation, who really believes tomorrow will look anything like today?

How might democracy survive its own obsession with catastrophe? It is essential, White says, “not to lose sight of the more distant horizons on which progressive interventions depend.” But this is less a serious solution, more an act of denial. White may not want to grasp the nettle, but his readers surely will: by his logic (and it seems ungainsayable), the longer the present moment lasts, the worse it’ll be for democracy. He may not have meant this, but White has written a very frightening book.

So that was me told

Visiting Voyage to the Edge of Imagination at London’s Science Museum, 9 November 2022

London’s Science Museum has come up with a solution to the age-old problem of how to keep visitors from bunching up while they tour an exhibition. At an awkward corner of Science Fiction: Voyage to the Edge of Imagination, ALANN (for Algorithmic Artificial Neural Network) announces that all the air is about to leave the room (sorry: “deck”). To avoid the hard vacuum of outer space, please move along.

Little fillips of jeopardy enliven this whistle-stop tour of science, technology and imagination — not a show about science fiction (and in fact London’s had one of those quite recently: the Barbican’s superb 2017 Into the Unknown) so much as a show that does science fiction. The gallery is arranged as a story, which begins once a Pan Galactic Starlines shuttle drops us aboard a friendly if bemused alien craft, the Azimuth. The Azimuth’s resident AI is orbiting the Earth and pondering the curious nature of human progress, which puts imagination and storytelling ahead of practical action. It seems to ALANN — who jumps from screen to screen, keeping us company throughout — that using stories to imagine the future is a weirdly double-edged way of going about things. Humans could just as easily be steering towards nightmares as towards happy outcomes. What will their future hold?

ALANN bottles it in the end, of course — our destiny turns out to be “uncomputable”. Oh for a show that had punters running screaming for the exits! Isn’t that what sf is for?

Assembled on a conspicuously low budget, and featuring mainly film props and costumes (which at the best of times never look that good in real life) and replicas (some of them jolly cheap), this “voyage to the edge of imagination” stands or falls by its wits. Next to a cheery video about trying to communicate with humpback whales as a rehearsal for alien “first contact”, some bright spark has placed a life-size xenomorph from the film Alien. Iron Man’s helmet is there to promote our eventual cyborgisation, melding metal and flesh to better handle the technological future — but so, mind you, is Darth Vader’s. The sheer lack of stuff here is disconcerting, but at the end of it all we have explored space, bent spacetime, communicated with aliens, and become posthuman, so clearly something is working. Imagine an excellent nest constructed from three sticks.

What this show might have achieved with a bigger budget is revealed in Glyn Morgan’s excellent accompanying book (Thames and Hudson, £30) featuring interviews with the likes of Charlie Jane Anders and Chen Qiufan.

This being the Science Museum, it’s hardly surprising that the exhibition’s final spaces are given over to pondering science fiction’s utility. Futurologist Brian David Johnson is on screen to explain how fiction can be used to prototype ideas in the real world. (Actual science fiction writers have a word for this: they call it “plagiarism”.) Whether you give credence to Johnson’s belief that sf is there to make the world a better place is a glass-half-full, glass-half-empty sort of question. “Applied science fiction” can be jolly crass. In a cabinet near Mr Johnson are a couple of copies of Marvel’s Captain Planet. In the 1990s, we are told, Captain Planet “empowered a new generation to be environmentally aware.” As someone who was there, I can promise you he jolly well didn’t.

But as I turned the next corner, the sneer still on my lips, I confronted as fine an example of imagination in action as you could wish for: Tilly Lockey, a couple of days off her seventeenth birthday, had been invited along to the press launch, and was skipping about like a dervish, taking photographs of her friend. In the gloom, I couldn’t quite see which bionic arms she was wearing — the ones based on the Deus Ex video game series, or the ones she’d received in 2019, designed by the team creating Alita: Battle Angel.

So that was me told.

668 televisions (some of them broken)

Visiting the Nam June Paik exhibition at Tate Modern for New Scientist, 27 November 2019

A short drive out of Washington DC, in an anonymous industrial unit, there is an enormous storage space crammed to the brim with broken television sets, and rolling stack shelving piled with typewriters, sewing machines and crudely carved coyotes.

This is the archive of the estate of Nam June Paik, the man who predicted the internet, the Web, YouTube, MOOCs, and most other icons of the current information age; an artist who spent much of his time engineering, dismantling, reusing, swapping out components, replacing old technology with better technology, delivering what he could of his vision with the components available to him. Cathode ray tube televisions. Neon. Copper. FORTRAN punch cards. And a video synthesizer, designed with the Tokyo artist-engineer Shuya Abe in 1969. The signature psychedelic video effects of Top of the Pops and MTV began life here.

Paik was born in Seoul in 1932, during the Japanese occupation of Korea, and educated in Germany, where he met the composers Karlheinz Stockhausen and John Cage. A fascinating retrospective show currently at London’s Tate Modern celebrates his involvement with that loose confederacy of artist-anarchists known as Fluxus. (Yoko Ono was a patron. David Bowie and Laurie Anderson were hangers-on.)

Beneath Paik’s celebrated, celebrity-stuffed concerts, openings and “happenings” lies what amounts, in the absence of Paik’s controlling intelligence (he died in 2006), to a pile of junk: 668 televisions, some of them broken; a black box the size of a double refrigerator, containing the hardware to drive one of Paik’s massive “matrices”, Megatron/Matrix, an eight-channel, 215-screen video wall, now in pieces, stored in innumerable tea chests, a nightmare to catalogue, never mind reconstruct.

The trick for Saisha Grayson, the Smithsonian American Art Museum’s curator of time-based media, and Lynn Putney its associate registrar, is to distinguish the raw material of Paik’s work from the work itself. Then curators like Tate Modern’s Sook Kyung Lee must interpret that work for a new generation, using new technology. Because let’s face it: in the end, more or less everything Paik used to make his art will end up in the bin. Consumer electronics aren’t like a painter’s pigments, which can be analysed and copied, or like a sculptor’s marble, which can, at a pinch, be repaired.

“Through Paik’s estate we are getting advice and guidance about what the artist really intended to achieve,” Lee explains, “and then we are simulating those things with new technology.”

Paik’s video walls, the works by which he’s best remembered, are monstrously heavy and absurdly delicate. But the Tate has been able to recreate Paik’s Sistine Chapel for this show: video projectors fill a room with a blizzard of cultural and pop-cultural imagery from around the world, a visual melting pot reflective of Paik’s vision of a technological utopia, in which “telecommunication will become our springboard for new and surprising human endeavors.” The projectors are new, but the feel of this recreated piece is not so very different from the 1994 original.

To stand here, bombarded by Bowie and Nixon and Mongolian throat singers and all the other flitting, flickering icons of Paik’s madcap future, is to remember all our hopes for the information age: “Video-telephones, fax machines, interactive two-way television… and many other variations of this kind of technology are going to turn the television set into an «expanded-media» telephone system with thousands of novel uses,” Paik enthused in 1974, “not only to serve our daily needs, but to enrich the quality of life itself.”

Visit a hydrogen utopia

On Tuesday 3 December at 7pm I’ll be chairing a discussion at London’s Delfina Foundation about energy utopias, and the potential of hydrogen as a locally produced sustainable energy source. Speakers include the artist Nick Laessing, Rokiah Yaman (Project Manager, LEAP closed-loop technologies) and Dr Chiara Ambrosio (History and Philosophy of Science, UCL). There may also be food, assuming Nick’s hydrogen stove behaves itself. More details here.

Stanley Kubrick at the Design Museum

The celebrated film director Stanley Kubrick never took the future for granted. In films as diverse as Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb (1964) and A Clockwork Orange (1971), Kubrick’s focus was always savagely humane, unpicking the way the places we inhabit make us think and feel. At the opening of a new exhibition at the London Design Museum in Holland Park, David Stock and I spoke to co-curator Adriënne Groen about Kubrick’s most scientifically inflected film, 2001: A Space Odyssey (1968), and how Kubrick masterminded a global effort to imagine one possible future: part technological utopia, part sterile limbo, and, more than 50 years since its release, as gripping as hell.

You can see the interview here.

How Stanley Kubrick’s collaboration with science fiction writer Arthur C. Clarke led to 2001 is well known. “The ‘really good’ science-fiction movie is a great many years overdue,” Clarke enthused, as the men began their work on a project with the working title Journey Beyond the Stars.

For those who want a broader understanding of how Kubrick gathered, enthused and sometimes (let’s be brutally frank here) exploited the visionary talent available to him, the Design Museum’s current exhibition is essential viewing. There are prototypes of the pornographic furniture from the opening dolly shot of A Clockwork Orange, inspired by the work of artist Allen Jones but fashioned by assistant production designer Liz Moore when Jones decided not to hitch his wagon – and reputation – to Kubrick’s controversial vision.

But it’s the names that recur again and again, from film to film, over decades of creative endeavour, that draw one in. The costume designer Milena Canonero was a Kubrick regular and, far from being swamped, immeasurably enriched Kubrick’s vision. (There’s a wonderful production photograph here of actor Malcolm McDowell trying on some of her differently styled droog hats.)

Kubrick was fascinated by the way people respond to being regimented – by the architectural brutalism of the Thamesmead estate in A Clockwork Orange, or by a savage gunnery sergeant in Full Metal Jacket, or by their own fetishism in Eyes Wide Shut. Kubrick’s fascination with how people think and behave is well served by this show, which will give anyone of a psychological bent much food for thought.


A world that has run out of normal

Reading The Uninhabitable Earth: A Story of the Future by David Wallace-Wells for the Telegraph, 16 February 2019

As global temperatures rise, and the mean sea-level with them, I have been tracing the likely flood levels of the Thames Valley, to see which of my literary rivals will disappear beneath the waves first. I live on a hill, and what I’d like to say is: you’ll be stuck with me a while longer than most. But on the day I had set aside to consume David Wallace-Wells’s terrifying account of climate change and the future of our species (there isn’t one), the water supply to my block was unaccountably cut off.

Failing to make a cup of tea reminded me, with some force, of what ought to be obvious: that my hill is a post-apocalyptic death-trap. I might escape the floods, but without clean water, food or power, I’ll be lucky to last a week.

The first half of The Uninhabitable Earth is organised in chapters that deal separately with famines, floods, fires, droughts, brackish oceans, toxic winds, war, and all the other manifest effects of anthropogenic climate change (there are many more than four horsemen in this Apocalypse). At the same time, the author reveals, paragraph by paragraph, how these ever-more-frequent disasters join up in horrific cascades, all of which erode human trust to the point where civic life collapses.

The human consequences of climate disaster are going to be ugly. When a million refugees from the Syrian civil war started arriving in Europe in 2015, far-right parties entered mainstream political discourse for the first time in decades. By 2050, the United Nations predicts, climate change will have created some 200 million refugees. So buckle up. The disgust response with which we greet strangers on our own land is something we conscientiously suppress these days. But it’s still there: an evolved response that in less sanitary times got us through more than one plague.

That such truths go largely unspoken says something about the cognitive dissonance in which our culture is steeped. We just don’t have the mental tools to hold climate change in our heads. Amitav Ghosh made this clear enough in The Great Derangement (2016), which explains why the traditional novel is so hopeless at handling a world that has run out of normal, forgotten how to repeat itself, and will never be any sort of normal again.

Writers, seeking to capture the contemporary moment, resort to science fiction. But the secret, sick appeal of post-apocalyptic narratives, from Richard Jefferies’s After London on, is that in order to be stories at all their heroes must survive. You can only push nihilism so far. J G Ballard couldn’t escape that bind. Neither could Cormac McCarthy. Despite our most conscientious attempts at utter bloody bleakness, the human spirit persists.

Wallace-Wells admits as much. When he thinks of his own children’s future, denizens of a world plunging ever deeper into its sixth major extinction event, he admits that despair melts and his heart fills with excitement. Humans will cling to life on this ever less habitable earth for as long as they can. Quite right, too.

Wallace-Wells is deputy editor of New York magazine. In July 2017 he wrote a cover story outlining worst-case scenarios for climate change. His pessimism proved salutary: The Uninhabitable Earth has been much anticipated.

In the first half of the book the author channels former US vice-president Al Gore, delivering a blizzard of terrifying facts and knocking the socks off his predecessor’s An Inconvenient Truth (2006) – not thanks to his native gifts (considerable as they are) but because the climate has deteriorated since then to the point where its declines can now be observed directly, and measured over the course of a human lifetime.

More than half the extra carbon dioxide released into the atmosphere by burning fossil fuels has been added in the past 30 years. This means that “we have done as much damage to the fate of the planet and its ability to sustain human life and civilization since Al Gore published his first book on climate than in all the centuries – all the millennia – that came before.” Oceans are carrying at least 15 per cent more heat energy than they did in 2000. Some 22 per cent of the earth’s landmass was altered by humans just between 1992 and 2015. In Sweden, in 2018, forests in the Arctic Circle went up in flames. On and on like this. Don’t shoot the messenger, but “we have now engineered as much ruin knowingly as we ever managed in ignorance.”

The trouble is not that the future is bleak. It’s that there is no future. We’re running out of soil. In the United States, it’s eroding ten times faster than it is being replaced. In China and India, soil is disappearing thirty to forty times as fast. Wars over fresh water have already begun. The CO2 in the atmosphere has reduced the nutrient value of plants by about thirty per cent since the 1950s. Within the lifetimes of our children, the hajj will no longer be a feature of Islamic practice: the heat in Mecca will be such that walking seven times counterclockwise around the Kaaba will kill you.

This book may come to be regarded as the last truly great climate assessment ever made. (Is there even time left to pen another?) Some of the phrasing will give persnickety climate watchers conniptions. (Words like “eventually” will be a red rag for them, because they catalyse the reader’s imagination without actually meaning anything.) But the research is extensive and solid, the vision compelling and eminently defensible.

Alas, The Uninhabitable Earth is also likely to be one of the least-often finished books of the year. I’m not criticising the prose, which is always clear and engaging and often dazzling. It’s simply that the more we are bombarded with facts, the less we take in. Treating the reader like an empty bucket into which facts may be poured does not work very well, and works even less well when people are afraid of what you are telling them. “If you have made it this far, you are a brave reader,” Wallace-Wells writes on page 138. Many will give up long before then. Climate scientists have learned the hard way how difficult it is to turn fact into public engagement.

The second half of The Uninhabitable Earth asks why our being made aware of climate disaster doesn’t lead to enough reasonable action being taken against it. There’s a nuanced mathematical account to be written of how populations reach carrying capacity, run out of resources, and collapse; and an even more difficult book that will explain why we ever thought human intelligence would be powerful enough to elude this stark physical reality.

The final chapters of The Uninhabitable Earth provide neither, but neither are they narrowly partisan. Wallace-Wells mostly resists the temptation to blame the mathematical inevitability of our species’ growth and decline on human greed. The worst he finds to say about the markets and market capitalism – our usual stock villains – is not that they are evil or psychopathic (or certainly no more evil or psychopathic than the other political experiments we’ve run in the past 150 years) but that they are not nearly as clever as we had hoped they might be. There is a twisted magnificence in the way we are exploiting, rather than adapting to, the End Times. (Whole Foods in the US, we are told, is now selling “GMO-free” fizzy water.)

The Paris accords of 2016 established keeping warming to just two degrees as a global goal. Only a few years ago we were hoping for a rise of just 1.5 degrees. What’s the difference? According to the IPCC, that half-degree concession spells death for about 150 million people. Without significantly improved pledges, however, the IPCC reckons that instituting the Paris accords overnight (and no-one has) will still see us topping 3.2 degrees of warming. At this point the Antarctic’s ice sheets will collapse, drowning Miami, Dhaka, Shanghai, Hong Kong and a hundred other cities around the world. (Not my hill, though.)

And to be clear: this isn’t what could happen. This is what is already guaranteed to happen. Greenhouse gases work on too long a timescale to avoid it. “You might hope to simply reverse climate change,” writes Wallace-Wells; “you can’t. It will outrun all of us.”

“How widespread alarm will shape our ethical impulses toward one another, and the politics that emerge from those impulses,” says Wallace-Wells, “is among the more profound questions being posed by the climate to the planet of people it envelops.”

My bet is the question will never tip into public consciousness: that, on the contrary, we’ll find ways, through tribalism, craft and mischief, to engineer what Wallace-Wells dubs “new forms of indifference”, normalising climate suffering, and exploiting novel opportunities, even as we live and more often die through times that will never be normal again.

The dreams our stuff is made of

To introduce a New Scientist speaking event at London’s Barbican Centre on 29 June, I took a moment to wonder why the present looks so futuristic.

Long before we can build something for real, we know how it will work and what it will require by way of materials and design. The steampunk genre gorges on Victorian designs for steam-powered helicopters (yes, there were such things) and the like, with films such as Hugo (2011) and gaming apps such as 80 Days (2014) telescoping the hard business of materials science into the twinkling of a mad professor’s eye. Always, our imaginations run ahead of our physical abilities.

At the same time, science fiction is not at all naive, and almost all of it is about why our dreams of transcendence through technology fail: why the machine goes wrong, or works towards an unforeseen (sometimes catastrophic) end. Blade Runner (1982) didn’t so much inspire the current deluge of in-yer-face urban advertising as realise our worst nightmares about it. Short Circuit (1986) knew what was wrong with robotic warfare long before the first Predator aircraft took to the skies.

So yes, science fiction enters clad in the motley of costume drama: polished, chromed, complete, not infrequently camp. But there’s always a twist, a tear, a weak seam. This genre takes finery from the prop shop and turns it into something vital – a god, a golem, a puzzle, a prison. In science fiction, it matters where you are and how you dress, what you walk on and even what you breathe. All this stuff is contingent, you see. It slips about. It bites.

Sometimes, in this game of “It’s behind you!”, less is more. In Alphaville (1965), futuristic secret agent Lemmy Caution explores the streets of a distant space city, yet there is no set dressing to Alphaville: it is all dialogue, all cut – nothing more than a rhetorical veil cast over contemporary Paris.

More usually, you’ll grab whatever’s to hand – tinsel and Panstick and old gorilla costumes. Two years old by 1965, at least by Earth’s reckoning, William Hartnell’s Time Lord was tearing up the set of Doctor Who and would, in other bodies and other voices, go on tearing up, tearing down and tearing through his fans’ expectations for the next 24 years, production values be damned.

Bigger than its machinery, bigger even than its protagonist, Doctor Who was, in that first, long outing, never in any sense realistic, and that was its strength. You never knew where you’d end up next: a comedy, a horror flick or a Western-style showdown. The Doctor’s sonic screwdriver was the whole point. It said, we’re bolting this together as we go along.

What hostile critics say is true, in that science fiction sometimes is more about the machines than about the people. Metropolis (1927) director Fritz Lang wanted a real rocket launch for the premiere of Frau im Mond (1929) and roped in no less a physicist than Hermann Oberth to build it for him. When his 1.8-metre-tall liquid-propellant rocket came to nought, Oberth set about building a rocket 11 metres tall powered by liquid oxygen. They were going to launch it from the roof of the cinema. Luckily, they ran out of money.

The technocratic ideal may seem sterile now, but its promise was compelling: that we’d all live lives of ease and happiness in space, the moon or Mars, watched over by loving machines – the Robinson family’s stalwart Robot B-9 from Lost in Space, perhaps.

Once Star Trek’s Federation established heaven on Earth (and elsewhere), however, we hit a sizeable snag. Gene Roddenberry was right to have pitched his show to Desilu Studios as “wagon train to the stars”, for as Dennis Sisterson’s charming silent parody Steam Trek: The moving picture (1994) demonstrates, the moment you actually reach California, the technology that got you there loses its specialness.

If the teleportation device is not the point of your story, then you may as well use a rappelling rope. Why spend your set budget on an impressive-looking telescope? Why not just have your actor point out of the window? The day your show’s props become merely props is the day you’re not making science fiction any more.

Stanisław Lem: The man with the future inside him


From the 1950s, science fiction writer Stanisław Lem began firing out prescient explorations of our present and far beyond. His vision is proving unparalleled.
For New Scientist, 16 November 2016

“POSTED everywhere on street corners, the idiot irresponsibles twitter supersonic approval, repeating slogans, giggling, dancing…” So it goes in William Burroughs’s novel The Soft Machine (1961). Did he predict social media? If so, he joins a large and mostly deplorable crowd of lucky guessers. Did you know that in his 1948 novel Space Cadet, Robert Heinlein invented microwave food? Do you care?

There’s more to futurology than guesswork, of course, and not all predictions are facile. Writing in the 1950s, Ray Bradbury predicted earbud headphones and elevator muzak, and foresaw the creeping eeriness of today’s media-saturated shopping mall culture. But even Bradbury’s guesses – almost everyone’s guesses, in fact – tended to exaggerate the contemporary moment. More TV! More suburbia! Videophones and cars with no need of roads. The powerful, topical visions of writers like Frederik Pohl and Arthur C. Clarke are visions of what the world would be like if the 1950s (the 1960s, the 1970s…) went on forever.

And that is why Stanisław Lem, the Polish satirist, essayist, science fiction writer and futurologist, had no time for them. “Meaningful prediction,” he wrote, “does not lie in serving up the present larded with startling improvements or revelations in lieu of the future.” He wanted more: to grasp the human adventure in all its promise, tragedy and grandeur. He devised whole new chapters to the human story, not happy endings.

And, as far as I can tell, Lem got everything – everything – right. Less than a year before Russia and the US played their game of nuclear chicken over Cuba, he nailed the rational madness of cold-war policy in his book Memoirs Found in a Bathtub (1961). And while his contemporaries were churning out dystopias in the Orwellian mould, supposing that information would be tightly controlled in the future, Lem was conjuring with the internet (which did not then exist), and imagining futures in which important facts are carried away on a flood of falsehoods, and our civic freedoms along with them. Twenty years before the term “virtual reality” appeared, Lem was already writing about its likely educational and cultural effects. He also coined a better name for it: “phantomatics”.

The books on genetic engineering passing my desk for review this year have, at best, simply reframed ethical questions Lem set out in Summa Technologiae back in 1964 (though, shockingly, the book was not translated into English until 2013). He dreamed up all the usual nanotechnological fantasies, from spider silk space-elevator cables to catastrophic “grey goo”, decades before they entered the public consciousness. He wrote about the technological singularity – the idea that artificial superintelligence would spark runaway technological growth – before Gordon Moore had even had the chance to cook up his “law” about the exponential growth of computing power. Not every prediction was serious. Lem coined the phrase “Theory of Everything”, but only so he could point at it and laugh.

He was born on 12 September 1921 in Lwów, Poland (now Lviv in Ukraine). His abiding concern was the way people use reason as a white stick as they steer blindly through a world dominated by chance and accident. This perspective was acquired early, while he was being pressed up against a wall by the muzzle of a Nazi machine gun – just one of several narrow escapes. “The difference between life and death depended upon… whether one went to visit a friend at 1 o’clock or 20 minutes later,” he recalled.

Though a keen engineer and inventor – in school he dreamed up the differential gear and was disappointed to find it already existed – Lem’s true gift lay in understanding systems. His finest childhood invention was a complete state bureaucracy, with internal passports and an impenetrable central office.

He found the world he had been born into absurd enough to power his first novel (Hospital of the Transfiguration, 1955), and might never have turned to science fiction had he not needed to leap heavily into metaphor to evade the attentions of Stalin’s literary censors. He did not become really productive until 1956, when Poland enjoyed a post-Stalinist thaw, and in the 12 years following he wrote 17 books, among them Solaris (1961), the work for which he is best known by English speakers.

Solaris is the story of a team of distraught experts in orbit around an inscrutable and apparently sentient planet, trying to come to terms with its cruel gift-giving (it insists on “resurrecting” their dead). Solaris reflects Lem’s pessimistic attitude to the search for extraterrestrial intelligence. It’s not that alien intelligences aren’t out there, Lem says, because they almost certainly are. But they won’t be our sort of intelligences. In the struggle for control over their environment they may as easily have chosen to ignore communication as respond to it; they might have decided to live in a fantastical simulation rather than take their chances any longer in the physical realm; they may have solved the problems of their existence to the point at which they can dispense with intelligence entirely; they may be stoned out of their heads. And so on ad infinitum. Because the universe is so much bigger than all of us, no matter how rigorously we test our vaunted gift of reason against it, that reason is still something we made – an artefact, a crutch. As Lem made explicit in one of his last novels, Fiasco (1986), extraterrestrial versions of reason and reasonableness may look very different to our own.

Lem understood the importance of history as no other futurologist ever has. What has been learned cannot be unlearned; certain paths, once taken, cannot be retraced. Working in the chill of the cold war, Lem feared that our violent and genocidal impulses are historically constant, while our technical capacity for destruction will only grow.

Should we find a way to survive our own urge to destruction, the challenge will be to handle our success. The more complex the social machine, the more prone it will be to malfunction. In his hard-boiled postmodern detective story The Chain of Chance (1975), Lem imagines a very near future that is crossing the brink of complexity, beyond which forms of government begin to look increasingly impotent (and yes, if we’re still counting, it’s here that he makes yet another on-the-money prediction by describing the marriage of instantly accessible media and global terrorism).

Say we make it. Say we become the masters of the universe, able to shape the material world at will: what then? Eventually, our technology will take over completely from slow-moving natural selection, allowing us to re-engineer our planet and our bodies. We will no longer need to borrow from nature, and will no longer feel any need to copy it.

At the extreme limit of his futurological vision, Lem imagines us abandoning the attempt to understand our current reality in favour of building an entirely new one. Yet even then we will live in thrall to the contingencies of history and accident. In Lem’s “review” of the fictitious Professor Dobb’s book Non Serviam, Dobb, the creator, may be forced to destroy the artificial universe he has created – one full of life, beauty and intelligence – because his university can no longer afford the electricity bills. Let’s hope we’re not living in such a simulation.

Most futurologists are secret utopians: they want history to end. They want time to come to a stop; to author a happy ending. Lem was better than that. He wanted to see what was next, and what would come after that, and after that, a thousand, ten thousand years into the future. Having felt its sharp end, he knew that history was real, that the cause of problems is solutions, and that there is no perfect world, neither in our past nor in our future, assuming that we have one.

By the time he died in 2006, this acerbic, difficult, impatient writer who gave no quarter to anyone – least of all his readers – had sold close to 40 million books in more than 40 languages, and earned praise from futurologists such as Alvin Toffler of Future Shock fame, scientists from Carl Sagan to Douglas Hofstadter, and philosophers from Daniel Dennett to Nicholas Rescher.

“Our situation, I would say,” Lem once wrote, “is analogous to that of a savage who, having discovered the catapult, thought that he was already close to space travel.” Be realistic, is what this most fantastical of writers advises us. Be patient. Be as smart as you can possibly be. It’s a big world out there, and you have barely begun.


The tomorrow person


You Belong to the Universe: Buckminster Fuller and the future by Jonathon Keats
reviewed for New Scientist, 11 June 2016.


IN 1927 the suicidal manager of a building materials company, Richard Buckminster (“Bucky”) Fuller, stood by the shores of Lake Michigan and decided he might as well live. A stern voice inside him intimated that his life after all had a purpose, “which could be fulfilled only by sharing his mind with the world”.

And share it he did, tirelessly for over half a century, with houses hung from masts, cars with inflatable wings, a brilliant and never-bettered equal-area map of the world, and concepts for massive open-access distance learning, domed cities and a new kind of playful, collaborative politics. The tsunami that Fuller’s wing flap set in motion is even now rolling over us, improving our future through degree shows, galleries, museums and (now and again) in the real world.

Indeed, Fuller’s “comprehensive anticipatory design scientists” are ten-a-penny these days. Until last year, they were being churned out like sausages by the design interactions department at the Royal College of Art, London. Futurological events dominate the agendas of venues across New York, from the Institute for Public Knowledge to the International Center of Photography. “Science Galleries”, too, are popping up like mushrooms after a spring rain, from London to Bangalore.

In You Belong to the Universe, Jonathon Keats, himself a critic, artist and self-styled “experimental philosopher”, looks hard into the mirror to find what of his difficult and sometimes pantaloonish hero may still be traced in the lineaments of your oh-so-modern “design futurist”.

Be in no doubt: Fuller deserves his visionary reputation. He grasped in his bones, as few have since, the dynamism of the universe. At the age of 21, Keats writes, “Bucky determined that the universe had no objects. Geometry described forces.”

A child of the aviation era, he used materials sparingly, focusing entirely on their tensile properties and on the way they stood up to wind and weather. He called this approach “doing more with less”. His light and sturdy geodesic dome became an icon of US ingenuity. He built one wherever his country sought influence, from India to Turkey to Japan.

Chapter by chapter, Keats asks how the future has served Fuller’s ideas on city planning, transport, architecture, education. It’s a risky scheme, because it invites you to set Fuller’s visions up simply to knock them down again with the big stick of hindsight. But Keats is far too canny for that trap. He puts his subject into context, works hard to establish what would and would not be reasonable for him to know and imagine, and explains why the history of built and manufactured things turned out the way it has, sometimes fulfilling, but more often thwarting, Fuller’s vision.

This ought to be a profoundly wrong-headed book, judging one man’s ideas against the entire recent history of Spaceship Earth (another of Fuller’s provocations). But You Belong to the Universe says more about Fuller and his future in a few pages than some whole biographies, and renews one’s interest – if not faith – in all those graduate design shows.