The ambition of transhumanism

Mark O’Connell’s To Be a Machine, a travelogue of strange journeys and bizarre encounters among transhumanists, won the 2018 Wellcome Book Prize. Wearing my New Scientist hat I asked O’Connell how he managed to give transhumanism a human face – despite his own scepticism.

Has transhumanism ever made personal sense to you?

Transhumanism’s critique of the human condition, its anxiety around having to die — that’s something I have some sympathy with, for sure, and that’s where the book began. The idea was for the door to some kind of conversion to be always open. But I was never really convinced that the big ideas in transhumanism, things like mind-uploading and so on, were really plausible. The most interesting question for me was, “Why would anyone want this?”

A lot of transhumanist thought is devoted to evading death. Do the transhumanists you met get much out of life?

I wouldn’t want to be outright prescriptive about what it means to live a meaningful life. I’m still trying to figure that one out myself. I think if you’re so devoted to the idea that we can outrun death, and that death makes life utterly meaningless, then you are avoiding the true animal nature of what it means to be human. But I find myself moving back and forth between that position and one that says, you know what, these people are driven by a deep, Promethean project. I don’t have the deep desire to shake the world to its core that these people have. In that sense, they’re living life to its absolute fullest.

What most sticks in your mind from your researches for the book?

The place that sticks in my mind most clearly is Alcor’s cryonics life-extension facility. In terms of just the visuals, it’s bizarre. You’re walking around what’s known as a “patient care bay”, among these gigantic stainless steel cylinders filled with corpses and severed heads that they’re going to unfreeze once a cure for death is found. The thing that really grabbed me was the juxtaposition between the sci-fi level of the thing and the fact that it was situated in a business park on the outskirts of Phoenix, next door to Big D’s Floor Covering Supplies and a tile showroom.

They do say the future arrives unevenly…

I think we’re at a very particular cultural point in terms of our relationship to “the future”. We aren’t really thinking of science as this boundless field of possibility any more, and so it seems a bit of a throwback, like something from an Arthur C. Clarke story. It’s like the thing with Elon Musk. Even the global problems he identifies — rogue AI, and finding a new planet that we can live on to perpetuate the species — seem so completely removed from actual problems that people are facing right now that they’re absurd. A handful of people who seem to wield almost infinite technological resources are devoting themselves to completely speculative non-problems. They’re not serious, on some basic level.

Are you saying transhumanism is a product of an unreal Silicon Valley mentality?

The big cultural influence over transhumanism, the thing that took it to the next level, seems to have been the development of the internet in the late 1990s. That’s when it really became a distinct social movement, as opposed to a group of more-or-less isolated eccentric thinkers and obsessives.

But it’s very much a global movement. I met a lot of Europeans – Russia in particular has a long prehistory of attempts to evade death. But most transhumanists have tended to end up in the US and specifically in Silicon Valley. I suppose that’s because these kinds of ideas get most traction there. You don’t get people laughing at you when you mention wanting to live forever.

The one person I really found myself grappling with, in the most profound and unsettling way, was Randal Koene. It’s his idea of uploading the human mind to a computer that I find most deeply troubling and offensive, and kind of absurd. As a person and as a communicator, though, Koene was very powerful. A lot of people who are pushing forward these ideas — people like Ray Kurzweil — tend to be impresarios. Randal was the opposite. He was very quietly spoken, very humble, very much the scientist. There were moments he really pushed me out of my scepticism – and I liked him.

Is transhumanism science or religion?

It’s not a religion: there’s no God, for instance. But at the same time I think it very obviously replaces religion in terms of certain basic yearnings and anxieties. The anxiety about death is the obvious one.

There is a very serious religious subtext to all of transhumanism’s aspirations. And at the same time, transhumanists absolutely reject that thinking, because it tends to undermine their perception of themselves as hardline rationalists and deeply science-y people. Mysticism is quite toxic to their sense of themselves.

Will their future ever arrive?

On one level, it’s already happening. We’re walking round in this miasma of information and data, almost in a state of merger with technology. That’s what we’re grappling with as a culture. But if that future means an actual merger of artificial intelligence and human intelligence, I think that’s a deeply terrifying idea, and not, touch wood, something that is ever going to happen.

Should we be worried?

That is why I’m now writing a book about apocalyptic anxieties. It’s a way to try to get to grips with our current political and cultural moment.

To Be a Machine: Adventures among cyborgs, utopians, hackers, and the futurists solving the modest problem of death
Mark O’Connell
Granta/Doubleday

Elements of surprise

Reading Vera Tobin’s Elements of Surprise for New Scientist, 5 May 2018

How do characters and events in fiction differ from those in real life? And what is it about our experience of life that fiction exaggerates, omits or captures to achieve its effects?

Effective fiction is Vera Tobin’s subject. And as a cognitive scientist, she knows how pervasive and seductive it can be, even in – or perhaps especially in – the controlled environment of an experimental psychology lab.

Suppose, for instance, you want to know which parts of the brain are active when forming moral judgements, or reasoning about false beliefs. These fields and others rest on fMRI brain scans. Volunteers receive short story prompts with information about outcomes or character intentions and, while their brains are scanned, have to judge what other characters ought to know or do.

“As a consequence,” writes Tobin in her new book Elements of Surprise, “much research that is putatively about how people think about other humans… tells us just as much, if not more, about how study participants think about characters in constructed narratives.”

Tobin is weary of economists banging on about the “flaws” in our cognitive apparatus. “The science on this phenomenon has tended to focus on cataloguing errors people make in solving problems or making decisions,” writes Tobin, “but… its place and status in storytelling, sense-making, and aesthetic pleasure deserve much more attention.”

Tobin shows how two major “flaws” in our thinking are in fact the necessary and desirable consequence of our capacity for social interaction. First, we wildly underestimate our differences. We model each other in our heads and have to assume this model is accurate, even while we’re revising it, moment to moment. At the same time, we have to assume no one else has any problem performing this task – which is why we’re continually mortified to discover other people have no idea who we really are.

Similarly, we find it hard to model the mental states of people, including our past selves, who know less about something than we do. This is largely because we forget how we came to that privileged knowledge.

There are implications for autism, too. It is, Tobin says, unlikely that many people with autism “lack” an understanding that others think differently – known as “theory of mind”. It is more likely they have difficulty inhibiting their knowledge when modelling others’ mental states.

And what about Emma, titular heroine of Jane Austen’s novel? She “is all too ready to presume that her intentions are unambiguous to others and has great difficulty imagining, once she has arrived at an interpretation of events, that others might believe something different”, says Tobin. Austen’s brilliance was to fashion a plot in which Emma experiences revelations that confront her with the consequences of her “cursed thinking” – the curse of knowledge, a cognitive bias that makes us assume any person with whom we communicate has the background knowledge to understand what is being said.

Just as we assume others know what we’re thinking, we assume our past selves thought as we do now. Detective stories exploit this foible. Mildred Pierce, Michael Curtiz’s 1945 film, begins at the end, as it were, depicting the story’s climactic murder. We are fairly certain we know who did it, but as we flash back to the past and work forward to the present, we find that we have misinterpreted everything.

I confess I was underwhelmed on finishing this excellent book. But then I remembered Sherlock Holmes’s complaint (mentioned by Tobin) that once he reveals the reasoning behind his deductions, people are no longer impressed by his singular skill. Tobin reveals valuable truths about the stories we tell to entertain each other, and those we tell ourselves to get by, and how they are related. Like any good magic trick, it is obvious once it has been explained.

Shell game

Reading Catching Thunder by Eskil Engdal and Kjetil Saeter for the Daily Telegraph, 1 April 2018

On March 23 1969 the shipbuilders of Ulsteinvik in Norway launched a stern trawler called the Vesturvon. It was their most advanced factory trawler yet, beautiful as these ships go, and big: outfitted for a crew of 47.

In 2000, after many adventures, the ship suffered a midlife crisis. Denied a renewal of their usual fishing quota, its owners partnered up with a Russian company and sent the ship, renamed the Rubin, to ply the Barents Sea. There, in the words of Eskil Engdal and Kjetil Saeter, two Norwegian journalists, the ship slipped ineluctably into “a maelstrom of shell corporations, bizarre ships registers and shady expeditions”.

In the years that followed, the ship changed its name often: Kuko, Wuhan No 4, Ming No 5, Batu 1. Its crew had to look over the side of the ship at the name plate, attached that morning to the stern, to find out which ship they were on. Flags from countries such as Equatorial Guinea, Mauritania and Panama were kept in a cardboard box.

It fell to a Chilean, Luis Cataldo, to be captaining the ship (then named the Thunder) on December 17 2014 – the day when, off Antarctica’s windy Banzare Bank, in the middle of an illegal fishing expedition, it was spotted by the Bob Barker, a craft belonging to the Sea Shepherd Conservation Society. The Bob Barker’s captain got on the radio and told Cataldo his vessel was wanted by Interpol and should follow him to port.

Cataldo retorted that he wasn’t inclined to obey a ship whose black flag bore a skull (albeit with a shepherd’s crook and a trident instead of crossbones). And it is fair to say that the Sea Shepherd organisation, whose mission “is to end the destruction of habitat and slaughter of wildlife in the world’s oceans”, has enjoyed a fairly anomalous relationship with nautical authority since its foundation in 1977.

So began the world’s longest sea chase to date, recorded with flair and precision in Catching Thunder, Diane Oatley’s effortlessly noir translation of Engdal and Saeter’s 2016 Norwegian bestseller. The book promises all the pleasures of a crime novel, but it is after bigger game: let’s call it the unremitting weirdness of the real world.

This is a book about fish – and also a chase narrative in which the protagonists spend most of the time sailing in circles and sending each other passive-aggressive radio messages. (“You are worried about the crew, and now all the Indonesians are nervous,” Cataldo complains. “One person attempted to take his life. Over.”)

It’s about attempting to regulate the movement of lumps of steel weighing more than 650 tons which, if they want, can thug their way out of any harbour whether they’ve been “impounded” or not, and it’s about the sheer slow-mo clumsiness of ship-handling.

At one point the Thunder “moves in circles, directing a searchlight on the Bob Barker, then suddenly stops and drifts for a few hours. Then the mate puts the ship in motion again, heading for a point in the middle of nowhere.” There’s no Hollywood hot-headedness here. The violence is rare, veiled and, when it comes, unstoppable and ice-cold.

The Thunder was wanted for hunting the Patagonian toothfish, a protected species of “petulant and repulsive” giants that can grow to a weight of 120kg and live more than 50 years. When the Bob Barker caught sight of it in the Southern Ocean, no one could have guessed that their chase would last for 110 days.

Stoked by Sea Shepherd’s YouTube campaign, the pursuit became a cause célèbre and the Bob Barker’s hardened crew were prepared for the long game: “As long as the two ships are operating without using the engines, it is only the generators that are consuming fuel. If it continues like this, they can be at sea for two years.”

Engdal and Saeter must keep their human story going while doing justice to the scale of their subject. At the start, their subject is the fishing industry, in which a cargo of frozen toothfish can go “on a circumnavigation of the world from the Southern Ocean to Thailand, then around the entire African continent, past the Horn of Africa, across the Indian Ocean and into the South China Sea before ending up in Vietnam.” But they also have something to say about the planet.

Suppose you catch fish for a living. If you saw that your catch was dwindling, you might limit your days at sea to ensure that you can continue to fish that species in future years. This isn’t “ecological thinking”; it’s simple self-interest. In the fishing industry, though, self-interest works differently.

And in a chapter about Chimbote in Peru, the authors hit upon a striking metonym for the global mechanisms denuding our seas.

The Peruvian anchovy boom of the late 2000s turned Chimbote from a sleepy village into Peru’s busiest fishing port. Fifty factories exuded a stench of rotten fish, and pumped wastewater and fish blood into the ocean, to the point where the local ecosystem was so damaged that an ordinary El Niño event finished off the anchovy stocks for good.

The point is this: fishing companies are not fisherfolk. They are companies: lumps of capital incorporated to maximise returns on investment. It makes no sense for an extraction company to limit its consumption of a resource.

Once stocks have been reduced to nothing, the company simply reinvests its capital in some other, more available resource. You can put rules in place to limit the rapaciousness of the enterprise, but the rapaciousness is baked in. Rare resources are doomed to extinction eventually because the rarer a resource is, the more expensive it is, and the more incentive there is to trade in it. This is why, past a certain point, rare stocks hurtle towards zero.

Politically savvy readers will find, between the lines, an account here of how increasingly desperate governments are coming to a rapprochement with the Sea Shepherd organisation, whose self-consciously piratical founder Paul Watson declared in 1988: “We hold the position that the laws of ecology take precedence over the laws designed by nation states to protect corporate interests.”

Watson’s position seems legally extreme. But 30 years on, with an ecological catastrophe looming, many maritime law enforcers hardly care. Robbed of income and ecological capital, some countries are getting gnarly. By 2016, Indonesian authorities had sunk 170 foreign fishing vessels in less than two years. They would like to sink many more: according to this daunting thriller, 5,000 illegal fishing vessels ply their waters at any one time.

Pushing the boundaries

Rounding up some cosmological pop-sci for New Scientist, 24 March 2018

In 1872, the physicist Ludwig Boltzmann developed a theory of gases that confirmed the second law of thermodynamics, more or less proved the existence of atoms and established the asymmetry of time. He went on to describe temperature, and how it governed chemical change. Yet in 1906, this extraordinary man killed himself.

Boltzmann is the kindly if gloomy spirit hovering over Peter Atkins’s new book, Conjuring the Universe: The origins of the laws of nature. It is a cheerful, often self-deprecating account of how most physical laws can be unpacked from virtually nothing, and how some constants (the peculiarly precise and finite speed of light, for example) are not nearly as arbitrary as they sound.

Atkins dreams of a final theory of everything to explain a more-or-less clockwork universe. But rather than wave his hands about, he prefers to clarify what can be clarified, clear his readers’ minds of any pre-existing muddles or misinterpretations, and leave them, 168 succinct pages later, with a rather charming image of him tearing his hair out over the fact that the universe did not, after all, pop out of nothing.

It is thanks to Atkins that the ideas Boltzmann pioneered, at least in essence, can be grasped by us poor schlubs. Popular science writing has always been vital to science’s development. We ignore it at our peril and we owe it to ourselves and to those chipping away at the coalface of research to hold popular accounts of their work to the highest standards.

Enter Brian Clegg. He is such a prolific writer of popular science, it is easy to forget how good he is. Icon Books is keeping him busy writing short, sweet accounts for its Hot Science series. The latest, by Clegg, is Gravitational Waves: How Einstein’s spacetime ripples reveal the secrets of the universe.

Clegg delivers an impressive double punch: he transforms a frustrating, century-long tale of disappointment into a gripping human drama, affording us a vivid glimpse into the uncanny, depersonalised and sometimes downright demoralising operations of big science. And readers still come away wishing they were physicists.

Less polished, and at times uncomfortably unctuous, Catching Stardust: Comets, asteroids and the birth of the solar system is nevertheless a promising debut from space scientist and commentator Natalie Starkey. Her description of how, from the most indirect evidence, a coherent history of our solar system was assembled, is astonishing, as are the details of the mind-bogglingly complex Rosetta mission to rendezvous with comet 67P/Churyumov-Gerasimenko – a mission in which she was directly involved.

It is possible to live one’s whole life within the realms of science and discovery. Plenty of us do. So it is always disconcerting to be reminded that longer-lasting civilisations than ours have done very well without science, or even formal logic. And who are we to say they afforded less happiness and fulfilment than our own?

Nor can we tut-tut at the way ignorant people today ride science’s coat-tails – not now antibiotics are failing and the sixth extinction is chewing its way through the food chain.

Physicists, especially, find such thinking well-nigh unbearable, and Alan Lightman speaks for them in his memoir Searching for Stars on an Island in Maine. He wants science to rule the physical realm and spirituality to rule “everything else”. Lightman is an elegant, sensitive writer, and he has written a delightful book about one man’s attempt to hold the world in his head.

But he is wrong. Human culture is so rich, diverse, engaging and significant, it is more than possible for people who don’t give a fig for science or even rational thinking to live lives that are meaningful to themselves and valuable to the rest of us.

“Consilience” was biologist E.O. Wilson’s word for the much-longed-for marriage of all branches of human enquiry. Lightman’s inadvertent achievement is to show that the task is more than just difficult: it is absurd.

Writing about knowing

Reading John Brockman’s anthology This Idea Is Brilliant: Lost, overlooked, and underappreciated scientific concepts everyone should know for New Scientist, 24 February 2018 

Literary agent and provocateur John Brockman has turned popular science into a sort of modern shamanism, packaged non-fiction into gobbets of smart thinking, made stars of unlikely writers and continues to direct, deepen and contribute to some of the most hotly contested conversations in civic life.

This Idea Is Brilliant is the latest of Brockman’s annual anthologies drawn from edge.org, his website and shop window. It is one of the stronger books in the series. It is also one of the more troubling, since it addresses, informs and entertains a public that has recently become extraordinarily confused about truth and falsehood, fact and knowledge.

Edge.org’s purpose has always been to collide scientists, business people and public intellectuals in fruitful ways. This year, the mix in the anthology leans towards the cognitive sciences, philosophy and the “freakonomic” end of the non-fiction bookshelf. It is a good time to return to basics: to ask how we know what we know, what role rationality plays in knowing, what tech does to help and hinder that knowing, and, frankly, whether in our hunger to democratise knowledge we have built a primrose-lined digital path straight to post-truth perdition.

Many contributors, biting the bullet, reckon so. Measuring the decline in the art of conversation against the rise of social media, anthropologist Nina Jablonski fears that “people are opting for leaner modes of communication because they’ve been socialized inadequately in richer ones”.

Meanwhile, an applied mathematician, Coco Krumme, turning the pages of Jorge Luis Borges’s short story The Lottery in Babylon, conceptualises the way our relationship with local and national government is being automated to the point where fixing wayward algorithms involves the application of yet more algorithms. In this way, civic life becomes opaque and arbitrary: a lottery. “To combat digital distraction, they’d throttle email on Sundays and build apps for meditation,” Krumme writes. “Instead of recommender systems that reveal what you most want to hear, they’d inject a set of countervailing views. The irony is that these manufactured gestures only intensify the hold of a Babylonian lottery.”

Of course, IT wasn’t created on a whim. It is a cognitive prosthesis for significant shortfalls in the way we think. Psychologist Adam Waytz cuts to the heart of this in his essay “The illusion of explanatory depth” – a phrase describing how people “feel they understand the world with far greater detail, coherence and depth than they really do”.

Humility is a watchword here. If our thinking has holes in it, if we forget, misconstrue, misinterpret or persist in false belief, if we care more for the social consequences of our beliefs than their accuracy, and if we suppress our appetite for innovation in times of crisis (all subjects of separate essays here), there are consequences. Why on earth would we imagine we can build machines that don’t reflect our own biases, or don’t – in a ham-fisted effort to correct for them – create ones of their own we can barely spot, let alone fix?

Neuroscientist Sam Harris is one of several here who, searching for a solution to the “truthiness” crisis, simply appeals to basic decency. We must, he argues, be willing to be seen to change our minds: “Wherever we look, we find otherwise sane men and women making extraordinary efforts to avoid changing [them].”

He has a point. Though our cognitive biases, shortfalls and the like make us less than ideal rational agents, evolution has equipped us with social capacities that, smartly handled, run rings round the “cleverest” algorithm.

Let psychologist Abigail Marsh have the last word: “We have our flaws… but we can also claim to be the species shaped by evolution to possess the most open hearts and the greatest proclivity for caring on Earth.” This may, when all’s said and done, have to be enough.

What’s the Russian for Eastbourne?

Reading Davies and Kent’s Red Atlas for the Telegraph, 13 January 2018

This is a journey through an exotic world conjured into being by the Military Topographic Directorate of the General Staff of the Soviet Army. Tasked by Stalin during the Second World War to accurately and secretly map the Soviet Union, its Eastern European allies, its Western adversaries, and the rest of the world, the Directorate embarked on the largest mapping effort in history. Too many maps have been lost for us to be entirely sure what coverage was attained, but it must have been massive. Considering the UK alone, if there are detailed street plans of the market town of Gainsborough in Lincolnshire, we can be reasonably sure there were once maps of Carlisle and Hull.

From internal evidence (serial numbers and such-like) we know there were well in excess of 1 million maps produced. Only a few survive today, and the best preserved of them, the most beautiful, the most peculiar, the most chilling, are reproduced here. The accompanying text, by cartographers John Davies and Alexander Kent, is rich in detail, and it needs to be. Soviet intelligence maps boast a level of detail that puts our own handsome Ordnance Survey to shame — a point the authors demonstrate by putting OS maps alongside their Soviet counterparts. You can not only see my road from one of these Soviet maps: you can see how tall the surrounding buildings are. You can read the height of a nearby bridge above water, its dimensions, its load capacity, and what it is made of. As for the river, I now know its width, its flow, its depth, and whether it has a viscous bed (it hasn’t).

This is not a violent tale. There is little evidence that the mapmakers had invasion on their minds. What would have been the point? By the time Russian tanks were rolling down the A14 (Cambridge, UK, 1:10,000 City Plan of 1998), nuclear exchanges would have obliterated most of these exquisite details, carefully garnered from aerial reconnaissance, archival research, Zenit satellite footage and, yes, wonderfully, nondescript men dawdling outside factory gates and police stations. Maybe the maps were for them and their successors.

Placenames are rendered phonetically: “HEJSTYNZ” for Hastings and “ISBON” for Eastbourne on one Polish map. This would have been useful if you were asking directions, but useless if you were in a tank speeding through hostile territory, trying to read the road signs.

The Directorate’s city maps came with guides. Some of the details recorded here are sinister enough: Cambridgeshire clay “becomes waterlogged and severely impedes off-road movement of mechanized transport.” Its high hedges “significantly impede observation of the countryside”. But what are we to make of the same guide’s wistful description of the city itself? “The bank of the river Cam is lined with ivy-clad buildings of the colleges of the university, with ridged roofs and turrets… The lodging-houses with their lecture-halls are reminiscent of monasteries or ancient castles.”

Though deployed on an industrial scale, the Soviet mapmakers were artisans, who tried very hard to understand a world they would never have any opportunity to see. They did a tremendous job: why else would their maps have informed the US invasion of Afghanistan, water resource management in Armenia, or oil exploration in India? Now and again their cultural assumptions led them down strange paths. Ecclesiastical buildings lost all significance in the Republic of Ireland, whose landscape became dotted with disused watermills. In England, Beeching’s cull of the railways was incomprehensible to Russian mapmakers, for whom railways were engines of revolution. A 1971 sheet of Leeds not only shows lines closed in the 1960s; it also depicts and names the Wellington terminus station, adjacent to City station, which had closed in 1938.

The story of the Soviets’ mapping and remapping, particularly of the UK, is an eerie one, and though their effort seems impossibly Heath-Robinson now, the reader is never tempted into complacency. Cartography remains an ambiguous art. For evidence, go to Google Maps and type in “Burghfield”. It’s a village near Reading, home to a decommissioned research station of the Atomic Weapons Establishment. Interestingly, the authors claim that though the site is visible in detail through Google Earth, for some reason Google Maps has left the site blank and unlabelled.

This claim is only partly true. The label is there, though it appears at only the smallest available scale of the map. Add the word “atomic” to your search string, and you are taken to an image that, if not particularly informative, is still adequate for a visit.

Two thoughts followed hard on this non-discovery of mine. First, that I should let this go: my idea of “adequate” mapping is likely to be a lot less rigorous than the authors’; anyway it is more than possible that this corner of Google Maps has been updated since the book went to press. Second, that my idle fact-checking placed me in a new world — or at any rate, one barely out of its adolescence. (Keyhole, the company responsible for what became Google Earth, was founded in 2001.)

Today anyone with a broadband connection can drill down to information once considered the prerogative of government analysts. Visit Google Earth’s Russia, and you can find traces of the forest belts planted as part of Stalin’s Great Transformation of Nature in 1948. You can see how industrial combines worked their way up the Volga, building hydroelectric plants that drowned an area the size of France with unplanned swamps. There’s some chauvinistic glee to be had from this, but in truth, intelligence has become simply another digital commodity: stuff to be mined, filleted, mashed up, repackaged. Open-source intelligence: OSINT. There are conferences about it. Workshops. Artworks.

The Red Atlas is not about endings. It is about beginnings. The Cold War, far from being over, has simply subsumed our civic life. Everyone is in the intelligence business now.

Future by design

The Second Digital Turn: Design beyond intelligence
Mario Carpo
MIT Press

The Polish futurist Stanisław Lem once wrote: “A scientist wants an algorithm, whereas the technologist is more like a gardener who plants a tree, picks apples, and is not bothered about ‘how the tree did it’.”

For Lem, the future belongs to technologists, not scientists. If Mario Carpo is right and the “second digital turn” described in his extraordinary new book comes to term, then Lem’s playful, “imitological” future, where analysis must be abandoned in favour of creative activity, will be upon us in a decade or two. Never mind our human practice of science: science itself will no longer exist, and our cultural life will consist of storytelling, gesture and species of magical thinking.

Carpo studies architecture. Five years ago, he edited The Digital Turn in Architecture 1992-2012, a book capturing the curvilinear, parametric spirit of digital architecture. Think Frank Gehry’s Guggenheim Museum in Bilbao – a sort of deconstructed metal fish head – and you are halfway there.

Such is the rate of change that five years later, Carpo has had to write another book (the urgency of his prose is palpable and thrilling) about an entirely different kind of design. This is generative design powered by artificial intelligence, which can thug through digital simulations (effectively, breaking things on screen until something turns up that can’t be broken) and arrive at solutions that humans and their science cannot better.
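
To see what that thugging amounts to, here is a minimal generate-and-test sketch in Python – my own caricature, not Carpo’s or any real design system’s, with the toy beam model and all its numbers invented for illustration:

```python
import random

def breaks(design, load=10.0):
    """Toy stress test: a rectangular beam (depth, width) 'breaks'
    if its section modulus cannot carry the load. Illustrative only."""
    depth, width = design
    return width * depth ** 2 / 6 < load

def generate_and_test(trials=100_000):
    """Propose random designs, throw away the ones that break on
    screen, and keep the lightest survivor."""
    best_design, best_weight = None, float("inf")
    for _ in range(trials):
        design = (random.uniform(0.1, 5.0), random.uniform(0.1, 5.0))
        if breaks(design):
            continue  # broke in simulation; discard and move on
        weight = design[0] * design[1]  # proxy for material used
        if weight < best_weight:
            best_design, best_weight = design, weight
    return best_design, best_weight

print(generate_and_test())
```

The machine never needs to know why the survivor works, only that it does – which is Carpo’s point.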

This kind of design has no need of casts, stamps, moulds or dies. No costs need be amortised. Everything can be a one-off at the same unit cost.

Beyond the built environment, it is the spiritual consequences of this shift that matter, for by its light Carpo shows all cultural history to be a gargantuan exercise in information compression.

Unlike their AIs, human beings cannot hold much information at any one time. Hence, for example, the Roman alphabet: a marvel of compression, approximating all possible vocalisations with just 26 characters. Now that we can type and distribute any glyph at the touch of a button, is it any wonder emojis are supplementing our tidy 26-letter communications?

Science itself is simply a series of computational strategies to draw the maximum inference from the smallest number of precedents. Reduce the world to rules and there is no need for those precedents. We have done this for so long and so well that some of us have forgotten that “rules” aren’t “real” rules; they are just generalisations.

AIs simply gather or model as many precedents as they wish. Left to collect data according to their own strengths, they are, Carpo says, “postscientific”. They aren’t doing science we recognise: they are just thugging.

Carpo foresees the “separation of the minds of the thinkers from the tools of computation”. But in that alienation, I think, lies our reason to go on. Because humans cannot handle very much data at any one time, sorting is vital, which means we have to assign meaning. Sorting is therefore the process whereby we turn data into knowledge. Our inability to do what computers can do has a name already: consciousness.

Carpo’s succinctly argued future has us return to a tradition of orality and gesture, where these forms of communication need no reduction or compression since our tech will be able to record, notate, transmit, process and search them, making all cultural technologies developed to handle these tasks “equally unnecessary”. This will be neither advance nor regression. Evolution, remember, is maddeningly valueless.

Could we ever have evolved into Spock-like hyper-rationality? I doubt it. Carpo’s sincerity, wit and mischief show that Prospero is more the human style. Or Peter Pan, who observed: “You can have anything in life, if you will sacrifice everything else for it.”


Stalin’s meteorologist

I reviewed Olivier Rolin’s new book for The Daily Telegraph

750,000 shot. This figure is exact; the Soviet secret police, the NKVD, kept meticulous records relating to their activities during Stalin’s Great Purge. How is anyone to encompass in words this horror, barely 80 years old? Some writers find the one to stand for the all: an Everyman to focus the reader’s horror and pity. Olivier Rolin found his when he was shown drawings and watercolours made by Alexey Wangenheim, an inmate of the Solovki prison camp in Russia’s Arctic north. He made them for his daughter, and they are reproduced as touching miniatures in this slim, devastating book, part travelogue, part transliteration of Wangenheim’s few letters home.

While many undesirables were labelled by national or racial identity, a huge number were betrayed by their accomplishments. Before he was denounced by a jealous colleague, Wangenheim ran a pan-Soviet weather service. He was not an exceptional scientist: more an efficient bureaucrat. He cannot even be relied on “to give colourful descriptions of the glories of nature” before setting sail, with over a thousand others, for a secret destination, not far outside the town of Medvezhegorsk. There, some time around October 1937, a single NKVD officer dispatched the lot of them, though he had help with the cudgelling, the transport, the grave-digging. While he went to work with his Nagant pistol, others were washing blood and brains off the trucks and tarpaulins.

Right to the bitter end, Wangenheim is a boring correspondent, always banging on about the Party. “My faith in the Soviet authorities has in no way been shaken,” he says. “Has Comrade Stalin received my letter?” And again: “I have battled in my heart not to allow myself to think ill of the Soviet authorities or of the leaders”. Rolin makes gold of such monotony, exploiting the degree to which French lends itself to lists and repeated figures, and his translator Ros Schwartz has rendered these into English that is not just palatable, but often thrilling and always freighted with dread.

When Wangenheim is not reassuring his wife about the Bolshevik project, he is making mosaics out of stone chippings and brick dust: meticulous little portraits of — of all people — Stalin. Rolin openly struggles to understand his subject’s motivation: “In any case, blinkeredness or pathetic cunning, there is something sinister about seeing this man, this scholar, making of his own volition the portrait of the man in whose name he is being crucified.”

That Rolin finds a mystery here is of a piece with his awkward nostalgia for the promise of the Bolshevik revolution. Hovering like a miasma over some pages (though Rolin is too smart to succumb utterly) is that hoary old meme, “the revolution betrayed”. So let us be clear: the revolution was not betrayed. The revolution panned out exactly the way it was always going to pan out, whether Stalin was at the helm or not. It is also exactly the way the French revolution panned out, and for exactly the same reason.

Both French and Socialist revolutions sought to reinvent politics to reflect the imminent unification of all branches of human knowledge, and consequently, their radical simplification. By Marx’s day this idea, under the label “scientism”, had become yawningly conventional: also wrong.

Certainly by the time of the Bolshevik revolution, scientists better than Wangenheim — physicists, most famously — knew that the universe would not brook such simplification, neither under Marx nor under any other totalising system. Rationality remains a superb tool with which to investigate the world. But as a working model of the world, guiding political action, it leads only to terror.

To understand Wangenheim’s mosaic-making, we have to look past his work, diligently centralising and simplifying his own meteorological science to the point where a jealous colleague, deprived of his sinecure, denounced him. We need to look at the human consequences of this attempt at scientific government, and particularly at what radical simplification does to the human psyche. To order and simplify life is to bureaucratise it, and to bureaucratise human beings is to make them behave like machines. Rolin says Wangenheim clung to the party for the sake of his own sanity. I don’t doubt it. But to cling to any human institution, or to any such removed and fortressed individual, is the act, not of a suffering human being but of a malfunctioning machine.

At the end of his 1940 film The Great Dictator, Charles Chaplin, dressed in Adolf Hitler’s motley, broke the fourth wall to declare war on the “machine men with machine minds” that were then marching roughshod across his world. Regardless of Hitler’s defeat, this was a war we assuredly lost. To be sure the bureaucratic infection, like all infections, has adapted to ensure its own survival, and it is not so virulent as it was. The pleasures of bureaucracy are more evident now; its damages, though still very real, are less evident. “Disruption” has replaced the Purge. The Twitter user has replaced the police informant.

But let us be explicit here, where Rolin has been admirably artful and quietly insidious: the pleasures of bureaucracy in both eras are exactly the same. Wangenheim’s murderers lived in a world that had been made radically simple for them. In Utopia, all you have to do is your job (though if you don’t, Utopia falls apart). These men weren’t deprived of humanity: they were relieved of it. They experienced exactly what you or I feel when the burden of life’s ambiguities is lifted of a sudden from our shoulders: contentment, bordering on joy.

A kind of “symbol knitting”

Reviewing new books by Paul Lockhart and Ian Stewart for The Spectator 

It’s odd, when you think about it, that mathematics ever got going. We have no innate genius for numbers. Drop five stones on the ground, and most of us will see five stones without counting. Six stones are a challenge. Presented with seven stones, we will have to start grouping, tallying and making patterns.

This is arithmetic, ‘a kind of “symbol knitting”’ according to the maths researcher and sometime teacher Paul Lockhart, whose Arithmetic explains how counting systems evolved to facilitate communication and trade, and ended up watering (by no very obvious route) the metaphysical gardens of mathematics.

Lockhart shamelessly (and successfully) supplements the archaeological record with invented number systems of his own. His three fictitious early peoples have decided to group numbers differently: in fours, in fives, and in sevens. Now watch as they try to communicate. It’s a charming conceit.
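
A hedged sketch of the conceit (my Python, not Lockhart’s notation): the same heap of seventeen pebbles, grouped in fours, fives and sevens, yields three mutually baffling strings of digits.

```python
def to_base(n, base):
    """Render a non-negative integer as a digit string in `base`."""
    digits = []
    while True:
        n, remainder = divmod(n, base)
        digits.append(str(remainder))
        if n == 0:
            return "".join(reversed(digits))

# The same heap of 17 pebbles, as each fictitious tribe would write it:
for base in (4, 5, 7):
    print(f"grouped in {base}s: {to_base(17, base)}")
# grouped in 4s: 101; in 5s: 32; in 7s: 23
```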

Arithmetic is supposed to be easy, acquired through play and practice rather than through the kind of pseudo-theoretical ponderings that blighted my 1970s-era state education. Lockhart has a lot of time for Roman numerals, an effortlessly simple base-ten system which features subgroup symbols like V (5), L (50) and D (500) to smooth things along. From glorified tallying systems like this, it’s but a short leap to the abacus.

It took an eye-watering six centuries for Hindu-Arabic numbers to catch on in Europe (via Fibonacci’s Liber Abaci of 1202). For most of us, abandoning intuitive tally marks and bead positions for a set of nine exotic squiggles and a dot (the forerunner of zero) is a lot of cost for an impossibly distant benefit. ‘You can get good at it if you want to,’ says Lockhart, in a fit of under-selling, ‘but it is no big deal either way.’

It took another four centuries for calculation to become a career, as sea-going powers of the late 18th century wrestled with the problems of navigation. In an effort to improve the accuracy of their logarithmic tables, French mathematicians broke the necessary calculations down into simple steps involving only addition and subtraction, assigning each step to human ‘computers’.
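
The trick behind those ‘computers’ – the method of differences, later mechanised in Babbage’s difference engine – can be sketched: seed a table with a polynomial’s initial finite differences, and every further entry follows by addition alone. A minimal illustration, with x² standing in for the log-approximating polynomials the teams actually tabulated:

```python
def tabulate(f, start, step, count, degree):
    """Tabulate a degree-`degree` polynomial using only addition,
    once an initial column of finite differences has been seeded."""
    seed = [f(start + i * step) for i in range(degree + 1)]
    columns = [seed[:]]
    for _ in range(degree):
        prev = columns[-1]
        columns.append([b - a for a, b in zip(prev, prev[1:])])
    state = [col[0] for col in columns]  # leading entry of each column
    values = []
    for _ in range(count):
        values.append(state[0])
        for i in range(degree):       # one row of the table =
            state[i] += state[i + 1]  # `degree` additions, nothing more
    return values

print(tabulate(lambda x: x * x, 0, 1, 8, 2))
# [0, 1, 4, 9, 16, 25, 36, 49] – the squares, by addition alone
```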

What was there about navigation that involved such effortful calculation? Blame a round earth: the moment we pass from figures bounded by straight lines or flat surfaces we run slap into all the problems of continuity and the mazes of irrational numbers. Pi, the ratio of a circle’s circumference to its diameter, is ugly enough in base 10 (3.14159…). But calculate pi in any base, and it churns out numbers forever. It cannot be expressed as a ratio of two whole numbers. Mathematics began when practical thinkers like Archimedes decided to ignore naysayers like Zeno (whose paradoxes were meant to bury mathematics, not to praise it) and deal with nonsenses like pi and the square root of 2.
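
The any-base claim is easy to check for yourself. A rough sketch (floating point, so only the first dozen or so digits are trustworthy):

```python
import math

def fractional_digits(x, base, count):
    """First `count` digits of x's fractional part in `base`."""
    x -= int(x)
    digits = []
    for _ in range(count):
        x *= base
        digit = int(x)
        digits.append(digit)
        x -= digit
    return digits

for base in (10, 2, 7):
    print(base, fractional_digits(math.pi, base, 12))
# In every base the expansion runs on without settling into a
# repeating block: pi is irrational, so no ratio of whole numbers
# (and hence no terminating or repeating expansion) can capture it.
```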

How do such monstrosities yield such sensible results? Because mathematics is magical. Deal with it.

Ian Stewart deals with it rather well in Significant Figures, his hagiographical compendium of 25 great mathematicians’ lives. It’s easy to quibble. One of the criteria for Stewart’s selection was, he tells us, diversity. Like everybody else, he wants to have written Tom Stoppard’s Arcadia, championing (if necessary, inventing) some unsung heroine to enliven a male-dominated field. So he relegates Charles Babbage to Ada King’s little helper, then repents by quoting the opinion of Babbage’s biographer Anthony Hyman (perfectly justified, so far as I know) that ‘there is not a scrap of evidence that Ada ever attempted original mathematical work’. Well, that’s fashion for you.

In general, Stewart is the least modish of writers, delivering new scholarship on ancient Chinese and Indian mathematics to supplement a well-rehearsed body of knowledge about the western tradition. A prolific writer himself, Stewart is good at identifying the audiences for mathematics at different periods. The first recognisable algebra book, by Al-Khwarizmi, written in the first half of the 9th century, was commissioned for a popular audience. Western examples of popular form include Cardano’s Book on Games of Chance, published 1663. It was the discipline’s first foray into probability.

As a subject for writers, mathematics sits somewhere between physics and classical music. Like physics, it requires that readers acquire a theoretical minimum, without which nothing will make much sense. (Unmathematical readers should not start with Significant Figures; it is far too compressed.) At the same time, like classical music, mathematics will not stand too much radical reinterpretation, so that biography ends up playing a disconcertingly large role in the scholarship.

In his potted biographies Stewart supplements but makes no attempt to supersede Eric Temple Bell, whose history Men of Mathematics of 1937 remains canonical. This is wise: you wouldn’t remake Civilisation by ignoring Kenneth Clark. At the same time, one can’t help regretting the degree to which an English mathematician and science fiction writer born in 1945 has had his limits set by the work of a Scottish-born mathematician and science fiction writer born in 1883. It can’t be helped. Mathematical results are not superseded. When the ancient Babylonians worked out how to solve quadratic equations, their result never became obsolete.

This is, I suspect, why Lockhart and Stewart have each ended up writing good books about territories adjacent to the meat of mathematics. The difference is that Lockhart did this deliberately. Stewart simply ran out of room.

Stanisław Lem: The man with the future inside him

From the 1950s, science fiction writer Stanisław Lem began firing out prescient explorations of our present and far beyond. His vision is proving unparalleled.
For New Scientist, 16 November 2016

“Posted everywhere on street corners, the idiot irresponsibles twitter supersonic approval, repeating slogans, giggling, dancing…” So it goes in William Burroughs’s novel The Soft Machine (1961). Did he predict social media? If so, he joins a large and mostly deplorable crowd of lucky guessers. Did you know that in Robert Heinlein’s 1948 story Space Cadet, he invented microwave food? Do you care?

There’s more to futurology than guesswork, of course, and not all predictions are facile. Writing in the 1950s, Ray Bradbury predicted earbud headphones and elevator muzak, and foresaw the creeping eeriness of today’s media-saturated shopping mall culture. But even Bradbury’s guesses – almost everyone’s guesses, in fact – tended to exaggerate the contemporary moment. More TV! More suburbia! Videophones and cars with no need of roads. The powerful, topical visions of writers like Frederik Pohl and Arthur C. Clarke are visions of what the world would be like if the 1950s (the 1960s, the 1970s…) went on forever.

And that is why Stanisław Lem, the Polish satirist, essayist, science fiction writer and futurologist, had no time for them. “Meaningful prediction,” he wrote, “does not lie in serving up the present larded with startling improvements or revelations in lieu of the future.” He wanted more: to grasp the human adventure in all its promise, tragedy and grandeur. He devised whole new chapters of the human story, not happy endings.

And, as far as I can tell, Lem got everything – everything – right. Less than a year before Russia and the US played their game of nuclear chicken over Cuba, he nailed the rational madness of cold-war policy in his book Memoirs Found in a Bathtub (1961). And while his contemporaries were churning out dystopias in the Orwellian mould, supposing that information would be tightly controlled in the future, Lem was conjuring with the internet (which did not then exist), and imagining futures in which important facts are carried away on a flood of falsehoods, and our civic freedoms along with them.

Twenty years before the term “virtual reality” appeared, Lem was already writing about its likely educational and cultural effects. He also coined a better name for it: “phantomatics”. The books on genetic engineering passing my desk for review this year have, at best, simply reframed ethical questions Lem set out in Summa Technologiae back in 1964 (though, shockingly, the book was not translated into English until 2013).

He dreamed up all the usual nanotechnological fantasies, from spider silk space-elevator cables to catastrophic “grey goo”, decades before they entered the public consciousness. He wrote about the technological singularity – the idea that artificial superintelligence would spark runaway technological growth – before Gordon Moore had even had the chance to cook up his “law” about the exponential growth of computing power. Not every prediction was serious. Lem coined the phrase “Theory of Everything”, but only so he could point at it and laugh.

He was born on 12 September 1921 in Lwów, Poland (now Lviv in Ukraine). His abiding concern was the way people use reason as a white stick as they steer blindly through a world dominated by chance and accident. This perspective was acquired early, while he was being pressed up against a wall by the muzzle of a Nazi machine gun – just one of several narrow escapes. “The difference between life and death depended upon… whether one went to visit a friend at 1 o’clock or 20 minutes later,” he recalled.

Though Lem was a keen engineer and inventor – in school he dreamed up the differential gear and was disappointed to find it already existed – his true gift lay in understanding systems. His finest childhood invention was a complete state bureaucracy, with internal passports and an impenetrable central office.

He found the world he had been born into absurd enough to power his first novel (Hospital of the Transfiguration, 1955), and might never have turned to science fiction had he not needed to leap heavily into metaphor to evade the attentions of Stalin’s literary censors. He did not become really productive until 1956, when Poland enjoyed a post-Stalinist thaw, and in the 12 years following he wrote 17 books, among them Solaris (1961), the work for which he is best known by English speakers.

Solaris is the story of a team of distraught experts in orbit around an inscrutable and apparently sentient planet, trying to come to terms with its cruel gift-giving (it insists on “resurrecting” their dead). Solaris reflects Lem’s pessimistic attitude to the search for extraterrestrial intelligence. It’s not that alien intelligences aren’t out there, Lem says, because they almost certainly are. But they won’t be our sort of intelligences. In the struggle for control over their environment they may as easily have chosen to ignore communication as respond to it; they might have decided to live in a fantastical simulation rather than take their chances any longer in the physical realm; they may have solved the problems of their existence to the point at which they can dispense with intelligence entirely; they may be stoned out of their heads. And so on ad infinitum. The universe is so much bigger than all of us: no matter how rigorously we test our vaunted gift of reason against it, that reason is still something we made – an artefact, a crutch. As Lem made explicit in one of his last novels, Fiasco (1986), extraterrestrial versions of reason and reasonableness may look very different to our own.

Lem understood the importance of history as no other futurologist ever has. What has been learned cannot be unlearned; certain paths, once taken, cannot be retraced. Working in the chill of the cold war, Lem feared that our violent and genocidal impulses are historically constant, while our technical capacity for destruction will only grow.

Should we find a way to survive our own urge to destruction, the challenge will be to handle our success. The more complex the social machine, the more prone it will be to malfunction. In his hard-boiled postmodern detective story The Chain of Chance (1975), Lem imagines a very near future that is crossing the brink of complexity, beyond which forms of government begin to look increasingly impotent (and yes, if we’re still counting, it’s here that he makes yet another on-the-money prediction by describing the marriage of instantly accessible media and global terrorism).

Say we make it. Say we become the masters of the universe, able to shape the material world at will: what then? Eventually, our technology will take over completely from slow-moving natural selection, allowing us to re-engineer our planet and our bodies. We will no longer need to borrow from nature, and will no longer feel any need to copy it.

At the extreme limit of his futurological vision, Lem imagines us abandoning the attempt to understand our current reality in favour of building an entirely new one. Yet even then we will live in thrall to the contingencies of history and accident. In Lem’s “review” of the fictitious Professor Dobb’s book Non Serviam, Dobb, the creator, may be forced to destroy the artificial universe he has created – one full of life, beauty and intelligence – because his university can no longer afford the electricity bills. Let’s hope we’re not living in such a simulation.

Most futurologists are secret utopians: they want history to end. They want time to come to a stop; to author a happy ending. Lem was better than that. He wanted to see what was next, and what would come after that, and after that, a thousand, ten thousand years into the future. Having felt its sharp end, he knew that history was real, that the cause of problems is solutions, and that there is no perfect world, neither in our past nor in our future, assuming that we have one.

By the time he died in 2006, this acerbic, difficult, impatient writer who gave no quarter to anyone – least of all his readers – had sold close to 40 million books in more than 40 languages, and earned praise from futurologists such as Alvin Toffler of Future Shock fame, scientists from Carl Sagan to Douglas Hofstadter, and philosophers from Daniel Dennett to Nicholas Rescher.

“Our situation, I would say,” Lem once wrote, “is analogous to that of a savage who, having discovered the catapult, thought that he was already close to space travel.” Be realistic, is what this most fantastical of writers advises us. Be patient. Be as smart as you can possibly be. It’s a big world out there, and you have barely begun.