Like it or not, these are the people we need to save America

As I write, President Donald Trump has just accused his predecessor, Barack Obama, of having tapped his campaign phones during the 2016 election season. I doubt very much whether this will be a major issue by the time these words are read. Most likely it’ll have been overtaken by “FAKED” nuclear exchanges over the South China Sea.

Kleptocracy moves faster than ordinary politics. Across the media, opinion writers are having to learn how to think like crime reporters. It’s frightening, but undeniably exhilarating.

Old forms of civic engagement seem hardly relevant now. On April 22, scientists will gather across the world to protest the Trump administration’s de facto dismantling of the EPA, veiled threats directed at vaccination programmes, cuts in NASA’s climate-studies funding, and any number of other depredations. All very noble, and necessary in its way. But the rest of us might just as easily marvel at the alacrity and efficiency with which groups of suddenly vulnerable people round themselves up. The spectacle of well-educated people congratulating themselves on their effortless (because sacrifice-free) “intersectionality”, while at the same time complaining about their job security, is unlikely to prove particularly edifying, not least because the “cosmopolitanism” so necessary to science (I mean devotion to an idea or ideas bigger than the nation state) is rapidly becoming synonymous with disloyalty.

The March for Science was conceived on a Reddit forum as recently as 20 January, yet we already find ourselves operating at an entirely different level of discourse. Science illumines small, detailed corners of the world, but it’s the entire reality of that world that’s under threat now. We are dealing with an administration that, when not lost in the toils of its own mythomania, will quibble over what is even in plain sight.

Studying an existential crisis of this magnitude requires no scientific apparatus. Consider this classic bullet-through-the-foot statement from Trump aide Myron Ebell: “We will be ceding global leadership of climate policy to China,” Ebell said on 1 February. “I want to get rid of global climate policy, so why do I care who is in charge of it? I don’t care. They can take it as far as I’m concerned, and good luck to them.” [1]   

Well, really, who needs luck? Knowing what climate change is, what causes it, and what needs to be done about it is — among many other things — a recipe for printing money, which is why China, the world’s fastest-growing green economy, just invested $360 billion in renewable energy production. US industries either innovate to address the carbon problem, or they join the tobacco companies in the shadowlands of lobbying, litigation, and spin: not quite dead, but no longer really alive.

The default Trump position on America’s scientific institutions is that they have become blunt weapons in the service of an over-centralised state. We know what this would look like were it true (and how far it actually is from American reality) when we look at Stalin’s Russia. Genetics was banned in Russia in 1948 — its institutions destroyed, careers truncated, individuals sacked and internally exiled — because the findings of genetics flatly contradicted promising but badly flawed state-sponsored agricultural trials of new crop varieties. If new varieties could be generated at will (and genetics said they couldn’t), then the countryside could be industrialised overnight, speeding the development of the Soviet Union towards communism within a single generation. The state had ambitions for science and, centralising its efforts around a handful of top-heavy institutions, ensured that those ambitions drowned out the very findings it had paid to obtain.

When climate change-denying Republicans invoke the bogeyman of overpaid lackey climatologists working to a misguided, politically motivated programme, they’re not pointing at nothing. Indeed, they’re pointing at what happened to the largest and best-funded science base in history. The problem is that theirs is an argument from analogy. Which is to say, no argument at all. Flim-flam, if you prefer.

It hardly matters now. If exploited to the hilt by US industry (and let’s be honest, we all want to go out with a bang) Trump’s climate policies — his devotion to fossil fuels and rejection of the Paris agreement — are more than sufficient over four short years to set global temperatures on a course topping 2.5°C, at which point our much-maligned globalised civilisation will collapse from the sheer cost of its own insurance premiums.

Some more flim-flam while we await the End Times. So Obama was Stalin, was he? Knowledgeable in both science and in politics but unable to separate the one from the other? Then Donald Trump is Nicholas II, the last Tsar of Russia: reactionary, vain, deaf to the entreaties of his ever more carefully hand-picked advisors, until, at the last, only the Rasputin-like whispers of Steve Bannon catch his ear.

Let’s indulge this bad habit of arguing from analogy a little further, and ask this interesting question: how did Russia’s academics react against Nicholas II’s lame-duck regime? They held marches. They published pamphlets. They organised strikes. They pinned their liberal and cosmopolitan colours to their sleeves, and wrote angry letters to the papers. They achieved virtually nothing until, in 1905, they got canny. They became political. They stood up for an idea of civics rooted in the European enlightenment. They fomented a revolution. They even got the Tsar to convene a parliament, in which they were the ministers.

Ultimately, this “constitutional-democratic” movement failed. It refused to cohere; it sought compromise where it should have fomented discord, and collaborated where it should have opposed. It died of politeness. A dozen years later, its failure made Bolshevik extremism possible, and the rest, as they say, is history.

History, yes. But not destiny. The people who march in the name of science on Saturday 22 April are taking but the first step on what promises to be a long and frightening journey. We should not expect too much from them yet. But neither should we pull our punches. Like it or not, and certainly if Trump lasts into a second term, these people, thanks to their educations and well-thumbed passports, their urbane reflexes and all the advantages that leisure has bestowed on them, are best placed to be the champions of our by then virtually extinguished civic life.

We are going to have to teach these snowflakes how to fight.

Hot photography

Previewing an exhibition of photographs by Richard Mosse for New Scientist, 11 February 2017

Irish photographer Richard Mosse has come up with a novel way to inspire compassion for refugees. He presents them as drones might see them – as detailed heat maps, often shorn of expression, skin tone, and even clues to age and sex. Mosse’s subjects, captured in the Middle East, North Africa and Europe, don’t look back at us: the infrared camera renders their eyes as uniform black spaces.

Mosse has made a career out of repurposing photographic kit meant for military use. The images here show his subjects as seen, mostly at night, by a super-telephoto device designed for border and battlefield surveillance. Able to zoom in from 6 kilometres away, the camera anonymises them, making them strangely faceless even while their sweat, breath and sometimes blood circulation patterns are visible.

The results are almost closer to the nightmarish paintings of Hieronymus Bosch than the work of a documentary photographer. Making sense of them requires imagination and empathy: after all, this is how a smart weapon might see us.

Mosse came across his heat-mapping camera via a friend who worked on the BBC series Planet Earth. Legally classified as an advanced weapons system, the device is unwieldy and – with no user interface or handbook – difficult to use. But, working with cinematographer Trevor Tweeten, Mosse has managed to use it to make a 52-minute video. Incoming will wrap itself around visitors to the Curve Gallery at the Barbican arts centre in London from 15 February until 23 April.

D’Arcy Wentworth Thompson, the man who shaped biology and art

Biomorphic portrait of D'Arcy Thompson

For New Scientist, 1 February 2017

In a small, windowless corner of the University of Dundee, UK, Caroline Erolin of the Centre for Anatomy & Human Identification is ironing a fossilised pterodactyl.

At least, that’s what she appears to be doing. In fact, Erolin’s “iron” is a handheld 3D scanner, and her digitised animals are now being used as teaching aids worldwide. Her enthusiasm for the work (which she has to squeeze between research into medical visualisation and haptics) is palpable. She is not just bringing animals back from the dead, but helping to bring a great collection back to life.

In 1884, the biologist and classicist D’Arcy Wentworth Thompson began assembling a teaching and research museum in Dundee. An energetic philanthropist and a natural diplomat, Thompson had a broad network of friends and contacts – among them members of Dundee’s own whaling community, who provided him with extraordinary, then-unique specimens of Arctic fauna.

In 1956, the building that housed the University of Dundee’s natural history department was scheduled for demolition and Thompson’s collection, created as part of his work there, was dispersed. Scholars have been scrambling to recover its treasures ever since. Asked whether it can in fact be reassembled, Erolin laughs and gestures at the confines in which the surviving items are (rather artfully) squeezed. “It’s a question of space. We’re already sitting on an entire elephant skeleton. Where on earth would we put that?”

Space, though, is not really the problem, in part because advances in digitisation are changing the priorities of collections worldwide. Even more importantly, it is generally acknowledged that Thompson has outgrown Dundee: he belongs to the world. Along with Charles Darwin, Thompson, who died in 1948, ranks among the most culturally influential English-speaking biologists in history.

We have one book to thank for that: On Growth and Form, first published in 1917 – an event commemorated by an exhibition, A Sketch of the Universe: Art, science and the influence of D’Arcy Thompson, at the Edinburgh City Art Centre.

In neither the first edition nor the revised and expanded 1942 version does Thompson talk much about Darwin, and even in the 1940s he considered genetics hardly more than a distraction. Thompson was pursuing an entirely different line: the way in which physical constraints and the initial conditions of life shape the development of plants and animals.

Thompson was fascinated by tiny, single-celled shelled organisms such as foraminifera and radiolaria. He was convinced (rightly) that their wildly diverse shell shapes play no evolutionary role: they arise at random, their beauty emerging from the self-organising properties of matter, not from any biological code.

Even as evolutionary biologists like Ernst Mayr and Theodosius Dobzhansky were revealing the genetic mechanisms that constrain how living things evolve, Thompson was revealing the constraints and opportunities afforded to living things by physics and chemistry. Crudely put, genetics explains why dogs, say, look like other dogs. Thompson did something different: he glimpsed why dogs look the way they do.

Most of Thompson’s contemporaries were caught up in a genetic revolution, synthesising the seemingly incompatible demands of chromosomal genetics and Darwinian selection theory. No one ever seriously doubted Thompson’s importance – his book has always been a classic text – but at the same time, few have ever quite known what to do with him.

Portrait of D'Arcy Thompson by Darren McFarlane
Darren McFarlane, Scarus, Pomacanthus, 2012, oil on canvas. (University of Dundee Museum Services © the artist)

Thompson himself (pictured above as morphed by artist Darren McFarlane) understood the problem; he described his landmark book as “all preface”: the sketch of a territory he lacked the mathematical skill to penetrate. What the arguments in On Growth and Form really needed was a computer, and a big one at that (which makes Thompson a character who might have dropped straight out of the pages of Tom Stoppard’s play Arcadia).

Artists, on the other hand, from Henry Moore to Richard Hamilton to Eduardo Paolozzi, knew exactly what to do – and the Edinburgh exhibition combines the University of Dundee’s own collection of biomorphic, Thompsonesque art with new commissions. Several stand-out pieces are by artists who were students at Dundee’s own Duncan of Jordanstone College of Art and Design.

To its credit, the University of Dundee has not been slow to exploit the way his meticulous and beautiful work straddles art and science: it supports a dedicated art-science crossover gallery called LifeSpace, as well as offering degrees in animation, medical art and medical imaging, connecting digital processes with traditional illustration. They are making the most of On Growth and Form’s centenary, but the influence of Thompson on the university is deep and abiding.

That is just as well. For all our anxious predictions about genetic engineering, for all the hype surrounding synthetic biology, and all the many hundreds of graduate design shows stuffed with “imaginary animals”, we have barely begun to explore, let alone exploit, the spaces Thompson’s vision revealed to us.

Flying machines and chickens

for New Scientist, 21 November 2016

Artist Nick Laessing has been learning more than is entirely healthy about the internal workings of motor vehicles. He has been spurred on by the myths surrounding water-powered cars, a notion first cooked up in Dallas in 1935 that has powered conspiracy theories and investment frauds ever since.

Laessing insists I climb in among the boxes and dials that half-fill the front passenger seat of his humble VW Golf. We’re in Liverpool, where his Water Gas Car is being exhibited in No Such Thing as Gravity, an art show that curator Rob La Frenais says reveals the shape of science by mapping “where the relation between data and knowledge is uncertain”.

There’s mischief here: Agnes Meyer-Brandis’s 2010 video Studies in Applied Falling makes a seamless and hard-to-spot nonsense of astronaut David Scott’s famous experiment, in which he dropped a hammer and a feather together in the airless environment of the moon.

Nevertheless, La Frenais, who used to curate for the London-based art-science organisation Arts Catalyst, is adamant that his show is not about pseudoscience: “It is about those areas where science is still a developing body of knowledge,” he explains. “It lets people ask naive questions about science and not feel embarrassed.”

Science for the future?
Laessing’s car is a case in point. No one, however well-informed, really knows whether water-gas cars have a future. Laessing’s on-board technology isn’t going to set the markets alight, but it does work, harvesting hydrogen fuel from water through solar-powered electrolysis.

Tania Candiani’s lovingly recreated 17th-century flying machine also works – up to a point. At least, she has ridden it through the hull of a jumbo jet in free fall, and lived to film the tale. What was, centuries ago, a serious technical effort becomes, in light of subsequent knowledge, a touching and amusing entertainment.

Nearby, an installation called Heirloom stands this formula on its head. Artist Gina Czarnecki and John Hunt at the University of Liverpool’s Institute of Ageing and Chronic Disease have produced an extraordinary living artwork that promises one day to become a useful technology.

Living portraits of Czarnecki’s two daughters are being grown on glass casts from cells collected from inside their mouths. Over time, the cells will grow to the thickness of tissue paper. The surgical possibilities for custom-shaped grafts are considerable, if still far off: a correctly curved graft means a more natural fit for the recipient, with less scarring and less disfigurement.

Meanwhile, as we wait for the technology to improve, Czarnecki’s haunting portraits raise natural (though perhaps too obvious) questions about biological ownership and identity.

Artful answers
Sometimes, scientific advances throw up questions that only art can answer. Two artists in the show explore near-death experiences. Sarah Sparkes is interested in the psychology of the phenomenon, recreating a classic experiment in generating uncanny sensations. Push a lever, and a rod pokes you in the back. Fair enough. Now push the lever again, and the rod pokes you in the back a split-second later. An irresistible suspicion arises that you are communicating with a hidden presence.

Helen Pynor, by contrast, explores the way in which advances in resuscitation medicine have increased the frequency of near-death experiences. This has led her to make artworks that challenge the tricky notion of a “moment of death”.

Two pig hearts from an abattoir, kept alive by an artificially maintained flow of oxygenated blood, dominated her 2013 installation The Body is a Big Place. The End is a Distant Memory, her work on display at FACT, was inspired by a casual conversation with regeneration biologist Jochen Rink of the Max Planck Institute of Molecular Cell Biology and Genetics. During this exchange, Rink remarked that individual cells long outlive whole bodies — and that supermarket chicken would surely still contain healthy cells.

Pynor’s photographic and videographic installation runs with this notion, tracing the processes that turn a live chicken into food. Plucked chickens are dignified through portraiture, while successive images of a chicken being plucked are subtly choreographed to suggest that the animal’s life is being rewound. Once fully plucked, it resembles a fetus in an egg.

Pynor dignifies and personalises the meat on our plate without hysteria, and sends a shiver of memento mori down the back of all but the most insensitive visitor. It is the emotional highlight of a show that, though driven by the high purpose of getting non-scientists thinking scientifically, will probably be more remembered for its cleverness and its wit.

Shakespeare and the machines

Here’s a review of the RSC’s production of The Tempest with Simon Russell Beale as Prospero. Through a combination of editorial tightening and big claims (I’m saying Shakespeare’s last play was a masque, not a drama) I make it appear here as though two fully grown polar bears once starred in its production. Please no one correct me: with a following wind this nonsense could become canonical.

for New Scientist, 21 November 2016 

It should come as no surprise that the Royal Shakespeare Company’s projector- and motion-capture-enhanced new production of Shakespeare’s last play is a triumph. For one thing, The Tempest is not actually a play: it is a masque, an almost-forgotten dramatic form contrived to blow millions (literally, if you convert into today’s currency) on effects-heavy entertainment meant for royalty and a few favoured hangers-on.

James I got his two fully grown pet polar bears involved in one memorable production; modern audiences get actor Mark Quartley as Ariel in a motion-capture extravaganza. The production uses an impressive array of sculpted net curtains as screens on which the serviceable sprite, though a real-enough presence on stage, also flies, dances, finds himself trapped in a tree, transforms into a harpy, and more or less realises every passing fancy about him that Shakespeare ever thought to put to pen.

There is no attempt to hide Quartley, who is also on stage while rigged up in motion-capture kit, rather like those puppeteers who don’t attempt to hide themselves during their performance.

The show is the fruit of a two-year collaboration between the Royal Shakespeare Company (RSC), IT company Intel and The Imaginarium Studios – a performance-capture house co-founded by actor Andy Serkis, who played Gollum in The Lord of the Rings film series.

The results are impressive but not seamless. When Quartley dances, Ariel flies. When he speaks or sings, Ariel’s bad lip-synching suggests the buggier corners of YouTube. Never mind: there are 200,000 files running at once to bring this illusion to life, and anyone who knows anything about the technology will be rightly astounded that the sprite responds in real time at all. Much of the two-year collaboration was spent turning a post-production technology into something robust enough for stage use. It is a tremendous, if hidden, achievement.

More seriously – though this is hardly a criticism – The Tempest is the first outing for a form of theatre that is still looking for its grammar. The performance’s game-engine-driven Ariel is shown from a floating, swooping viewpoint, sometimes from above, sometimes from below, sometimes crash-zooming towards us and in the next instant hurtling away – to not much emotional effect, it has to be said.

No one’s doing anything wrong here: we simply don’t know how to read mood into these images, any more than we knew how, at the beginning of cinema, to read the cuts between images. Stephen Brimson Lewis is the RSC’s director of design and his throw-everything-at-it approach here is exactly the right one. If The Tempest is a mess at times, it’s a glorious mess, and one from which future productions can learn.

Simon Russell Beale is Prospero, gamely preparing to be upstaged in journalistic copy, but never, ever on stage. Beale’s is a moving, mesmerising performance, full of rage and danger, though his nice line in bathos keeps him anchored in a show that’s played predominantly for comedy, manufactured stage business and some groan-inducing visual puns.

It’s hard to imagine actual plays benefiting from this up-to-the-minute son et lumière. But The Tempest, and the masque form as a whole, is far closer to opera than to drama, and that, I suspect, is where this technology will find a home.

Meanwhile – and I can’t quite believe I’m saying this – budding playwrights might seriously consider writing masques.

Stanisław Lem: The man with the future inside him

From the 1950s, science fiction writer Stanisław Lem began firing out prescient explorations of our present and far beyond. His vision is proving unparalleled.
For New Scientist, 16 November 2016

“POSTED everywhere on street corners, the idiot irresponsibles twitter supersonic approval, repeating slogans, giggling, dancing…” So it goes in William Burroughs’s novel The Soft Machine (1961). Did he predict social media? If so, he joins a large and mostly deplorable crowd of lucky guessers. Did you know that in Robert Heinlein’s 1948 story Space Cadet, he invented microwave food? Do you care?

There’s more to futurology than guesswork, of course, and not all predictions are facile. Writing in the 1950s, Ray Bradbury predicted earbud headphones and elevator muzak, and foresaw the creeping eeriness of today’s media-saturated shopping mall culture. But even Bradbury’s guesses – almost everyone’s guesses, in fact – tended to exaggerate the contemporary moment. More TV! More suburbia! Videophones and cars with no need of roads. The powerful, topical visions of writers like Frederik Pohl and Arthur C. Clarke are visions of what the world would be like if the 1950s (the 1960s, the 1970s…) went on forever.

And that is why Stanisław Lem, the Polish satirist, essayist, science fiction writer and futurologist, had no time for them. “Meaningful prediction,” he wrote, “does not lie in serving up the present larded with startling improvements or revelations in lieu of the future.” He wanted more: to grasp the human adventure in all its promise, tragedy and grandeur. He devised whole new chapters to the human story, not happy endings.

And, as far as I can tell, Lem got everything – everything – right. Less than a year before Russia and the US played their game of nuclear chicken over Cuba, he nailed the rational madness of cold-war policy in his book Memoirs Found in a Bathtub (1961). And while his contemporaries were churning out dystopias in the Orwellian mould, supposing that information would be tightly controlled in the future, Lem was conjuring with the internet (which did not then exist), and imagining futures in which important facts are carried away on a flood of falsehoods, and our civic freedoms along with them.

Twenty years before the term “virtual reality” appeared, Lem was already writing about its likely educational and cultural effects. He also coined a better name for it: “phantomatics”. The books on genetic engineering passing my desk for review this year have, at best, simply reframed ethical questions Lem set out in Summa Technologiae back in 1964 (though, shockingly, the book was not translated into English until 2013).

He dreamed up all the usual nanotechnological fantasies, from spider silk space-elevator cables to catastrophic “grey goo”, decades before they entered the public consciousness. He wrote about the technological singularity – the idea that artificial superintelligence would spark runaway technological growth – before Gordon Moore had even had the chance to cook up his “law” about the exponential growth of computing power. Not every prediction was serious. Lem coined the phrase “Theory of Everything”, but only so he could point at it and laugh.

He was born on 12 September 1921 in Lwów, Poland (now Lviv in Ukraine). His abiding concern was the way people use reason as a white stick as they steer blindly through a world dominated by chance and accident. This perspective was acquired early, while he was being pressed up against a wall by the muzzle of a Nazi machine gun – just one of several narrow escapes. “The difference between life and death depended upon… whether one went to visit a friend at 1 o’clock or 20 minutes later,” he recalled.

Though Lem was a keen engineer and inventor – in school he dreamed up the differential gear and was disappointed to find it already existed – his true gift lay in understanding systems. His finest childhood invention was a complete state bureaucracy, with internal passports and an impenetrable central office.

He found the world he had been born into absurd enough to power his first novel (Hospital of the Transfiguration, 1955), and might never have turned to science fiction had he not needed to leap heavily into metaphor to evade the attentions of Stalin’s literary censors. He did not become really productive until 1956, when Poland enjoyed a post-Stalinist thaw, and in the 12 years following he wrote 17 books, among them Solaris (1961), the work for which he is best known by English speakers.

Solaris is the story of a team of distraught experts in orbit around an inscrutable and apparently sentient planet, trying to come to terms with its cruel gift-giving (it insists on “resurrecting” their dead). Solaris reflects Lem’s pessimistic attitude to the search for extraterrestrial intelligence. It’s not that alien intelligences aren’t out there, Lem says, because they almost certainly are. But they won’t be our sort of intelligences. In the struggle for control over their environment they may as easily have chosen to ignore communication as respond to it; they might have decided to live in a fantastical simulation rather than take their chances any longer in the physical realm; they may have solved the problems of their existence to the point at which they can dispense with intelligence entirely; they may be stoned out of their heads. And so on ad infinitum. Because the universe is so much bigger than all of us, no matter how rigorously we test our vaunted gift of reason against it, that reason is still something we made – an artefact, a crutch. As Lem made explicit in one of his last novels, Fiasco (1986), extraterrestrial versions of reason and reasonableness may look very different to our own.

Lem understood the importance of history as no other futurologist ever has. What has been learned cannot be unlearned; certain paths, once taken, cannot be retraced. Working in the chill of the cold war, Lem feared that our violent and genocidal impulses were historically constant, while our technical capacity for destruction would only grow.

Should we find a way to survive our own urge to destruction, the challenge will be to handle our success. The more complex the social machine, the more prone it will be to malfunction. In his hard-boiled postmodern detective story The Chain of Chance (1975), Lem imagines a very near future that is crossing the brink of complexity, beyond which forms of government begin to look increasingly impotent (and yes, if we’re still counting, it’s here that he makes yet another on-the-money prediction by describing the marriage of instantly accessible media and global terrorism).

Say we make it. Say we become the masters of the universe, able to shape the material world at will: what then? Eventually, our technology will take over completely from slow-moving natural selection, allowing us to re-engineer our planet and our bodies. We will no longer need to borrow from nature, and will no longer feel any need to copy it.

At the extreme limit of his futurological vision, Lem imagines us abandoning the attempt to understand our current reality in favour of building an entirely new one. Yet even then we will live in thrall to the contingencies of history and accident. In Lem’s “review” of the fictitious Professor Dobb’s book Non Serviam, Dobb, the creator, may be forced to destroy the artificial universe he has created – one full of life, beauty and intelligence – because his university can no longer afford the electricity bills. Let’s hope we’re not living in such a simulation.

Most futurologists are secret utopians: they want history to end. They want time to come to a stop; to author a happy ending. Lem was better than that. He wanted to see what was next, and what would come after that, and after that, a thousand, ten thousand years into the future. Having felt its sharp end, he knew that history was real, that the cause of problems is solutions, and that there is no perfect world, neither in our past nor in our future, assuming that we have one.

By the time he died in 2006, this acerbic, difficult, impatient writer who gave no quarter to anyone – least of all his readers – had sold close to 40 million books in more than 40 languages, and earned praise from futurologists such as Alvin Toffler of Future Shock fame, scientists from Carl Sagan to Douglas Hofstadter, and philosophers from Daniel Dennett to Nicholas Rescher.

“Our situation, I would say,” Lem once wrote, “is analogous to that of a savage who, having discovered the catapult, thought that he was already close to space travel.” Be realistic, is what this most fantastical of writers advises us. Be patient. Be as smart as you can possibly be. It’s a big world out there, and you have barely begun.

Fitbitters of the world, unite!

for The Guardian, 2 November 2016

At this year’s Ars Electronica festival in Linz, Austria, I happened upon a robot made of hacked and 3D-printed surgical components that can perform DIY keyhole surgery. Its builder, the Dutch artist Frank Kolkman, was inspired by YouTube videos in which impoverished hackers and makers, largely without insurance, share medical tips and tricks. No money for bridgework? Try Sugru moldable glue.

A revolution is afoot in medicine. And like all revolutions, it is composed of equal parts inspirational advance and jaw-dropping social catastrophe. On the plus side, there are the health and fitness promises inherent in the artefacts of a personal health surveillance industry – all those Jawbones and Fitbits and Scanadu Scouts, iPhones and Apple Watches – expected to top $50bn in annual sales by 2018. The devices aren’t particularly accurate (yet), and more than half of them end up at the bottom of a drawer after six months. Still, DIY devices are already spotting medical problems before their users do, raising the likelihood of a future in which illness and medical conditions are treated long before the patient gets sick.

On the minus side, there is Kolkman’s terrifyingly practical robot, and its promise of a future in which DIY medicine is the only medicine the ordinary individual can afford. The sunny west coast self-reliant rhetoric of the “making” and “hacking” and “quantified self” movements disguises the disturbing assumption that they can be a substitute for civic life.

We have been here before. Not much more than a century ago the Russian empire was a ramshackle agglomeration of colonies, held together by military force and hooch. There were no institutions for reformers to reform: no councils, no unions, no guilds, no professional bodies, few schools, few hospitals worth the name; in many places, no roads.

The responsibility for improvement and reform inevitably fell on the individual. Utopia was a personal quest in Nikolay Chernyshevsky’s novel What Is to Be Done? – its author, according to Lenin, “the greatest and most talented representative of socialism before Marx”. More hysterical still, Tolstoy’s The Kreutzer Sonata prefers the prospect of human annihilation to humanity’s current unreformed (read: lustful) condition. Outside the library and drawing room, pre-revolutionary Russia floundered in a sea of cults, from machinism and robotism to primitive reticence, antiverbalism, nudism, social militarism, revolutionary sublimation, suicidalism … One outfit called itself the Nothing, its members neither writing, reading nor speaking.

Into this stew came the railways and the clock and all of a sudden self-regulation became easy and practical. In Leningrad in 1923, a theatre critic, Platon Kerzhentsev, founded the League of Time, in order to promote time-efficiency. Eight hundred “time cells” were set up in the army, factories, government departments and schools. The “Timists” carried “chronocards” in order to monitor time-wasting, wasted motion and lengthy speeches. Without watches, they tried to guess the passage of minutes and hours, and were awarded medals for spontaneous “time discipline”. They kept meticulous diaries of their every daily action. Lenin had the league’s personal productivity posters pasted up on the wall behind his desk.

“Man will finally begin to really harmonise himself,” Leon Trotsky prophesied in 1922: “He will put forward the task to introduce into the movement of his own organs – during work, walk, play – the highest precision, expediency, economy, and thus beauty.”

The poet Alexei Gastev – whose forbidding toothbrush-moustache and crew cut concealed a lot of mischief – took Trotsky at his word. He built a “social-engineering machine”. This giant structure of pulleys, cogs and weights was a thing of no fathomable use whatsoever, yet Gastev insisted that a few hours’ workout would turn you into a new kind of human being. He rolled these machines out across the young Soviet Union, as a sort of mascot for his Central Institute of Labour which, with Lenin’s personal backing, taught peasant workers how to behave in modern factories. A class at the Central Institute of Labour was a sort of drill practice: pupils stood before their benches in set positions, with places marked out for their feet. They rehearsed separate elements of each task, then combined them in a finished performance. (Judging by their sheer popularity, and the speed of the institute’s expansion, the classes must have been quite enjoyable.)

Joining Gastev at the beginning of his career was the young Nikolai Bernstein, whose childhood spent assembling radios and building models of steam engines and bridges stood him in good stead when it came to mechanically registering the movements of the human body. He developed a high-speed camera called the kymocyclograph. The shutter, a round plate with holes in it, rotated before the camera lens, so that the photographic plate would record multiple images, each exposed a fraction of a second after its neighbour. (Motion-capture cinema, VR – and all the other technologies that keep Gollum actor Andy Serkis on the talkshow circuit – begin here.)

By the end of these studies, Bernstein had good evidence that motion could not be a simple matter of Pavlovian “reflexes”. His more nuanced model of motor responses amounted to a fully fledged theory of cybernetics, decades before Norbert Wiener coined the term in 1948.

The early Soviet Union gathered unprecedented amounts of data on human motion, fitness, behaviour and genetics, making it a world leader in the field. A new kind of human being – healthy, fit, psychologically integrated and free of heritable disease – seemed, for a few heady days in the 1920s, an achievable aspiration.

Then, in 1927, a young man called Alexey Grigoryevich Stakhanov went to work in a coal mine. He was no superman, but he was energetic and intelligent, and he could see ways of organising his work crew to increase the amount of coal they were able to dig in a single shift. On 31 August 1935, it was reported that he had mined a record 102 tonnes of coal in five hours and 45 minutes – 14 times his quota. Barely three weeks later, on 19 September, Stakhanov and his crew more than doubled this record.

Others rushed to follow Stakhanov’s example, and newspapers and newsreels across the Soviet Union celebrated their efforts. In Gorky, a worker in a car factory forged nearly 1,000 crankshafts in a single shift. A shoemaker in Leningrad turned out 1,400 pairs of shoes in a day. On a collective farm, three female “Stakhanovites” proved they could cut sugar beet faster than was thought humanly possible. Such workers were awarded higher pay, better food, access to luxury goods and improved accommodation. Stakhanovism soon became a mass movement. “In factories and even in scientific institutes,” wrote the American psychologist Richard Schultz, “the workers’ names may be posted on a bulletin board opposite a bird, deer, rabbit, tortoise or snail relative to the speed with which they turn out their work. A great deal of prestige is attached to the ‘shock brigade’ worker.”

For as long as human beings labour for others, their lot will improve only so far as their productivity rises. Investment beyond this point makes no sense. The Soviet Union of the 1920s was an impoverished state dotted with institutes of labour, health and maternity clinics, mental health services, housing offices and countless censuses. Coming to power at the end of the decade, Joseph Stalin replaced all this social engineering with, well, engineering. Magnitostroi, which is still the largest steelworks in the world, housed its workers in tents downwind of the chimneys. The construction of the White Sea Canal cost 12,000 lives – around a 10th of the workforce.

Drunk as we are on the illusion of personal control, we should remember that data trickles uphill toward the powerful, because they are the ones who can afford to exploit it. Today, for every worried-yet-well twentysomething fiddling with their Fitbit, there is a worker being cajoled by their employer into taking a medical test. The tests are aggregated and anonymised, and besides, the company is giving the worker a cut of the insurance savings the test will make. So where’s the harm?

Well, for a start, anonymising data is incredibly hard to do. The bigger the datapool, the easier it is to triangulate data sets and home in on an individual. And while people can get thrown in jail for this sort of thing, algorithms are a lot harder to police. Has the computer said “no” to your mortgage application? Well, sorry, but there may simply be no human to blame: the machine has figured things out on its own.

An even bigger worry is the way that, in our smartphone-enabled and meta-data-enriched world, complete knowledge of human affairs is becoming increasingly possible, making redundant the entire gamble of insurance. At that point the scope for individual self-determination shrinks to zero and we are living in the world of Andrew Niccol’s excellent 1997 film Gattaca.

Unregulated wellness programmes are begging to be used as tools of surveillance, and that’s not because anybody’s actually doing anything wrong. It’s because we have taken control of our own data, while at the same time forgetting that data ultimately belongs to whoever can make the most use of it.

And it need not even be a problem, unless the class in power decides to replace social engineering with, well, engineering, health services with “making” and “hacking”, and civic societies with a desert, littered with the grinning skulls of people who aspired to west-coast “radical self-reliance” – and failed.

An enormous shape-shifting artwork – run by bacteria


for New Scientist, 19 October 2016

IN STANISLAW LEM’S bitterly utopian novel Return from the Stars, astronaut Hal Bregg comes back to Earth from a 10-year mission to find that 127 years have passed during his absence. The world that greets him is very different from the one he remembers. For one thing, its architecture absolutely refuses to stay put. Platforms slide past and around each other, walls and columns spring up out of nowhere, or fall precipitately away: the solid environment has liquefied. He worries – and is right to worry – that he will never find his feet in this new place.

The Paris-based artist Philippe Parreno is kinder than Lem. Visitors to his vast installation at London’s Tate Modern, Anywhen, get a carpet to lie on while the vast Turbine Hall shimmies and pulses around them. Let there be no doubt here: Parreno’s awful grey machine is triumphally futuristic, an interior so smart it has outgrown any need for occupants. Anywhen is thunderous, sulphurous, awful in its full archaic sense.

Visitors find themselves in a sort of aquarium, in which sound and light obey a claustrophobic new physics. Speakers descend from and ascend to the ceiling, relaying captured outside noise from nearby teenagers, a fragment of song, a passing aeroplane. Banks of lights flash. In a sudden hiatus, bits of colour drop down from somewhere on to a giant mobile screen and float off. Some are murky projections, but there are solid objects, too, in the shape of inflatable fish. These seem a lot more at home in this shifting space than we do. Huge, white, architectonic panels reconfigure the dimensions of the gallery, moving up and down with more than random malevolence. Is this malevolence an illusion? Of course, but it’s an utterly convincing one – so much so that one wonders what the artist means by it.

Alas, Parreno is not here to answer. He has, it seems, ceded control of his installation to a colony of yeast, fed and watered in a small lab visible in an out-of-the-way corner. Changes in the colony’s temperature, growth and movement are variables in a biocomputed algorithm that will enrich the installation’s behaviour during its six-month run. It remains to be seen whether those initial conditions are rich and complex enough to generate a significant creative work.

Right now, as you lie there, hands scrabbling for purchase on the thin carpet, it seems as if the Turbine Hall has been invested with a terrible, alien intentionality. Is this another illusion? It must be. And yet, how can we be sure?

The 17th-century German philosopher Gottfried Leibniz once came up with a thrilling but flawed argument for the existence of God. In one of his best-known works, Monadology, Leibniz invites readers to imagine that they are visiting “a machine whose structure makes it think, sense, and have perceptions”. There would be plenty to see: innumerable cogs, wheels, belts and gears.

But that, says Leibniz, is precisely the problem – “we will find only parts that push one another, and we will never find anything to explain a perception”. The same issue arises when we explore the brain: no amount of mapping, no amount of analogy, brings us any closer to the subjective “is-ness” of conscious experience.

Leibniz used his thinking mill to assert that the world is more than material, and that thinking must occur on another (divine) plane of existence. He was a glass-half-full sort of thinker, whose rambunctious belief in the essential goodness of the universe – all is for the best in the “best of all possible worlds” – drove an exasperated Voltaire to pen his savage satire Candide.

Anywhen is Leibniz’s mill made flesh in glass, wire and panelling. Lying on the grey Turbine Hall carpet, I couldn’t help but wonder with Voltaire how on earth Leibniz took comfort from his own story. Something is using the Turbine Hall to think with, but we can bet the farm it is not God. Imagine wandering into the toils of some vast, cool and unsympathetic intellect. Imagine the Martian has landed…

Just how much does the world follow laws?


How the Zebra Got its Stripes and Other Darwinian Just So Stories by Léo Grasset
The Serengeti Rules: The quest to discover how life works and why it matters by Sean B. Carroll
Lysenko’s Ghost: Epigenetics and Russia by Loren Graham
The Great Derangement: Climate change and the unthinkable by Amitav Ghosh
reviewed for New Scientist, 15 October 2016

JUST how much does the world follow laws? The human mind, it seems, may not be the ideal toolkit with which to craft an answer. To understand the world at all, we have to predict likely events and so we have a lot invested in spotting rules, even when they are not really there.

Such demands have also shaped more specialised parts of culture. The history of the sciences is one of constant struggle between the accumulation of observations and their abstraction into natural laws. The temptation (especially for physicists) is to assume these laws are real: a bedrock underpinning the messy, observable world. Life scientists, on the other hand, can afford no such assumption. Their field is constantly on the move, a plaything of time and historical contingency. If there is a lawfulness to living things, few plants and animals seem to be aware of it.

Consider, for example, the charming “just so” stories in French biologist and YouTuber Léo Grasset’s book of short essays, How the Zebra Got its Stripes. Now and again Grasset finds order and coherence in the natural world. His cost-benefit analysis of how animal communities make decisions, contrasting “autocracy” and “democracy”, is a fine example of lawfulness in action.

But Grasset is also sharply aware of those points where the cause-and-effect logic of scientific description cannot show the whole picture. There are, for instance, four really good ways of explaining how the zebra got its stripes, and those stripes arose probably for all those reasons, along with a couple of dozen others whose mechanisms are lost to evolutionary history.

And Grasset has even more fun describing the occasions when, frankly, nature goes nuts. Take the female hyena, for example, which has to give birth through a “pseudo-penis”. As a result, 15 per cent of mothers die after their first labour and 60 per cent of cubs die at birth. If this were a “just so” story, it would be a decidedly off-colour one.

The tussle between observation and abstraction in biology has a fascinating, fraught and sometimes violent history. In Europe at the birth of the 20th century, biology was still a descriptive science. Life presented, German molecular biologist Gunther Stent observed, “a near infinitude of particulars which have to be sorted out case by case”. Purely descriptive approaches had exhausted their usefulness and new, experimental approaches were developed: genetics, cytology, protozoology, hydrobiology, endocrinology, experimental embryology – even animal psychology. And with the elucidation of underlying biological process came the illusion of control.

In 1917, even as Vladimir Lenin was preparing to seize power in Russia, the botanist Nikolai Vavilov was lecturing to his class at the Saratov Agricultural Institute, outlining the task before them as “the planned and rational utilisation of the plant resources of the terrestrial globe”.

Predicting that the young science of genetics would give the next generation the ability “to sculpt organic forms at will”, Vavilov asserted that “biological synthesis is becoming as much a reality as chemical”.

The consequences of this kind of boosterism are laid bare in Lysenko’s Ghost by the veteran historian of Soviet science Loren Graham. He reminds us what happened when the tentatively defined scientific “laws” of plant physiology were wielded as policy instruments by a desperate and resource-strapped government.

Within the Soviet Union, dogmatic views on agrobiology led to disastrous agricultural reforms, and no amount of modern, politically motivated revisionism (the especial target of Graham’s book) can make those efforts seem more rational, or their aftermath less catastrophic.

In modern times, thankfully, a naive belief in nature’s lawfulness, reflected in lazy and increasingly outmoded expressions such as “the balance of nature”, is giving way to a more nuanced, self-aware, even tragic view of the living world. The Serengeti Rules, Sean B. Carroll’s otherwise triumphant account of how physiology and ecology turned out to share some of the same mathematics, does not shy away from the fact that the “rules” he talks about are really just arguments from analogy.

Some notable conservation triumphs have flowed from the discovery that “just as there are molecular rules that regulate the numbers of different kinds of molecules and cells in the body, there are ecological rules that regulate the numbers and kinds of animals and plants in a given place”.

For example, in Gorongosa National Park, Mozambique, in 2000, there were fewer than 1000 elephants, hippos, wildebeest, waterbuck, zebras, eland, buffalo, hartebeest and sable antelopes combined. Today, with the reintroduction of key predators, there are almost 40,000 animals, including 535 elephants and 436 hippos. And several of the populations are increasing by more than 20 per cent a year.

But Carroll is understandably flummoxed when it comes to explaining how those rules might apply to us. “How can we possibly hope that 7 billion people, in more than 190 countries, rich and poor, with so many different political and religious beliefs, might begin to act in ways for the long-term good of everyone?” he asks. How indeed: humans’ capacity for cultural transmission renders every Serengeti rule moot, along with the Serengeti itself – and a “law of nature” that does not include its dominant species is not really a law at all.

Of course, it is not just the sciences that have laws: the humanities and the arts do too. In The Great Derangement, a book that began as four lectures presented at the University of Chicago last year, the novelist Amitav Ghosh considers the laws of his own practice. The vast majority of novels, he explains, are realistic. In other words, the novel arose to reflect the kind of regularised life that gave you time to read novels – a regularity achieved through the availability of reliable, cheap energy: first, coal and steam, and later, oil.

No wonder, then, that “in the literary imagination climate change was somehow akin to extraterrestrials or interplanetary travel”. Ghosh is keenly aware of and impressively well informed about climate change: in 1978, he was nearly killed in an unprecedentedly ferocious tornado that ripped through northern Delhi, leaving 30 dead and 700 injured. Yet he has never been able to work this story into his “realist” fiction. His hands are tied: he is trapped in “the grid of literary forms and conventions that came to shape the narrative imagination in precisely that period when the accumulation of carbon in the atmosphere was rewriting the destiny of the Earth”.

The exciting and frightening thing about Ghosh’s argument is how he traces the novel’s narrow compass back to popular and influential scientific ideas – ideas that championed uniform and gradual processes over cataclysms and catastrophes.

One big complaint about science – that it kills wonder – is the same criticism Ghosh levels at the novel: that it bequeaths us “a world of few surprises, fewer adventures, and no miracles at all”. Lawfulness in biology is rather like realism in fiction: it is a convention so useful that we forget that it is a convention.

But, if anthropogenic climate change and the gathering sixth mass extinction event have taught us anything, it is that the world is wilder than the laws we are used to would predict. Indeed, if the world really were in a novel – or even in a book of popular science – no one would believe it.