“A wonderful moral substitute for war”

Reading Oliver Morton’s The Moon and Robert Stone and Alan Andres’s Chasing the Moon for The Telegraph, 18 May 2019

I have Arthur to thank for my earliest memory: being woken and carried into the living room on 20 July 1969 to see Neil Armstrong set foot on the moon.

Arthur is a satellite dish, part of the Goonhilly Satellite Earth Station in Cornwall. It carried the first ever transatlantic TV pictures from the USA to Europe. And now, in a fit of nostalgia, I am trying to build a cardboard model of the thing. The anniversary kit I bought comes with a credit-card-sized Raspberry Pi computer that will cause a little red light to blink at the centre of the dish every time the International Space Station flies overhead.
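(For the technically curious, the kit’s trick is simple enough to sketch: the Pi polls the station’s position and switches a GPIO pin. The few lines of Python below are my own guess at how such a thing might be wired up — the Open Notify position feed, the dish’s coordinates, the pin number and the generous definition of “overhead” are all my assumptions, not a description of the kit’s actual firmware.)

import math
import time

import requests                      # to fetch the station's current position
from gpiozero import LED             # standard Raspberry Pi GPIO helper library

GOONHILLY = (50.05, -5.18)           # approximate latitude/longitude of the dish
led = LED(17)                        # assumption: the kit's LED sits on GPIO 17

def iss_position():
    # Open Notify's public feed reports the point on Earth directly beneath the ISS
    data = requests.get("http://api.open-notify.org/iss-now.json", timeout=10).json()
    pos = data["iss_position"]
    return float(pos["latitude"]), float(pos["longitude"])

def distance_km(a, b):
    # great-circle (haversine) distance between two (latitude, longitude) points
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

while True:
    # light the dish whenever the station is within roughly 1,000 km of Goonhilly
    if distance_km(GOONHILLY, iss_position()) < 1000:
        led.on()
    else:
        led.off()
    time.sleep(30)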

The geosynchronous-satellite network that Arthur Clarke envisioned in 1945 came into being at the same time as men landed on the Moon. Intelsat III F-3 was moved into position over the Indian Ocean a few days before Apollo 11’s launch, completing the world’s first geostationary-satellite network. The Space Race has bequeathed us a world steeped in fractured televisual reflections of itself.

Of Apollo itself, though, what actually remains? The Columbia capsule is touring the United States: it’s at Seattle’s Museum of Flight for this year’s fiftieth anniversary. And Apollo’s Mission Control Center in Houston is getting a makeover, its flight control consoles refurbished, its trash cans, book cases, ashtrays and orange polyester seat cushions all restored.

On the Moon there are some flags; some experiments, mostly expired; an abandoned car.

In space, where it matters, there’s nothing. The intention had been to build moon-going craft in orbit. This would have involved building a space station first. In the end, spooked by a spate of Soviet launches, NASA decided to cut to the chase, sending two small spacecraft up on a single rocket. One got three astronauts to the moon. The other, a tiny landing bug (standing room only), dropped two of them onto the lunar surface and puffed them back up into lunar orbit, where they rejoined the command module and headed home. It was an audacious, dangerous and triumphant mission — but it left nothing useful or reusable behind.

In The Moon: A History for the Future, science writer Oliver Morton observes that without that peculiar lunar orbital rendezvous plan, Apollo would at least have left some lasting infrastructure in orbit to pique someone’s ambition. As it was, “Every Apollo mission would be a single shot. Once they were over, it would be in terms of hardware — even, to a degree, in terms of expertise — as if they had never happened.”

Morton and I belong to the generation sometimes dubbed Apollo’s orphans. We grew up (rightly) dazzled by Apollo’s achievement. It left us, however, with the unshakable (and wrong) belief that our enthusiasm was common, something to do with what we were taught to call humanity’s “outward urge”. The refrain was constant: people, we were assured, harbour an inborn desire to leave their familiar surroundings and explore strange new worlds.

Nonsense. Over a century elapsed between Columbus’s initial voyage and the first permanent English settlements. One of the more surprising findings of recent research into the human genome is that, left to their own devices, people rarely move more than a few weeks’ walking distance from where they were born.

This urge, that felt so visceral, so essential to one’s idea of oneself: how could it possibly turn out to be the psychic artefact of a passing political moment?

Documentary makers Robert Stone and Alan Andres answer that particular question in Chasing the Moon, a tie-in to their forthcoming series on PBS. It’s a comprehensive account of the Apollo project, and sends down deep roots: to the cosmist speculations of fin de siècle Russia, the individualist eccentricities of Germany’s Verein für Raumschiffahrt (Space Travel Society), and the deceptively chummy brilliance of the British Interplanetary Society, who used to meet in the pub.

The strength of Chasing the Moon lies not in any startling new information it divulges (that boat sailed long ago) but in the connections it makes, and the perspectives it brings to bear. It is surprising to find the New York Times declaring, shortly after the Bay of Pigs fiasco, that Kennedy isn’t nearly as interested in building a space programme as he should be. (“So far, apparently, no one has been able to persuade President Kennedy of the tremendous political, psychological, and prestige importance, entirely apart from the scientific and military results, of an impressive space achievement.”) And it is worthwhile to be reminded that, less than a month after his big announcement, Kennedy was trying to persuade Khrushchev to collaborate on the Apollo project, and that he approached the Soviets with the idea a second time, just days before his assassination in Dallas.

For Kennedy, Apollo was a strategic project, “a wonderful moral substitute for war” (to slightly misapply Ray Bradbury’s phrase), and all to do with manned missions. NASA administrator James Webb, on the other hand, was a true believer. He could see no end to the good that big, organised government projects could achieve by way of education, science and civil development. In his modesty and dedication, Webb resembled no-one so much as the first tranche of bureaucrat-scientists in the Soviet Union. He never featured on a single magazine cover, and during his entire tenure he attended only one piloted launch from Cape Kennedy. (“I had a job to do in Washington,” he explained.)

The two men worked well enough together, their priorities dovetailing neatly in the role NASA took in promoting the Civil Rights Act and the Voting Rights Act and the government’s equal opportunities programme. (NASA’s Saturn V designer, the former Nazi rocket scientist Wernher von Braun, became an unlikely and very active campaigner, the New York Times naming him “one of the most outspoken spokesmen for racial moderation in the South.”) But progress was achingly slow.

At its height, the Apollo programme employed some 400,000 people and swallowed around four per cent of the federal budget. It was never going to be agile enough, or quotidian enough, to effect much political change. There were genuine attempts to recruit and train a black pilot for the astronaut programme. But comedian Dick Gregory had the measure of this effort: “A lot of people was happy that they had the first Negro astronaut. Well, I’ll be honest with you, not myself. I was kind of hoping we’d get a Negro airline pilot first.”

The big social change the Apollo programme did usher in was television. (Did you know that failing to broadcast the colour transmissions from Apollo 11 proved so embarrassing to the apartheid government in South Africa that they afterwards created a national television service?)

But the moon has always been a darling of the film business. Never mind Georges Méliès’s A Trip to the Moon. How about Fritz Lang ordering a real rocket launch for the premiere of Frau im Mond? This was the film that followed Metropolis, and Lang roped in no less a physicist than Hermann Oberth to build the rocket for him. When his 1.8-metre-tall liquid-propellant rocket came to nought, Oberth set about building one eleven metres tall, powered by liquid oxygen. They were going to launch it from the roof of the cinema. Luckily they ran out of money.

The Verein für Raumschiffahrt was founded by men who had acted as scientific consultants on Frau im Mond. Von Braun became one of their number, before he was whisked away by the Nazis to build rockets for the war effort. Without von Braun, the VfR grew nuttier by the year. Oberth, who worked for a time in the US after the war, went the same way, his whole conversation swallowed by UFOs and extraterrestrials and glimpses of Atlantis. When he went back to Germany, no-one was very sorry to see him go.

What is it about dreaming of new worlds that encourages the loner in us, the mooncalf, the cave-dweller, wedded to asceticism, always shying from the light?

After the first Moon landing, the philosopher (and sometime Nazi supporter) Martin Heidegger said in an interview, “I at any rate was frightened when I saw pictures coming from the moon to the earth… The uprooting of man has already taken place. The only thing we have left is purely technological relationships. This is no longer the earth on which man lives.”

Heidegger’s worries need a little unpacking, and for that we turn to Morton’s cool, melancholy The Moon: A History for the Future. Where Stone and Andres collate and interpret, Morton contemplates and introspects. Stone and Andres are no stylists. Morton’s flights of informed fancy include a geological formation story for the Moon that Lars von Trier’s film Melancholia cannot rival for spectacle and sentiment.

Stone and Andres stand with Walter Cronkite, whose puzzled response to young people’s opposition to Apollo — “How can anybody turn off from a world like this?” — stands as an epitaph for Apollo’s orphans everywhere. Morton, by contrast, does understand why it’s proved so easy for us to switch off from the Moon. At any rate he has some good ideas.

Gertrude Stein, never a fan of Oakland, once wrote of the place, “There is no there there.” If Morton’s right she should have tried the Moon, a place whose details “mostly make no sense.”

“The landscape,” Morton explains, “may have features that move one into another, slopes that become plains, ridges that roll back, but they do not have stories in the way a river’s valley does. It is, after all, just the work of impacts. The Moon’s timescape has no flow; just punctuation.”

The Moon is Heidegger’s nightmare realised. It can never be a world of experience. It can only be a physical environment to be coped with technologically. It’s dumb, without a story of its own to tell, so much “in need of something but incapable of anything”, in Morton’s telling phrase, that you can’t even really say that it’s dead.

So why did we go there, when we already knew that it was, in the words of US columnist Milton Mayer, a “pulverised rubble… like Dresden in May or Hiroshima in August”?

Apollo was the US’s biggest, brashest entry in its heart-stoppingly exciting – and terrifying – political and technological competition with the Soviet Union. This is the matter of Stone and Andres’s Chasing the Moon, as full a history as one could wish for, clear-headed about the era and respectful of the extraordinary efforts and qualities of the people involved.

But while Morton is no less moved by Apollo’s human adventure, we turn to his book for a cooler and more distant view. Through Morton’s eyes we begin to see, not only what the moon actually looks like (meaningless, flat, gentle, a South Downs gone horribly wrong) but why it conjures so much disbelief in those who haven’t been there.

A year after the first landing the novelist Norman Mailer joked: “In another couple of years there will be people arguing in bars about whether anyone even went to the Moon.” He was right. Claims that the moon landings were fake arose the moment the Saturn Vs stopped flying in 1972, and no wonder. In a deep and tragic sense Apollo was fake: it didn’t deliver the world it had promised.

And let’s be clear here: the world it promised would have been wonderful. Never mind the technology: that was never the core point. What really mattered was that at the height of the Vietnam war, we seemed at last to have found that wonderful moral substitute for war. “All of the universe doesn’t care if we exist or not,” Ray Bradbury wrote, “but we care if we exist… This is the proper war to fight.”

Why has space exploration not united the world around itself? It’s easy to blame ourselves and our lack of vision. “It’s unfortunate,” Lyndon Johnson once remarked to the astronaut Wally Schirra, “but the way the American people are, now that they have developed all of this capability, instead of taking advantage of it, they’ll probably just piss it all away…” This is the mordant lesson of Stone and Andres’s otherwise uplifting Chasing the Moon.

Oliver Morton’s The Moon suggests a darker possibility: that the fault lies with the Moon itself, and, by implication, with everything that lies beyond our little home.

Morton’s Moon is a place defined by absences, gaps, and silence. He makes a poetry of it; for a while he toys with thoughts of future settlement; he explores the commercial possibilities. In the end, though, what can this uneventful satellite of ours ever possibly be, but what it is: “just dry rocks jumbled”?

 

 

The three-dimensional page

Visiting Thinking 3D: Leonardo to the present at Oxford’s Weston Library for the Financial Times, 20 March 2019

Exhibitions hitch themselves to the 500th anniversary of Leonardo da Vinci’s death at their peril. How do you do justice to a man whose life’s work provides the soundtrack to your entire culture? Leonardo dabbled his way into every corner of intellectual endeavour, and carved out several tasty new corners into the bargain. For heaven’s sake, he dreamt up a glass vessel to demonstrate the dynamics of fluid flow in the aortic valve of the human heart: modern confirmation that he was right (did you doubt it?) had to wait for the cardiologist Robin Choudhury and a paper written in 2014.

Daryl Green and Laura Moretti, curators of Thinking 3D at Oxford’s Weston Library, are wise to park this particular story at the far end of their delicate, nuanced spiderweb of an exhibition about how artists and scientists, from Leonardo to now, have learned to convey three-dimensional objects on the page.

Indeed they do a very good job of keeping You Know Who contained. This is a show made up of books, mostly, and Leonardo came too soon to take full advantage of print. He was, anyway, far too jealous of his own work to consign it to the relatively crude reproductive technologies of his day. Only one of his drawings exists in printed form — a stellated dodecahedron, drawn for his friend Luca Pacioli’s De Divina Proportione of 1509. It’s here for the viewing, alongside other contemporary attempts at geometrical drawing. Next to Leonardo, they are hardly more than doodles.

A few of Leonardo’s actual drawings — the revolving series here is drawn from the Royal Collection and the British Library — served to provoke, more than to inspire, the advances in 3D visualisation that followed. In a couple of months the aortic valve story will be pulled from the show, its place taken by astrophysicist Steven Balbus’s attempts to visualise black holes. (There’s a lot of ground to cover, and very little room, so the exhibition will be changing some elements regularly during the run.) When that happens, will Leonardo’s presence in this exhibition begin to feel gratuitous? Probably not: Leonardo is the ultimate Man Who Came to Dinner: once put inside your head there’s no getting rid of him.

Thinking 3D is more than just this exhibition: the year-long project promises events, talks, conferences and workshops, not to mention satellite shows. (Under the skin: illustrating the human body, which just ended at the Royal College of Physicians in London, was one of these.) The more one learns about the project, the more it resembles Stephen Leacock’s Lord Ronald, who flung himself upon his horse and rode madly off in all directions — and the more impressive the coherence Green and Moretti have achieved here.

There are some carefully selected geegaws. A stereoscope through which one can study Arthur Thomson’s stereographic Anatomy of the Human Eye, published in 1912. The nation’s first look at Bill Gates’s Codescope, an interactive kiosk with a touch screen that lets you explore the Codex Leicester, a notebook of Leonardo’s that Gates bought in 1994. Even a shelf full of 3D-printed objects you are welcome to fondle, like Linus with his security blanket, as you wander around the exhibition. This last jape works better than you’d think: by relating vision to touch, it makes us properly aware of all the mental tricks we have to perform in order to realise 3D forms in pictures.

But books are the meat of the matter: arranged chronologically along one wall, and under glass in displays that show how the same theme has been handled at different times. Start at the clean, complex lines of the dodecahedron and pass, via architecture (the Colosseum) and astronomy (the Moon), to the fleshy ghastliness of the human eyeball.

Conveying depth by drawing makes geometry comprehensible. It also, and in particular, transforms three areas of fundamental intellectual enquiry: anatomy, architecture, and astronomy.

Today, when we think of 3D visualisation, we think first of architecture. (It’s an association forged, in large part, in the toils of countless videogames: never mind the plot, gawp at all that visionary pixelcrete!) But because architecture operates at a more or less human scale, it’s actually been rather slow to pick up on the power of 3D visualisation. With intuition and craft skill to draw upon, who needs axonometry? The builders of the great Mediaeval cathedrals managed quite happily without any such hifalutin drawing techniques, and it wasn’t until Auguste Choisy’s Histoire de l’architecture of 1899 that a drawing style that had already transformed carpentry, machinery, and military architecture finally found favour with architects. (Arguably, the profession has yet to come down off the high this occasioned. Witness the number of large buildings that look, for all their bulk, like scale models, their shapes making sense only from the most arbitrary angles.)

Where the scale is too small or too large for intuition and common sense to work, 3D visualisation has been most useful, and most beautiful. Andreas Vesalius’s De humani corporis fabrica librorum epitome (1543) stands here for an entire genre of “fugitive sheets” — compendiums of exquisite anatomical drawings with layered flaps, peeled back by the reader to reveal the layers of the body as one might discover them during a dissection. Because these documents were practical surgical guides, they received rough treatment, and hardly any survive. Those that do (though not the one here, thank God) are often covered with mysterious stains.

Less gruesome, but at the same time less immediately communicative, are the various attempts here to render the cosmos on paper. Robert Fludd’s black square from his Utriusque Cosmi (1617-21) depicts the void immediately prior to creation. Et sic in infinitum (“And so on to infinity”) run the words on each side of this eloquent blank.

Thinking 3D explores territories where words tangle incoherently and only pictures will suffice — then leaps giggling into a void where rational enquiry collapses and only artworks and acts of mischief like Fludd’s manage to convey anything at all. All this in a space hardly bigger than two average living rooms. It’s a show that repays — indeed, demands — patience. Put in the requisite effort, though, and you’ll find it full of wonders.

A world that has run out of normal

Reading The Uninhabitable Earth: A Story of the Future by David Wallace-Wells for the Telegraph, 16 February 2019

As global temperatures rise, and the mean sea-level with them, I have been tracing the likely flood levels of the Thames Valley, to see which of my literary rivals will disappear beneath the waves first. I live on a hill, and what I’d like to say is: you’ll be stuck with me a while longer than most. But on the day I had set aside to consume David Wallace-Wells’s terrifying account of climate change and the future of our species (there isn’t one), the water supply to my block was unaccountably cut off.

Failing to make a cup of tea reminded me, with some force, of what ought to be obvious: that my hill is a post-apocalyptic death-trap. I might escape the floods, but without clean water, food or power, I’ll be lucky to last a week.

The first half of The Uninhabitable Earth is organised in chapters that deal separately with famines, floods, fires, droughts, brackish oceans, toxic winds, wars and all the other manifest effects of anthropogenic climate change (there are many more than four horsemen in this Apocalypse). At the same time, the author reveals, paragraph by paragraph, how these ever-more-frequent disasters join up in horrific cascades, all of which erode human trust to the point where civic life collapses.

The human consequences of climate disaster are going to be ugly. When a million refugees from the Syrian civil war started arriving in Europe in 2015, far-right parties entered mainstream political discourse for the first time in decades. The United Nations predicts as many as 200 million climate refugees by 2050. So buckle up. The disgust response with which we greet strangers on our own land is something we conscientiously suppress these days. But it’s still there: an evolved response that in less sanitary times got us through more than one plague.

That such truths go largely unspoken says something about the cognitive dissonance in which our culture is steeped. We just don’t have the mental tools to hold climate change in our heads. Amitav Ghosh made this clear enough in The Great Derangement (2016), which explains why the traditional novel is so hopeless at handling a world that has run out of normal, forgotten how to repeat itself, and will never be any sort of normal again.

Writers, seeking to capture the contemporary moment, resort to science fiction. But the secret, sick appeal of post-apocalyptic narratives, from Richard Jefferies’s After London on, is that in order to be stories at all their heroes must survive. You can only push nihilism so far. J G Ballard couldn’t escape that bind. Neither could Cormac McCarthy. Despite our most conscientious attempts at utter bloody bleakness, the human spirit persists.

Wallace-Wells admits as much. When he thinks of his own children’s future, denizens of a world plunging ever deeper into its sixth major extinction event, he admits that despair melts and his heart fills with excitement. Humans will cling to life on this ever less habitable earth for as long as they can. Quite right, too.

Wallace-Wells is deputy editor of New York magazine. In July 2017 he wrote a cover story outlining worst-case scenarios for climate change. His pessimism proved salutary: The Uninhabitable Earth has been much anticipated.

In the first half of the book the author channels former US vice-president Al Gore, delivering a blizzard of terrifying facts and knocking the socks off Gore’s An Inconvenient Truth (2006) — not thanks to his native gifts (considerable as they are) but because the climate has deteriorated since then to the point where its decline can now be observed directly, and measured over the course of a human lifetime.

More than half the extra carbon dioxide released into the atmosphere by burning fossil fuels has been added in the past 30 years. This means that “we have done as much damage to the fate of the planet and its ability to sustain human life and civilization since Al Gore published his first book on climate than in all the centuries – all the millennia – that came before.” (4) Oceans are carrying at least 15 per cent more heat energy than they did in 2000. Some 22 per cent of the earth’s landmass was altered by humans between 1992 and 2015 alone. In Sweden, in 2018, forests in the Arctic Circle went up in flames. On and on like this. Don’t shoot the messenger, but “we have now engineered as much ruin knowingly as we ever managed in ignorance.”

The trouble is not that the future is bleak. It’s that there is no future. We’re running out of soil. In the United States, it’s eroding ten times faster than it is being replaced. In China and India, soil is disappearing thirty to forty times as fast. Wars over fresh water have already begun. The CO2 in the atmosphere has reduced the nutrient value of plants by about thirty per cent since the 1950s. Within the lifetimes of our children, the hajj will no longer be a feature of Islamic practice: the heat in Mecca will be such that walking seven times counterclockwise around the Kaaba will kill you.

This book may come to be regarded as the last truly great climate assessment ever made. (Is there even time left to pen another?) Some of the phrasing will give persnickety climate watchers conniptions. (Words like “eventually” will be a red rag for them, because they catalyse the reader’s imagination without actually meaning anything.) But the research is extensive and solid, the vision compelling and eminently defensible.

Alas, The Uninhabitable Earth is also likely to be one of the least-often finished books of the year. I’m not criticising the prose, which is always clear and engaging and often dazzling. It’s simply that the more we are bombarded with facts, the less we take in. Treating the reader like an empty bucket into which facts may be poured does not work very well, and works even less well when people are afraid of what you are telling them. “If you have made it this far, you are a brave reader,” Wallace-Wells writes on page 138. Many will give up long before then. Climate scientists have learned the hard way how difficult it is to turn fact into public engagement.

The second half of The Uninhabitable Earth asks why our being made aware of climate disaster doesn’t lead to enough reasonable action being taken against it. There’s a nuanced mathematical account to be written of how populations reach carrying capacity, run out of resources, and collapse; and an even more difficult book that will explain why we ever thought human intelligence would be powerful enough to elude this stark physical reality.

The final chapters of The Uninhabitable Earth provide neither, but neither are they narrowly partisan. Wallace-Wells mostly resists the temptation to blame the mathematical inevitability of our species’ growth and decline on human greed. The worst he finds to say about the markets and market capitalism – our usual stock villains – is not that they are evil, or psychopathic (or certainly no more evil or psychopathic than the other political experiments we’ve run in the past 150 years) but that they are not nearly as clever as we had hoped they might be. There is a twisted magnificence in the way we are exploiting, rather than adapting to, the End Times. (Whole Foods in the US, we are told, is now selling “GMO-free” fizzy water.)

The Paris accords of 2016 established keeping warming to just two degrees as a global goal. Only a few years ago we were hoping for a rise of just 1.5 degrees. What’s the difference? According to the IPCC, that half-degree concession spells death for about 150 million people. Without significantly improved pledges, the IPCC reckons, even instituting the Paris accords overnight (and no country has) will still see us topping 3.2 degrees of warming. At this point the Antarctic’s ice sheets will collapse, drowning Miami, Dhaka, Shanghai, Hong Kong and a hundred other cities around the world. (Not my hill, though.)

And to be clear: this isn’t what could happen. This is what is already guaranteed to happen. Greenhouse gases work on too long a timescale to avoid it. “You might hope to simply reverse climate change,” writes Wallace-Wells; “you can’t. It will outrun all of us.”

“How widespread alarm will shape our ethical impulses toward one another, and the politics that emerge from those impulses,” says Wallace-Wells, “is among the more profound questions being posed by the climate to the planet of people it envelops.”

My bet is the question will never tip into public consciousness: that, on the contrary, we’ll find ways, through tribalism, craft and mischief, to engineer what Wallace-Wells dubs “new forms of indifference”, normalising climate suffering, and exploiting novel opportunities, even as we live and more often die through times that will never be normal again.

 A History of Silence reviewed: Unlocking the world of infinitely small noises

Reading Alain Corbin’s A History of Silence (Polity Press) for The Telegraph, 3 September 2018

The Orientalist painter Eugène Fromentin adored the silence of the Sahara. “Far from oppressing you,” he wrote to a friend, “it inclines you to light thoughts.” People assume that silence, being an absence of noise, is the auditory equivalent of darkness. Fromentin was having none of it: “If I may compare aural sensations to those of sight, then silence that reigns over vast spaces is more a sort of aerial transparency, which gives greater clarity to the perceptions, unlocks the world of infinitely small noises, and reveals a range of inexpressible delights.” (26-27)

Silence invites clarity. Norwegian explorer and publisher Erling Kagge seized on this nostrum for his international bestseller Silence in the Age of Noise, published in Norway in 2016, the same year Alain Corbin’s A History of Silence was published in France.

People forget this, but Kagge’s short, smart airport read was more tough-minded than the fad it fed. In fact, Kagge’s crowd-pleasing Silence and Corbin’s erudite History make surprisingly good travelling companions.

For instance: while Corbin was combing through Fromentin’s Un été dans le Sahara of 1856, Kagge was talking to his friend the artist Marina Abramovic, whose experience of desert silence was anything but positive: “Despite the fact that everything was completely quiet around her, her head was flooded with disconnected thoughts… It seemed like an empty emptiness, while the goal is to experience a full emptiness, she says.” (115)

Abramovic’s trouble, Kagge tells us, was that she couldn’t stop thinking. She wanted a moment of Fromentinesque clarity, but her past and her future proved insuperable obstacles: the moment she tore her mind away from one, she found herself ruminating over the other.

It’s a common complaint, according to Kagge: “The present hurts, and our response is to look ceaselessly for fresh purposes that draw our attention outwards, away from ourselves.” (37)

Why should this be? The answer is explored in Corbin’s book, one of those cultural histories that come very close to being virtually egoless compendiums of quotations. Books of this sort can be a terrible mess; Corbin’s architecture, on the contrary, is as stable as it is artfully concealed. This is a temple, not a pile.

The present, properly attended to, alone and in silence, reveals time’s awful scale. When we think about the past or the future, what we’re actually doing is telling ourselves stories. It’s in the present moment, if we dare attend to it, that we glimpse the Void.

Jules Michelet, in his book La Montagne (1872), recognised that the great forces shaping human destiny are so vast as to be silent. The process of erosion, for example, “is the more successfully accomplished in silence, to reveal, one morning, a desert of hideous nakedness, where nothing shall ever again revive.” The equivalent creative forces are hardly less awful: the “silent toil of the innumerable polyps” of a coral reef, for example, creating “the future Earth; on whose surface, perhaps, Man shall hereafter reside”.

No wonder so many of us believe in God, when sitting alone in a quiet room for ten minutes confronts us with eternity. The Spanish Basque theologian Ignatius Loyola used to spend seven solid hours a day in silent prayer and came to the only possible conclusion: silence “cancels all rational and discursive activity, thereby enabling direct perception of the divine word.”

“God bestows, God trains, God accomplishes his work, and this can only be done in the silence that is established between the Creator and the creature.” (42-43)

Obvious as this conclusion may be, though, it could still be wrong. Even the prophet Isaiah complained: “Verily thou art a God that hidest thyself”.

What if God’s not there? What if sound and fury were our only bulwark against the sucking vacuum of our own meaninglessness? It’s not only the empty vessels that make the most noise: “Men with subtle minds who are quick to see every side of a question find it hard to refrain from expressing what they think,” said Eugène Delacroix. Anyway, “how is it possible to resist giving a favourable idea of one’s mind to a man who seems surprised and pleased to hear what one is saying?”

There’s a vibrancy in tumult, a civic value in conversation, and silence is by no means always golden — a subject Corbin explores ably and at length, carrying us satisfyingly far from our 2016-vintage hygge-filled comfort zone.

Silence suggests self-control, but self-control can itself be a response to oppression. Advice to courtiers to remain silent, or at least ensure that their words are more valuable than their silence, may at worst suggest a society where there is no danger in keeping quiet, but plenty of danger in speaking.

This has certainly been the case historically among peasants, immersed as they are in a style of life that has elements of real madness about it: an undercurrent of constant hate and bitterness expressed in feuding, bullying, bickering and family quarrels, the petty mentality, the self-deprecation, the superstition, the obsessive control of daily life by a strict authoritarianism. If you don’t believe me, read Zola. His peasant silences are “first and foremost a tactic” in a milieu where plans, ambitious or tragic, “were slow to come to fruition, which meant that it was essential not to show your hand.” (93)

Corbin’s history manages to anatomise silence without fetishising it. He leaves you with mixed feelings: a neat trick to pull, in a society that’s convinced it’s drowning in noise.

It’s not true. In Paris, Corbin reminds us, forges once operated on the ground floors of buildings throughout the city. Bells of churches, convents, schools and colleges only added to the cacophony. Carriages made the level of street noise even more deafening. (61) I was reminded, reading this, of how the Victorian computer pioneer Charles Babbage died, at the age of 79, at his home in Marylebone. Never mind the urinary tract infection: his last hours were spent being driven to distraction by itinerant hurdy-gurdy players pan-handling outside his window.

If anything, contemporary society suffers from too much control of its auditory environment. Passengers travelling in trains and trams observe each other in silence, a behaviour that was once considered damn rude. Pedestrians no longer like to be greeted. Among the many silences we may welcome comes one that we surely must deplore: the silence that accompanies the diminution of our civic life.

Pushing the boundaries

Rounding up some cosmological pop-sci for New Scientist, 24 March 2018

IN 1872, the physicist Ludwig Boltzmann developed a theory of gases that confirmed the second law of thermodynamics, more or less proved the existence of atoms and established the asymmetry of time. He went on to describe temperature, and how it governed chemical change. Yet in 1906, this extraordinary man killed himself.

Boltzmann is the kindly if gloomy spirit hovering over Peter Atkins’s new book, Conjuring the Universe: The origins of the laws of nature. It is a cheerful, often self-deprecating account of how most physical laws can be unpacked from virtually nothing, and how some constants (the peculiarly precise and finite speed of light, for example) are not nearly as arbitrary as they sound.

Atkins dreams of a final theory of everything to explain a more-or-less clockwork universe. But rather than wave his hands about, he prefers to clarify what can be clarified, clear his readers’ minds of any pre-existing muddles or misinterpretations, and leave them, 168 succinct pages later, with a rather charming image of him tearing his hair out over the fact that the universe did not, after all, pop out of nothing.

It is thanks to Atkins that the ideas Boltzmann pioneered, at least in essence, can be grasped by us poor schlubs. Popular science writing has always been vital to science’s development. We ignore it at our peril and we owe it to ourselves and to those chipping away at the coalface of research to hold popular accounts of their work to the highest standards.

Enter Brian Clegg. He is such a prolific writer of popular science, it is easy to forget how good he is. Icon Books is keeping him busy writing short, sweet accounts for its Hot Science series. The latest, by Clegg, is Gravitational Waves: How Einstein’s spacetime ripples reveal the secrets of the universe.

Clegg delivers an impressive double punch: he transforms a frustrating, century-long tale of disappointment into a gripping human drama, affording us a vivid glimpse into the uncanny, depersonalised and sometimes downright demoralising operations of big science. And readers still come away wishing they were physicists.

Less polished, and at times uncomfortably unctuous, Catching Stardust: Comets, asteroids and the birth of the solar system is nevertheless a promising debut from space scientist and commentator Natalie Starkey. Her description of how, from the most indirect evidence, a coherent history of our solar system was assembled, is astonishing, as are the details of the mind-bogglingly complex Rosetta mission to rendezvous with comet 67P/Churyumov-Gerasimenko – a mission in which she was directly involved.

It is possible to live one’s whole life within the realms of science and discovery. Plenty of us do. So it is always disconcerting to be reminded that longer-lasting civilisations than ours have done very well without science, or even formal logic. And who are we to say they afforded less happiness and fulfilment than our own?

Nor can we tut-tut at the way ignorant people today ride science’s coat-tails – not now antibiotics are failing and the sixth extinction is chewing its way through the food chain.

Physicists, especially, find such thinking well-nigh unbearable, and Alan Lightman speaks for them in his memoir Searching for Stars on an Island in Maine. He wants science to rule the physical realm and spirituality to rule “everything else”. Lightman is an elegant, sensitive writer, and he has written a delightful book about one man’s attempt to hold the world in his head.

But he is wrong. Human culture is so rich, diverse, engaging and significant, it is more than possible for people who don’t give a fig for science or even rational thinking to live lives that are meaningful to themselves and valuable to the rest of us.

“Consilience” was biologist E.O. Wilson’s word for the much-longed-for marriage of human enquiry. Lightman’s inadvertent achievement is to show that the task is more than just difficult, it is absurd.

Just how much does the world follow laws?


How the Zebra Got its Stripes and Other Darwinian Just So Stories by Léo Grasset
The Serengeti Rules: The quest to discover how life works and why it matters by Sean B. Carroll
Lysenko’s Ghost: Epigenetics and Russia by Loren Graham
The Great Derangement: Climate change and the unthinkable by Amitav Ghosh
reviewed for New Scientist, 15 October 2016

JUST how much does the world follow laws? The human mind, it seems, may not be the ideal toolkit with which to craft an answer. To understand the world at all, we have to predict likely events and so we have a lot invested in spotting rules, even when they are not really there.

Such demands have also shaped more specialised parts of culture. The history of the sciences is one of constant struggle between the accumulation of observations and their abstraction into natural laws. The temptation (especially for physicists) is to assume these laws are real: a bedrock underpinning the messy, observable world. Life scientists, on the other hand, can afford no such assumption. Their field is constantly on the move, a plaything of time and historical contingency. If there is a lawfulness to living things, few plants and animals seem to be aware of it.

Consider, for example, the charming “just so” stories in French biologist and YouTuber Léo Grasset’s book of short essays, How the Zebra Got its Stripes. Now and again Grasset finds order and coherence in the natural world. His cost-benefit analysis of how animal communities make decisions, contrasting “autocracy” and “democracy”, is a fine example of lawfulness in action.

But Grasset is also sharply aware of those points where the cause-and-effect logic of scientific description cannot show the whole picture. There are, for instance, four really good ways of explaining how the zebra got its stripes, and those stripes arose probably for all those reasons, along with a couple of dozen others whose mechanisms are lost to evolutionary history.

And Grasset has even more fun describing the occasions when, frankly, nature goes nuts. Take the female hyena, for example, which has to give birth through a “pseudo-penis”. As a result, 15 per cent of mothers die after their first labour and 60 per cent of cubs die at birth. If this were a “just so” story, it would be a decidedly off-colour one.

The tussle between observation and abstraction in biology has a fascinating, fraught and sometimes violent history. In Europe at the birth of the 20th century, biology was still a descriptive science. Life presented, German molecular biologist Gunther Stent observed, “a near infinitude of particulars which have to be sorted out case by case”. Purely descriptive approaches had exhausted their usefulness and new, experimental approaches were developed: genetics, cytology, protozoology, hydrobiology, endocrinology, experimental embryology – even animal psychology. And with the elucidation of underlying biological process came the illusion of control.

In 1917, even as Vladimir Lenin was preparing to seize power in Russia, the botanist Nikolai Vavilov was lecturing to his class at the Saratov Agricultural Institute, outlining the task before them as “the planned and rational utilisation of the plant resources of the terrestrial globe”.

Predicting that the young science of genetics would give the next generation the ability “to sculpt organic forms at will”, Vavilov asserted that “biological synthesis is becoming as much a reality as chemical”.

The consequences of this kind of boosterism are laid bare in Lysenko’s Ghost by the veteran historian of Soviet science Loren Graham. He reminds us what happened when the tentatively defined scientific “laws” of plant physiology were wielded as policy instruments by a desperate and resource-strapped government.

Within the Soviet Union, dogmatic views on agrobiology led to disastrous agricultural reforms, and no amount of modern, politically motivated revisionism (the especial target of Graham’s book) can make those efforts seem more rational, or their aftermath less catastrophic.

In modern times, thankfully, a naive belief in nature’s lawfulness, reflected in lazy and increasingly outmoded expressions such as “the balance of nature”, is giving way to a more nuanced, self-aware, even tragic view of the living world. The Serengeti Rules, Sean B. Carroll’s otherwise triumphant account of how physiology and ecology turned out to share some of the same mathematics, does not shy away from the fact that the “rules” he talks about are really just arguments from analogy.

Some notable conservation triumphs have flowed from the discovery that “just as there are molecular rules that regulate the numbers of different kinds of molecules and cells in the body, there are ecological rules that regulate the numbers and kinds of animals and plants in a given place”.

For example, in Gorongosa National Park, Mozambique, in 2000, there were fewer than 1000 elephants, hippos, wildebeest, waterbuck, zebras, eland, buffalo, hartebeest and sable antelopes combined. Today, with the reintroduction of key predators, there are almost 40,000 animals, including 535 elephants and 436 hippos. And several of the populations are increasing by more than 20 per cent a year.

But Carroll is understandably flummoxed when it comes to explaining how those rules might apply to us. “How can we possibly hope that 7 billion people, in more than 190 countries, rich and poor, with so many different political and religious beliefs, might begin to act in ways for the long-term good of everyone?” he asks. How indeed: humans’ capacity for cultural transmission renders every Serengeti rule moot, along with the Serengeti itself – and a “law of nature” that does not include its dominant species is not really a law at all.

Of course, it is not just the sciences that have laws: the humanities and the arts do too. In The Great Derangement, a book that began as four lectures presented at the University of Chicago last year, the novelist Amitav Ghosh considers the laws of his own practice. The vast majority of novels, he explains, are realistic. In other words, the novel arose to reflect the kind of regularised life that gave you time to read novels – a regularity achieved through the availability of reliable, cheap energy: first, coal and steam, and later, oil.

No wonder, then, that “in the literary imagination climate change was somehow akin to extraterrestrials or interplanetary travel”. Ghosh is keenly aware of and impressively well informed about climate change: in 1978, he was nearly killed in an unprecedentedly ferocious tornado that ripped through northern Delhi, leaving 30 dead and 700 injured. Yet he has never been able to work this story into his “realist” fiction. His hands are tied: he is trapped in “the grid of literary forms and conventions that came to shape the narrative imagination in precisely that period when the accumulation of carbon in the atmosphere was rewriting the destiny of the Earth”.

The exciting and frightening thing about Ghosh’s argument is how he traces the novel’s narrow compass back to popular and influential scientific ideas – ideas that championed uniform and gradual processes over cataclysms and catastrophes.

One big complaint about science – that it kills wonder – is the same criticism Ghosh levels at the novel: that it bequeaths us “a world of few surprises, fewer adventures, and no miracles at all”. Lawfulness in biology is rather like realism in fiction: it is a convention so useful that we forget that it is a convention.

But, if anthropogenic climate change and the gathering sixth mass extinction event have taught us anything, it is that the world is wilder than the laws we are used to would predict. Indeed, if the world really were in a novel – or even in a book of popular science – no one would believe it.

Beware the indeterminate momentum of the throbbing whole


Graham Harman (2nd from right) and fellow speculative materialists in 2007

 

In 1942, the Argentine writer Jorge Luis Borges cooked up an entirely fictitious “Chinese” encyclopedia entry for animals. Among its nonsensical subheadings were “Embalmed ones”, “Stray dogs”, “Those that are included in this classification” and “Those that, at a distance, resemble flies”.

Explaining why these categories make no practical sense is a useful and enjoyable intellectual exercise – so much so that in 1966 the French philosopher Michel Foucault wrote an entire book inspired by Borges’ notion. Les mots et les choses (The Order of Things) became one of the defining works of the French philosophical movement called structuralism.

How do we categorise the things we find in the world? In Immaterialism, his short and very sweet introduction to his own brand of philosophy, “object-oriented ontology”, the Cairo-based philosopher Graham Harman identifies two broad strategies. Sometimes we split things into their ingredients. (Since the Enlightenment, this has been the favoured and extremely successful strategy of most sciences.) Sometimes, however, it’s better to work in the opposite direction, defining things by their relations with other things. (This is the favoured method of historians and critics and other thinkers in the humanities.)

Why should scientists care about this second way of thinking? Often they don’t have to. Scientists are specialists. Reductionism – finding out what things are made of – is enough for them.

Naturally, there is no hard and fast rule to be made here, and some disciplines – the life sciences especially – can’t always reduce things to their components.

So there have been attempts to bring this other, “emergentist” way of thinking into the sciences. One of the most ingenious was the “new materialism” of the German entrepreneur (and Karl Marx’s sidekick) Friedrich Engels. One of Engels’s favourite targets was the Linnaean system of biological classification. Rooted in formal logic, this taxonomy divides all living things into species and orders. It offers us a huge snapshot of the living world. It is tremendously useful. It is true. But it has limits. It cannot record how one species may, over time, give rise to some other, quite different species. (Engels had great fun with the duckbilled platypus, asking where that fitted into any rigid scheme of things.) Similarly, there is no “essence” hiding behind a cloud of steam, a puddle of water, or a block of ice. There are only structures, succeeding each other in response to changes in the local conditions. The world is not a ready-made thing: it is a complex interplay of processes, all of which are ebbing and flowing, coming into being and passing away.

So far so good. Applied to science, however, Engels’s schema turns out to be hardly more than a superior species of hand-waving. Indeed, “dialectical materialism” (as it later became known) proved so unwieldy, it took very few years of application before it became a blunt weapon in the hands of Stalinist philosophers who used it to demotivate, discredit and disbar any scientific colleague whose politics they didn’t like.

Harman has learned the lessons of history well. Though he’s curious to know where his philosophy abuts scientific practice (and especially the study of evolution), he is prepared to accept that specialists know what they are doing: that rigor in a narrow field is a legitimate way of squeezing knowledge out of the world, and that a 126-page A-format paperback is probably not the place to reinvent the wheel.

What really agitates him, fills his pages, and drives him to some cracking one-liners (this is, heavens be praised, a *funny* book about philosophy) is the sheer lack of rigour to be found in his own sphere.

While pillorying scientists for treating objects as superficial compared with their tiniest pieces, philosophers in the humanities have for more than a century been leaping off the opposite cliff, treating objects “as needlessly deep or spooky hypotheses”. By claiming that an object is nothing but its relations or actions they unknowingly repeat the argument of the ancient Megarians, “who claimed that no one is a house-builder unless they are currently building a house”. Harman is sick and tired of this intellectual fashion, by which “‘becoming’ is blessed as the trump card of innovators, while ‘being’ is cursed as a sad-sack regression to the archaic philosophies of olden times”.

Above all, Harman has had it with peers and colleagues who zoom out and away from every detailed question, until the very world they’re meant to be studying resembles “the indeterminate momentum of the throbbing whole” (and this is not a joke — this is the sincerely meant position statement of another philosopher, a friendly acquaintance of his, Jane Bennett).

So what’s Harman’s solution? Basically, he wants to be able to talk unapologetically about objects. He explores a single example: the history of the Dutch East India Company. Without toppling into the “great men” view of history – according to which a world of inanimate props is pushed about by a few arbitrarily privileged human agents – he is out to show that the company was an actual *thing*, a more-or-less stable phenomenon ripe for investigation, and not simply a rag-bag collection of “human practices”.

Does his philosophy describe the Dutch East India Company rigorously enough for his work to qualify as real knowledge? I think so. In fact I think he succeeds to a degree which will surprise, reassure and entertain the scientifically minded.

Be in no doubt: Harman is no turncoat. He does not want the humanities to be “more scientific”. He wants them to be less scientific, but no less rigorous, able to handle, with rigour and versatility, the vast and teeming world of things science cannot handle: “Hillary Clinton, the city of Odessa, Tolkien’s imaginary Rivendell… a severed limb, a mixed herd of zebras and wildebeest, the non-existent 2016 Chicago Summer Olympics, and the constellation of Scorpio”.

Immaterialism
Graham Harman
Polity, £9.99

The tomorrow person


You Belong to the Universe: Buckminster Fuller and the future by Jonathon Keats
reviewed for New Scientist, 11 June 2016.

 

IN 1927 the suicidal manager of a building materials company, Richard Buckminster (“Bucky”) Fuller, stood by the shores of Lake Michigan and decided he might as well live. A stern voice inside him intimated that his life after all had a purpose, “which could be fulfilled only by sharing his mind with the world”.

And share it he did, tirelessly for over half a century, with houses hung from masts, cars with inflatable wings, a brilliant and never-bettered equal-area map of the world, and concepts for massive open-access distance learning, domed cities and a new kind of playful, collaborative politics. The tsunami that Fuller’s wing flap set in motion is even now rolling over us, improving our future through degree shows, galleries, museums and (now and again) in the real world.

Indeed, Fuller’s “comprehensive anticipatory design scientists” are ten-a-penny these days. Until last year, they were being churned out like sausages by the design interactions department at the Royal College of Art, London. Futurological events dominate the agendas of venues across New York, from the Institute for Public Knowledge to the International Center of Photography. “Science Galleries”, too, are popping up like mushrooms after a spring rain, from London to Bangalore.

In You Belong to the Universe, Jonathon Keats, himself a critic, artist and self-styled “experimental philosopher”, looks hard into the mirror to find what of his difficult and sometimes pantaloonish hero may still be traced in the lineaments of your oh-so-modern “design futurist”.

Be in no doubt: Fuller deserves his visionary reputation. He grasped in his bones, as few have since, the dynamism of the universe. At the age of 21, Keats writes, “Bucky determined that the universe had no objects. Geometry described forces.”

A child of the aviation era, he used materials sparingly, focusing entirely on their tensile properties and on the way they stood up to wind and weather. He called this approach “doing more with less”. His light and sturdy geodesic dome became an icon of US ingenuity. He built one wherever his country sought influence, from India to Turkey to Japan.

Chapter by chapter, Keats asks how the future has served Fuller’s ideas on city planning, transport, architecture, education. It’s a risky scheme, because it invites you to set Fuller’s visions up simply to knock them down again with the big stick of hindsight. But Keats is far too canny for that trap. He puts his subject into context, works hard to establish what would and would not be reasonable for him to know and imagine, and explains why the history of built and manufactured things turned out the way it has, sometimes fulfilling, but more often thwarting, Fuller’s vision.

This ought to be a profoundly wrong-headed book, judging one man’s ideas against the entire recent history of Spaceship Earth (another of Fuller’s provocations). But You Belong to the Universe says more about Fuller and his future in a few pages than some whole biographies, and renews one’s interest – if not faith – in all those graduate design shows.

The past is like Baltimore: there is no there there


Longing for the Bomb: Oak Ridge and atomic nostalgia, Lindsey A. Freeman (University of North Carolina Press)
Seeing Green: The use and abuse of American environmental images, Finis Dunaway (University of Chicago Press)
reviewed for New Scientist, 4 April 2015

THE past can’t be re-experienced. It leaves only traces and artefacts, which we constantly shuffle, sort, discard and recover, in an obsessive effort to recall where we have come from. This is as true of societies as it is of individuals.

Lindsey Freeman, an assistant professor of sociology at the State University of New York, Buffalo, is the grandchild of first-generation residents of Oak Ridge, Tennessee. Once a “secret city”, where uranium was enriched for the US’s Manhattan Project, Oak Ridge opened its gates to the world in 1949 as America’s first “Atomic City”: a post-war utopia of universal healthcare, zero unemployment and state-owned housing.

In Longing for the Bomb, Freeman describes how residents of Oak Ridge dreamed up an identity for themselves as a new breed of American pioneer. She visits Oak Ridge’s Y-12 National Security Complex (an “American Uranium Center of Excellence”) during its Secret City Festival, boards its Scenic Excursion Train and cannot decide if converting a uranium processing site into a wildlife reserve is good or bad.

It would have been easy to turn the Oak Ridge story into something sinister, but Freeman is too generous a writer for that. Oak Ridge owes its existence to the geopolitical business of mass destruction, but its people have created stories that keep them a proud and happy community. Local trumps global, every time.

This is good for the founders of communities, but a problem for those who want to wake up those communities to the need for change. As historian Finis Dunaway puts it in Seeing Green, his history of environmental imagery, “even as media images have made the environmental crisis visible to a mass public, they often have masked systemic causes and ignored structural inequalities”.

Reading this, I was reminded of a talk by author Andrew Blackwell, where he told us just how hard it is to take authentic pictures of some of the world’s most polluted places. Systemic problems do not photograph well. Some manipulation is unavoidable.

Dunaway knows this. Three months after the nuclear accident at Three Mile Island in 1979, the worst radioactive spill in US history occurred near Church Rock, New Mexico, on lands held by the Navajo nation. It took a week for the event to be reported, once, on a single news channel.

The remoteness of the site and a lack of national interest in Native American affairs might explain the silence but, as Dunaway points out, the absence of an iconic and photogenic cooling tower can’t have helped.

The iconic environmental images Dunaway discusses are essentially advertisements, and adverts address individuals. They assume that radical social change will catch on like any other consumer good. For example, the film An Inconvenient Truth, chock full of eye-catching images, is the acme of the sincere advertiser’s art, and its maker, former US vice-president and environmental campaigner Al Gore, is a vocal proponent of carbon offset and other market initiatives.

Dunaway, though, argues that you cannot market radical social action. For him, the moral seems to be that sometimes, you just have to give the order – as Franklin Roosevelt did when he made Oak Ridge a city.

A comic novel about the death of God


F by Daniel Kehlmann reviewed for the Guardian

It cannot be an easy thing to write a comic novel about the death of God. Still, the German novelist Daniel Kehlmann may just have pulled it off. “F” is the protagonist of a book within a book, the debut novel of Arthur Friedland, a rather disorganised buffoon who never had any success as a writer until an encounter with a hypnotist gave his life its chilly purpose: “This is an order, and you’re going to follow it because you want to follow it, and you want to because I’m ordering you, and I’m ordering you because you want me to give the order. Starting today, you’re going to make an effort. No matter what it costs. Repeat!”

Arthur’s My Name Is No One is so exuberantly nihilistic that its readers are throwing themselves off TV transmission towers. As Kehlmann says: “The sentences are well constructed, the narrative has a powerful flow, the reader would be enjoying the text were it not for a persistent feeling of somehow being mocked.”

If Kehlmann played this intertextual game to the hilt – if F itself were as unforgiving as Arthur’s novel – then we would be looking at a less important book, as well as a less enjoyable one: some Johnny-come-lately contribution to the French nouveau roman. The spirit of Alain Robbe-Grillet, the movement’s greatest exponent, illuminates the scene in which Arthur takes his granddaughter to an art museum to study a picture by her missing uncle: “She stepped even closer, and immediately everything dissolved. There were no more people any more, no more little flags, no anchor, no bent watch. There were just some tiny bright patches of colour above the main deck. The white of the naked canvas shone through in several places, and even the ship was a mere assemblage of lines and dots. Where had it all gone?”

There are many such moments, they are all as beautifully judged as this one, and they are not the point. The point of F is not its humour (though Kehlmann, like Robbe-Grillet, can be very funny indeed), but its generosity. Arthur’s three sons, in their turn, make superhuman efforts to give their lives significance, and these efforts tangle and trip over each other to generate the comic business of the book. The eldest, Martin, a Rubik’s Cube expert, embraces the priesthood despite his lack of faith. Of Arthur’s two sons by his second marriage, Eric enters the glass-and-steel world of high finance to help control his fear of cramped spaces. His twin brother, Ivan, is a would-be painter turned art dealer, and author of Mediocrity As an Aesthetic Phenomenon.

“When I was young, vain, and lacking all experience,” he recalls, “I thought the art world was corrupt. Today I know that’s not true. The art world is full of lovable people, full of enthusiasts, full of longing and truth. It is art itself as a sacred principle that unfortunately doesn’t exist.”

Ivan, like all the others, lives in a nihilistic universe, but he is not himself nihilistic. It worries him that the world cannot live up to his expectations and those of the people he admires. These people include his lover Heinrich Eulenboeck, an artist with a true calling but only mediocre ability. What kind of world is it that plays such a trick on a person? “How do you live with that, why do you keep on going?”

The answer seems to be love. In a godless world, love counts for a great deal. And failing love, ordinary human decency goes a long way. Since Kurt Vonnegut died, there has really been no one to tell us this; the reminder is welcome.

F is again translated by Carol Brown Janeway, but it is a better book than Kehlmann’s last, Fame, whose narrative gymnastics caused characters to lose or swap their identities, and even to topple into their own or other people’s fictions. Fame was knowing, driven by its own absurdity. F is about the world’s absurdity, and this makes a huge difference morally. The world is big, and ultimately unknowable, and life is short and memory pitifully limited.

In the absence of God, Kehlmann’s protagonists hold themselves to account, and they give themselves hell. Sometimes, they give each other hell. “Something terrible has happened and the people seem to be wanting to cover it up. If you were to look a little longer, hunt a little better for clues, you’d be able to figure it out, or at least you think so. But if you step back, the details disappear and all that remains is a colourful street scene: bright, cheerful, full of life.”

It is very hard to express how funny this all is. But laughter matters most in the dark.