The world’s biggest money machine

Reading Who Owns This Sentence by David Bellos and Alexandre Montagu for the Telegraph, 3 January 2024

Is there such a thing as intellectual property? Once you’ve had an idea, and disseminated it through manuscript or sculpture, performance or song, is it still yours?

The ancients thought so. Long before copyright was ever dreamed of, honour codes policed the use and reuse of the work of poets and playwrights, and throughout the history of the arts, proven acts of plagiarism have brought down reputational damage sufficient to put careless and malign scribblers and daubers out of business.

At the same time, it has generally been acceptable to repurpose a work, for satire or even for further development. Pamela had many more adventures outside of Samuel Richardson’s novel than within it, though (significantly) it is Richardson’s original novel that people still buy.

No one in the history of the world has ever argued that artists should not be remunerated. Nor has the difference between an ingenious repurposing of material and its fraudulent copy ever been particularly hard to spot. And though there will always be edge cases, that, surely, is where the law steps in, codifying natural justice in a way useful to sincere litigants. So you would think.

Alexandre Montagu, an intellectual property lawyer, and David Bellos, a literary academic, think otherwise. Their forensic, fascinating account of copyright reveals a highly contingent history — full of ambiguity and verbal sophistry, as meanings shift and interests evolve.

The idea of copyright grew out of state control of the media, which itself arose in response to the advent of cheap, unregulated printing — a trade that had fostered the creation and circulation of “scandalous, false and politically dangerous trash”. (That social media have dragged us back to the 17th century is a point that hardly needs rehearsing.)

In England, the Licensing of the Press Act of 1662 gave the Stationers’ Company an exclusive right to publish books. Wisely, such a draconian measure expired after a set term, and in 1710 the Statute of Anne established a rather more author-friendly arrangement. Authors would “own” their own work for 28 years — they would possess it, and they would have to answer for it. They could also assign their rights to others to see that this work was disseminated. Publishers, being publishers, assumed such rights then belonged to them in perpetuity, making what Daniel Defoe called a “miserable Havock” of authors’ rights — a havoc that persists to this day.

True copyright was introduced in 1774, and the term over which an author has rights over their own work has been extended again and again; in most territories, it now covers the author’s lifetime plus seventy years. The definition of an “author” has been widened, too, to include sculptors, song-writers, furniture makers, software engineers, calico printers — and corporations.

Copyright is like the cute baby chimp you bought at the fair: it grows into an adult chimpanzee that rips your kid’s arms off. Recent decades, the authors claim, “have turned copyright into a legal machine that restores to modern owners of content the rights and powers that eighteenth-century publishers lost, and grants them wider rights than their predecessors ever thought of asking for.”

And don’t imagine for a second that these owners are artists. Bellos and Montagu trace all the many ways contemporary creatives and their families are forced into surrendering their rights to an industry that now controls between 8 and 12 per cent of the US economy and is, the authors say, “a major engine of inequality in the twenty-first century”.

Few predicted that 18th-century copyright, there to protect the interests of widows and orphans, would evolve into an industry that in 1996 seriously tried to charge Girl Scout camp organisers for singing “God Bless America” around the campfire, and has actually managed to assert in court that acts of singular human genius are responsible for everyday items ranging from sporks to inflatable banana costumes.

Modern copyright’s ability to sequester and exploit creations of every kind for three or four generations is, the authors say, the engine driving “the biggest money machine the world has seen”, and one of the more disturbing aspects of this development is the lack of accompanying public interest and engagement.

Bellos and Montagu have extracted an enormous amount of fun out of their subject, and have sauced their sardonic and playful prose with buckets full of meticulously argued bile. What’s not to love about a work of legal scholarship that dreams up “a song-and-dance number based on a film scene in Gone with the Wind performed in the Palace of Culture in Petropavlovsk” and how it “might well infringe The Rights Of The American Trust Bank Company”?

This is not a book about “information wanting to be free” or any such claptrap. It is about a whole legal field failing in its mandate, and about how easily the current dispensation around intellectual property could come crumbling down. It is also about how commonly held ideas of propriety and justice might build something better in place of our current ideas of “I.P.”. Bellos and Montagu’s challenge to intellectual property law is by turns sobering and cheering: doing better than this will hardly be rocket science.

The tools at our disposal

Reading Index, A History of the, by Dennis Duncan, for New Scientist, 15 September 2021

Every once in a while a book comes along to remind us that the internet isn’t new. Authors like Siegfried Zielinski and Jussi Parikka write handsomely about their adventures in “media archaeology”, revealing all kinds of arcane delights: the eighteenth-century electrical tele-writing machine of Joseph Mazzolari; Melvil Dewey’s Decimal System of book classification of 1873.

It’s a charming business, to discover the past in this way, but it does have its risks. It’s all too easy to fall into complacency, congratulating the thinkers of past ages for having caught a whiff, a trace, a spark, of our oh-so-shiny present perfection. Paul Otlet builds a media-agnostic City of Knowledge in Brussels in 1919? Lewis Fry Richardson conceives a mathematical Weather Forecasting Factory in 1922? Well, I never!

So it’s always welcome when an academic writer — in this case the London-based English lecturer Dennis Duncan — takes the time and trouble to tell this story straight, beginning at the beginning, ending at the end. Index, A History of the is his story of textual search, told through charming portrayals of some of the most sophisticated minds of their era, from monks and scholars shivering among the cloisters of 13th-century Europe to server-farm administrators sweltering behind the glass walls of Silicon Valley.

It’s about the unspoken and always collegial rivalry between two kinds of search: the subject index (a humanistic exercise, largely un-automatable, requiring close reading, independent knowledge, imagination, and even wit) and the concordance (an eminently automatable listing of the words in a text and their locations).

Hugh of St Cher is the father of the concordance: his list of every word in the Bible and its location, begun in 1230, was a miracle of miniaturisation, smaller than a modern paperback. It and its successors were useful, too, for clerics who knew their Bibles almost by heart.

But the subject index is a superior guide when the content is unfamiliar, and it’s Robert Grosseteste (born in Suffolk around 1175) whom we should thank for turning the medieval distinctio (an associative list of concepts, handy for sermon-builders) into something like a modern back-of-book index.

Reaching the present day, we find that with the arrival of digital search, the concordance is once again ascendant (the search function, Ctrl-F, whatever you want to call it, is an automated concordance), while the subject index, and its poorly recompensed makers, are struggling to keep up in an age of reflowable screen text. (Sewing embedded active indexes through a digital text is an excellent idea which, exasperatingly, has yet to catch on.)
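To make the distinction concrete, here is a minimal sketch of an automated concordance in Python: it simply maps each word of a text to the positions at which it occurs, which is essentially all that a Ctrl-F style search does. The sample sentence and the crude tokenisation rule are my own illustrative assumptions, not anything drawn from Duncan’s book.

```python
# A minimal concordance: map every word in a text to the positions
# (here, word offsets) at which it occurs. The sample text and the
# tokenisation rule are illustrative assumptions.
from collections import defaultdict
import re

def build_concordance(text):
    """Return a dict mapping each word to a list of its positions."""
    concordance = defaultdict(list)
    for position, word in enumerate(re.findall(r"[a-z']+", text.lower())):
        concordance[word].append(position)
    return concordance

sample = "In the beginning was the Word, and the Word was with God."
print(build_concordance(sample)["word"])   # -> [5, 8]
```

A subject index, by contrast, cannot be generated this mechanically: deciding what a passage is “about” takes the close reading, judgement and wit Duncan describes.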

Running under this story is a deeper debate, between people who want to access their information quickly, and people (especially authors) who want people to read books from beginning to end.

This argument about how to read has been raging literally for millennia, and with good reason. There is clear sense in Socrates’ argument against reading itself, as recorded in Plato’s Phaedrus (370 BCE): “You have invented an elixir not of memory, but of reminding,” his mythical King Thamus complains. Plato knew a thing or two about the psychology of reading, too: people who just look up what they need “are for the most part ignorant,” says Thamus, “and hard to get along with, since they are not wise, but only appear wise.”

Anyone who spends too many hours a day on social media will recognise that portrait — if they have not already come to resemble it.

Duncan’s arbitration of this argument is a wry one. Scholarship, rather than being timeless and immutable, “is shifting and contingent,” he says, and the questions we ask of our texts “have a lot to do with the tools at our disposal.”

Russian enlightenment

Attending Russia’s top non-fiction awards for the TLS, 11 December 2019

Founded in 2008, the Enlightener awards are modest by Western standards. The Russian prize is awarded to writers of non-fiction, and each winner receives 700,000 rubles – just over £8,500. This year’s ceremony took place last month at Moscow’s School of Modern Drama, and its winners included Pyotr Talantov for his book exploring the distinction between modern medicine and its magical antecedents, and Elena Osokina for a work about the state stores that sold food and goods at inflated prices in exchange for foreign currency, gold, silver and diamonds. But the organizer’s efforts also extend to domestic and foreign lecture programmes, festivals and competitions. And at this year’s ceremony a crew from TV Rain (or Dozhd, an independent channel) was present, as journalists and critics mingled with researchers in medicine and physics, who had come to show support for the Zimin Foundation, which is behind the prizes.

The Zimin Foundation is one of those young–old organizations whose complex origin story reflects the Russian state’s relationship with its intelligentsia. It sprang up to replace the celebrated and influential Dynasty Foundation, whose work was stymied by legal controversy in 2015. Dynasty had been paying stipends to young biologists, physicists and mathematicians: sums just enough that jobbing scientists could afford Moscow rents. The scale of the effort grabbed headlines. Its plan for 2015 – the year it fell foul of the Russian government – was going to cost it 435 million rubles: around £5.5 million.

The Foundation’s money came from Dmitry Zimin’s sale, in 2001, of his controlling stake in VimpelCom, Russia’s second-largest telecoms company. Raised on non-fiction and popular science, Zimin decided to use the money to support young researchers. (“It would be misleading to claim that I’m driven by some noble desire to educate humankind”, he remarked in a 2013 interview. “It’s just that I find it exciting.”)

As a child, Zimin had sought escape in the Utopian promises of science. And no wonder: when he was two, his father was killed in a prison camp near Novosibirsk. A paternal uncle was shot three years later, in 1938. He remembers his mother arguing for days with neighbours in their communal apartment about who was going to wash the floors, or where to store luggage. It was so crowded that when his mother remarried, Dmitry barely noticed. In 1947, Eric Ashby, the Australian Scientific Attaché to the USSR, claimed “it can be said without fear of contradiction that nowhere else in the world, not even in America, is there such a widespread interest in science among the common people”. “Science is kept before the people through newspapers, books, lectures, films, exhibitions in parks and museums, and through frequent public festivals in honour of scientists and their discoveries. There is even an annual ‘olympiad’ of physics for Moscow schoolchildren.” Dmitry Zimin was firmly of this generation.

Then there were books, the “Scientific Imaginative Literature” whose authors had a section all of their own at the Praesidium of the Union of Soviet Writers. Romances about radio. Thrillers about industrial espionage. Stirring adventure stories about hydrographic survey missions to the Arctic. The best of these science writers won lasting reputations in the West. In 1921 Alexander Oparin had the bold new idea that life resulted from non-living processes; The Origin of Life came out in English translation in New York in 1938. Alexander Luria’s classic neuropsychological case study The Mind of a Mnemonist described the strange world of a client of his, Solomon Shereshevsky, a man with a memory so prodigious it ruined his life. An English translation first appeared in 1968 and is still in print.

By 2013 Zimin, at the age of eighty, was established as one of the world’s foremost philanthropists, a Carnegie Trust medalist like Rockefeller and the Gateses, George Soros and Michael Bloomberg. But that is a problem in a country where the leaders fear successful businesspeople. In May 2015, just two months after Russia’s minister of education and science, Dmitry Livanov, presented Zimin with a state award for services to science, the Dynasty Foundation was declared a “foreign agent”. “So-called foreign funds work in schools, networks move about schools in Russia for many years under the cover of supporting talented youth”, complained Vladimir Putin, in a speech in June 2015. “Actually they are just sucking them up like a vacuum cleaner.” Never mind that Dynasty’s whole point was to encourage homegrown talent to return. (According to the Association of Russian-Speaking Scientists, around 100,000 Russian-speaking researchers work outside the country.)

Dynasty was required to put a label on its publications and other materials to the effect that they received foreign funding. To lie, in other words. “Certainly, I will not spend my own money acting under the trademark of some unknown foreign state”, Zimin told the news agency Interfax on May 26. “I will stop funding Dynasty.” But instead of stopping his funding altogether, Zimin founded a new foundation, which took over Dynasty’s programmes, including the Enlighteners. Constituted to operate internationally, it is a different sort of beast: it does not limit itself to Russia. And on the Monday following this year’s Enlightener awards it announced a plan to establish new university laboratories around the world. The foundation already has scientific projects up and running in New York, Tel Aviv and Cyprus, and cultural projects at Tartu University in Estonia and in London, where it supports Polity Press’s Russian translation programme.

In Russia, meanwhile, history continues to repeat itself. In July 2019 the Science and Education Ministry sent a list of what it later called “recommendations” to the institutions it controls. The ministry must be notified in detail of any planned meetings with foreigners, and given the names of those involved. At least two Russian researchers must be present at any meeting with foreigners. Contact with foreigners outside work hours is only allowed with a supervisor’s permission. Details of any after-hours contact must be summarized, along with copies of the participants’ passports. This doesn’t just echo the Soviet limits on international communication. It copies them, point by point.

In Soviet times, of course, many scientists and engineers lived in golden cages, enjoying unprecedented social status. But with the Soviet collapse in 1991 came a readjustment in political values that handed the industrial sector to speculators, while leaving experts and technicians without tenure, without prospects; above all, without salaries.

The wheel will keep turning, of course. In 2018 Putin promised that science and innovation were now his top priorities. And things are improving: research and development now receives 1 per cent of the country’s GDP. But Russia has a long way to go to recover its scientific standing, and science does poorly in a politically isolated country. The Enlighteners – Russia’s only major award for non-fiction – are as much an attempt to create a civic space for science as they are a celebration of a genre that has powered Russian dreaming for over a hundred years.

“A wonderful moral substitute for war”

Reading Oliver Morton’s The Moon and Robert Stone and Alan Andres’s Chasing the Moon for The Telegraph, 18 May 2019

I have Arthur to thank for my earliest memory: being woken and carried into the living room on 20 July 1969 to see Neil Armstrong set foot on the moon.

Arthur is a satellite dish, part of the Goonhilly Satellite Earth Station in Cornwall. It carried the first ever transatlantic TV pictures from the USA to Europe. And now, in a fit of nostalgia, I am trying to build a cardboard model of the thing. The anniversary kit I bought comes with a credit-card-sized Raspberry Pi computer that will cause a little red light to blink at the centre of the dish every time the International Space Station flies overhead.
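For the curious, something like the kit’s trick can be sketched in a few lines of Python. This is only a guess at how such a gadget might work, assuming the public Open Notify ISS-position feed, an LED wired to GPIO pin 17, and a crude ten-degree notion of “overhead”; none of these details comes from the kit itself.

```python
# A speculative sketch of an ISS-overhead indicator light.
# Assumptions: the Open Notify position feed, an LED on GPIO pin 17,
# and "overhead" meaning the ground track is within ~10 degrees of Goonhilly.
import time
import requests
import RPi.GPIO as GPIO

LED_PIN = 17                         # assumed wiring
GOONHILLY = (50.05, -5.18)           # approximate latitude, longitude

GPIO.setmode(GPIO.BCM)
GPIO.setup(LED_PIN, GPIO.OUT)

def iss_roughly_overhead(threshold=10.0):
    """True if the ISS ground track is within `threshold` degrees of Goonhilly."""
    data = requests.get("http://api.open-notify.org/iss-now.json", timeout=10).json()
    lat = float(data["iss_position"]["latitude"])
    lon = float(data["iss_position"]["longitude"])
    return abs(lat - GOONHILLY[0]) < threshold and abs(lon - GOONHILLY[1]) < threshold

try:
    while True:
        GPIO.output(LED_PIN, GPIO.HIGH if iss_roughly_overhead() else GPIO.LOW)
        time.sleep(30)               # poll every half-minute
finally:
    GPIO.cleanup()
```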

The geosynchronous-satellite network that Arthur C. Clarke envisioned in 1945 came into being at the same time as men landed on the Moon. Intelsat III F-3 was moved into position over the Indian Ocean a few days before Apollo 11’s launch, completing the world’s first geostationary-satellite network. The Space Race has bequeathed us a world steeped in fractured televisual reflections of itself.

Of Apollo itself, though, what actually remains? The Columbia capsule is touring the United States: it’s at Seattle’s Museum of Flight for this year’s fiftieth anniversary. And Apollo’s Mission Control Center in Houston is getting a makeover, its flight control consoles refurbished, its trash cans, book cases, ashtrays and orange polyester seat cushions all restored.

On the Moon there are some flags; some experiments, mostly expired; an abandoned car.

In space, where it matters, there’s nothing. The intention had been to build moon-going craft in orbit. This would have involved building a space station first. In the end, spooked by a spate of Soviet launches, NASA decided to cut to the chase, sending two small spacecraft up on a single rocket. One got three astronauts to the moon. The other, a tiny landing bug (standing room only), dropped two of them onto the lunar surface and puffed them back up into lunar orbit, where they rejoined the command module and headed home. It was an audacious, dangerous and triumphant mission — but it left nothing useful or reusable behind.

In The Moon: A history for the future, science writer Oliver Morton observes that without that peculiar lunar orbital rendezvous plan, Apollo would at least have left some lasting infrastructure in orbit to pique someone’s ambition. As it was, “Every Apollo mission would be a single shot. Once they were over, it would be in terms of hardware — even, to a degree, in terms of expertise — as if they had never happened.”

Morton and I belong to the generation sometimes dubbed Apollo’s orphans. We grew up (rightly) dazzled by Apollo’s achievement. It left us, however, with the unshakable (and wrong) belief that our enthusiasm was common, something to do with what we were taught to call humanity’s “outward urge”. The refrain was constant: how in people there was this inborn desire to leave their familiar surroundings and explore strange new worlds.

Nonsense. Over a century elapsed between Columbus’s initial voyage and the first permanent English settlements. One of the more surprising findings of recent research into the human genome is that, left to their own devices, people hardly move more than a few weeks’ walking distance from where they were born.

This urge, that felt so visceral, so essential to one’s idea of oneself: how could it possibly turn out to be the psychic artefact of a passing political moment?

Documentary makers Robert Stone and Alan Andres answer that particular question in Chasing the Moon, a tie-in to their forthcoming series on PBS. It’s a comprehensive account of the Apollo project, and sends down deep roots: to the cosmist speculations of fin-de-siècle Russia, the individualist eccentricities of Germany’s Verein für Raumschiffahrt (Space Travel Society), and the deceptively chummy brilliance of the British Interplanetary Society, who used to meet in the pub.

The strength of Chasing the Moon lies not in any startling new information it divulges (that boat sailed long ago) but in the connections it makes, and the perspectives it brings to bear. It is surprising to find the New York Times declaring, shortly after the Bay of Pigs fiasco, that Kennedy isn’t nearly as interested in building a space programme as he should be. (“So far, apparently, no one has been able to persuade President Kennedy of the tremendous political, psychological, and prestige importance, entirely apart from the scientific and military results, of an impressive space achievement.”) And it is worthwhile to be reminded that, less than a month after his big announcement, Kennedy was trying to persuade Khrushchev to collaborate on the Apollo project, and that he approached the Soviets with the idea a second time, just days before his assassination in Dallas.

For Kennedy, Apollo was a strategic project, “a wonderful moral substitute for war” (to slightly misapply Ray Bradbury’s phrase), and all to do with manned missions. NASA administrator James Webb, on the other hand, was a true believer. He could see no end to the good that big, organised government projects could achieve by way of education and science and civil development. In his modesty and dedication, Webb resembled no-one so much as the first tranche of bureaucrat-scientists in the Soviet Union. He never featured on a single magazine cover, and during his entire tenure he attended only one piloted launch from Cape Kennedy. (“I had a job to do in Washington,” he explained.)

The two men worked well enough together, their priorities dovetailing neatly in the role NASA took in promoting the Civil Rights Act and the Voting Rights Act and the government’s equal opportunities program. (NASA’s Saturn V designer, the former Nazi rocket scientist Wernher von Braun, became an unlikely and very active campaigner, the New York Times naming him “one of the most outspoken spokesmen for racial moderation in the South.”) But progress was achingly slow.

At its height, the Apollo programme employed around two per cent of the US workforce and swallowed four per cent of its GDP. It was never going to be agile enough, or quotidian enough, to achieve much in the area of effecting political change. There were genuine attempts to recruit and train a black pilot for the astronaut programme. But comedian Dick Gregory had the measure of this effort: “A lot of people was happy that they had the first Negro astronaut. Well, I’ll be honest with you, not myself. I was kind of hoping we’d get a Negro airline pilot first.”

The big social change the Apollo program did usher in was television. (Did you know that failing to broadcast the colour transmissions from Apollo 11 proved so embarrassing to the apartheid government in South Africa that they afterwards created a national television service?)

But the moon has always been a darling of the film business. Never mind Georges Méliès’s A Trip to the Moon. How about Fritz Lang ordering a real rocket launch for the premiere of Frau im Mond? This was the film that followed Metropolis, and Lang roped in no less a physicist than Hermann Oberth to build it for him. When his 1.8-metre-tall liquid-propellant rocket came to nought, Oberth set about building one eleven metres tall, powered by liquid oxygen. They were going to launch it from the roof of the cinema. Luckily they ran out of money.

The Verein für Raumschiffahrt was founded by men who had acted as scientific consultants on Frau im Mond. Von Braun became one of their number, before he was whisked away by the Nazis to build rockets for the war effort. Without von Braun, the VfR grew nuttier by the year. Oberth, who worked for a time in the US after the war, went the same way, his whole conversation swallowed by UFOs and extraterrestrials and glimpses of Atlantis. When he went back to Germany, no-one was very sorry to see him go.

What is it about dreaming of new worlds that encourages the loner in us, the mooncalf, the cave-dweller, wedded to asceticism, always shying from the light?

After the first Moon landing, the philosopher (and sometime Nazi supporter) Martin Heidegger said in interview, “I at any rate was frightened when I saw pictures coming from the moon to the earth… The uprooting of man has already taken place. The only thing we have left is purely technological relationships. This is no longer the earth on which man lives.”

Heidegger’s worries need a little unpacking, and for that we turn to Morton’s cool, melancholy The Moon: A History for the Future. Where Stone and Andres collate and interpret, Morton contemplates and introspects. Stone and Andres are no stylists. Morton’s flights of informed fancy include a geological formation story for the moon that Lars von Trier’s film Melancholia cannot rival for spectacle and sentiment.

Stone and Andres stand with Walter Cronkite, whose puzzled response to young people’s opposition to Apollo — “How can anybody turn off from a world like this?” — stands as an epitaph for Apollo’s orphans everywhere. Morton, by contrast, does understand why it’s proved so easy for us to switch off from the Moon. At any rate he has some good ideas.

Gertrude Stein, never a fan of Oakland, once wrote of the place, “There is no there there.” If Morton’s right she should have tried the Moon, a place whose details “mostly make no sense.”

“The landscape,” Morton explains, “may have features that move one into another, slopes that become plains, ridges that roll back, but they do not have stories in the way a river’s valley does. It is, after all, just the work of impacts. The Moon’s timescape has no flow; just punctuation.”

The Moon is Heidegger’s nightmare realised. It can never be a world of experience. It can only be a physical environment to be coped with technologically. It’s dumb, without a story of its own to tell, so much “in need of something but incapable of anything”, in Morton’s telling phrase, that you can’t even really say that it’s dead.

So why did we go there, when we already knew that it was, in the words of US columnist Milton Mayer, a “pulverised rubble… like Dresden in May or Hiroshima in August”?

Apollo was the US’s biggest, brashest entry in its heart-stoppingly exciting – and terrifying – political and technological competition with the Soviet Union. This is the matter of Stone and Andres’s Chasing the Moon, as full a history as one could wish for, clear-headed about the era and respectful of the extraordinary efforts and qualities of the people involved.

But while Morton is no less moved by Apollo’s human adventure, we turn to his book for a cooler and more distant view. Through Morton’s eyes we begin to see, not only what the moon actually looks like (meaningless, flat, gentle, a South Downs gone horribly wrong) but why it conjures so much disbelief in those who haven’t been there.

A year after the first landing the novelist Norman Mailer joked: “In another couple of years there will be people arguing in bars about whether anyone even went to the Moon.” He was right. Claims that the moon landings were faked arose the moment the Saturn Vs stopped flying in 1972, and no wonder. In a deep and tragic sense, Apollo was fake: it didn’t deliver the world it had promised.

And let’s be clear here: the world it promised would have been wonderful. Never mind the technology: that was never the core point. What really mattered was that at the height of the Vietnam war, we seemed at last to have found that wonderful moral substitute for war. “All of the universe doesn’t care if we exist or not,” Ray Bradbury wrote, “but we care if we exist… This is the proper war to fight.”

Why has space exploration not united the world around itself? It’s easy to blame ourselves and our lack of vision. “It’s unfortunate,” Lyndon Johnson once remarked to the astronaut Wally Schirra, “but the way the American people are, now that they have developed all of this capability, instead of taking advantage of it, they’ll probably just piss it all away…” This is the mordant lesson of Stone and Andres’s otherwise uplifting Chasing the Moon.

Oliver Morton’s The Moon suggests a darker possibility: that the fault lies with the Moon itself, and, by implication, with everything that lies beyond our little home.

Morton’s Moon is a place defined by absences, gaps, and silence. He makes a poetry of it, for a while; he toys with thoughts of future settlement; he explores the commercial possibilities. In the end, though, what can this uneventful satellite of ours ever possibly be, but what it is: “just dry rocks jumbled”?

The three-dimensional page

Visiting Thinking 3D: Leonardo to the present at Oxford’s Weston Library for the Financial Times, 20 March 2019

Exhibitions hitch themselves to the 500th anniversary of Leonardo da Vinci’s death at their peril. How do you do justice to a man whose life’s work provides the soundtrack to your entire culture? Leonardo dabbled his way into every corner of intellectual endeavour, and carved out several tasty new corners into the bargain. For heaven’s sake, he dreamt up a glass vessel to demonstrate the dynamics of fluid flow in the aortic valve of the human heart: modern confirmation that he was right (did you doubt it?) had to wait for the cardiologist Robin Choudhury and a paper written in 2014.

Daryl Green and Laura Moretti, curators of Thinking 3D at Oxford’s Weston Library, are wise to park this particular story at the far end of their delicate, nuanced spiderweb of an exhibition, which explores how artists and scientists, from Leonardo to now, have learned to convey three-dimensional objects on the page.

Indeed they do a very good job of keeping You Know Who contained. This is a show made up of books, mostly, and Leonardo came too soon to take full advantage of print. He was, anyway, far too jealous of his own work to consign it to the relatively crude reproductive technologies of his day. Only one of his drawings exists in printed form — a stellated dodecahedron, drawn for his friend Luca Pacioli’s De Divina Proportione of 1509. It’s here for the viewing, alongside other contemporary attempts at geometrical drawing. Next to Leonardo, they are hardly more than doodles.

A few of Leonardo’s actual drawings — the revolving series here is drawn from the Royal Collection and the British Library — served to provoke, more than to inspire, the advances in 3D visualisation that followed. In a couple of months the aortic valve story will be pulled from the show, its place taken by astrophysicist Steven Balbus’s attempts to visualise black holes. (There’s a lot of ground to cover, and very little room, so the exhibition will be changing some elements regularly during the run.) When that happens, will Leonardo’s presence in this exhibition begin to feel gratuitous? Probably not: Leonardo is the ultimate Man Who Came to Dinner — once he’s inside your head, there’s no getting rid of him.

Thinking 3D is more than just this exhibition: the year-long project promises events, talks, conferences and workshops, not to mention satellite shows. (Under the skin: illustrating the human body, which just ended at the Royal College of Physicians in London, was one of these.) The more one learns about the project, the more it resembles Stephen Leacock’s Lord Ronald, who flung himself upon his horse and rode madly off in all directions — and the more impressive the coherence Green and Moretti have achieved here.

There are some carefully selected geegaws. A stereoscope through which one can study Arthur Thomson’s stereographic Anatomy of the Human Eye, published in 1912. The nation’s first look at Bill Gates’s Codescope, an interactive kiosk with a touch screen that lets you explore the Codex Leicester, a notebook of Leonardo’s that Gates bought in 1994. Even a shelf full of 3D-printed objects you are welcome to fondle, like Linus with his security blanket, as you wander around the exhibition. This last jape works better than you’d think: by relating vision to touch, it makes us properly aware of all the mental tricks we have to perform in order to realise 3D forms in pictures.

But books are the meat of the matter: arranged chronologically along one wall, and under glass in displays that show how the same theme has been handled at different times. Start at the clean, complex lines of the dodecahedron and pass, via architecture (the Colosseum) and astronomy (the Moon), to the fleshy ghastliness of the human eyeball.

Conveying depth by drawing makes geometry comprehensible. It also, and in particular, transforms three areas of fundamental intellectual enquiry: anatomy, architecture, and astronomy.

Today, when we think of 3D visualisation, we think first of architecture. (It’s an association forged, in large part, in the toils of countless videogames: never mind the plot, gawp at all that visionary pixelcrete!) But because architecture operates at a more-or-less human scale, it’s actually been rather slow to pick up on the power of 3D visualisation. With intuition and craft skill to draw upon, who needs axonometry? The builders of the great Mediaeval cathedrals managed quite happily without any such hifalutin drawing techniques, and it wasn’t until Auguste Choisy’s Histoire de l’architecture of 1899 that a drawing style that had already transformed carpentry, machinery, and military architecture finally found favour with architects. (Arguably, the profession has yet to come down off the high this occasioned. Witness the number of large buildings that look, for all their bulk, like scale models, their shapes making sense only from the most arbitrary angles.)

Where the scale is too small or too large for intuition and common sense to work, 3D visualisation has been most useful, and most beautiful. Andreas Vesalius’s De humani corporis fabrica librorum epitome (1543) stands here for an entire genre of “fugitive sheets” — compendiums of exquisite anatomical drawings with layered flaps, peeled back by the reader to reveal the layers of the body as one might discover them during a dissection. Because these documents were practical surgical guides, they received rough treatment, and hardly any survive. Those that do (though not the one here, thank God) are often covered with mysterious stains.

Less gruesome, but at the same time less immediately communicative, are the various attempts here to render the cosmos on paper. Robert Fludd’s black square from his Utriusque Cosmi (1617-21) depicts the void immediately prior to creation. Et sic in infinitum (“And so on to infinity”) run the words on each side of this eloquent blank.

Thinking 3D explores territories where words tangle incoherently and only pictures will suffice — then leaps giggling into a void where rational enquiry collapses and only artworks and acts of mischief like Fludd’s manage to convey anything at all. All this in a space hardly bigger than two average living rooms. It’s a show that repays — indeed, demands — patience. Put in the requisite effort, though, and you’ll find it full of wonders.

A world that has run out of normal

Reading The Uninhabitable Earth: A Story of the Future by David Wallace-Wells for the Telegraph, 16 February 2019

As global temperatures rise, and the mean sea-level with them, I have been tracing the likely flood levels of the Thames Valley, to see which of my literary rivals will disappear beneath the waves first. I live on a hill, and what I’d like to say is: you’ll be stuck with me a while longer than most. But on the day I had set aside to consume David Wallace-Wells’s terrifying account of climate change and the future of our species (there isn’t one), the water supply to my block was unaccountably cut off.

Failing to make a cup of tea reminded me, with some force, of what ought to be obvious: that my hill is a post-apocalyptic death-trap. I might escape the floods, but without clean water, food or power, I’ll be lucky to last a week.

The first half of The Uninhabitable Earth is organised in chapters that deal separately with famines, floods, fires, droughts, brackish oceans, toxic winds and war and all the other manifest effects of anthropogenic climate change (there are many more than four horsemen in this Apocalypse). At the same time, the author reveals, paragraph by paragraph, how these ever-more-frequent disasters join up in horrific cascades, all of which erode human trust to the point where civic life collapses.

The human consequences of climate disaster are going to be ugly. When a million refugees from the Syrian civil war started arriving in Europe in 2015, far-right parties entered mainstream political discourse for the first time in decades. By 2050, the United Nations predicts that Europe will host 200 million refugees. So buckle up. The disgust response with which we greet strangers on our own land is something we conscientiously suppress these days. But it’s still there: an evolved response that in less sanitary times got us through more than one plague.

That such truths go largely unspoken says something about the cognitive dissonance in which our culture is steeped. We just don’t have the mental tools to hold climate change in our heads. Amitav Ghosh made this clear enough in The Great Derangement (2016), which explains why the traditional novel is so hopeless at handling a world that has run out of normal, forgotten how to repeat itself, and will never be any sort of normal again.

Writers, seeking to capture the contemporary moment, resort to science fiction. But the secret, sick appeal of post-apocalyptic narratives, from Richard Jefferies’s After London on, is that in order to be stories at all their heroes must survive. You can only push nihilism so far. J G Ballard couldn’t escape that bind. Neither could Cormac McCarthy. Despite our most conscientious attempts at utter bloody bleakness, the human spirit persists.

Wallace-Wells admits as much. When he thinks of his own children’s future, denizens of a world plunging ever deeper into its sixth major extinction event, he admits that despair melts and his heart fills with excitement. Humans will cling to life on this ever less habitable earth for as long as they can. Quite right, too.

Wallace-Wells is deputy editor of New York magazine. In July 2017 he wrote a cover story outlining worst-case scenarios for climate change. His pessimism proved salutary: The Uninhabitable Earth has been much anticipated.

In the first half of the book the author channels former US vice-president Al Gore, delivering a blizzard of terrifying facts, and knocking socks off his predecessor’s An Inconvenient Truth (2006) not thanks to his native gifts (considerable as they are) but because the climate has deteriorated since then to the point where its declines can now be observed directly, and measured over the course of a human lifetime.

More than half the extra carbon dioxide released into the atmosphere by burning fossil fuels has been added in the past 30 years. This means that “we have done as much damage to the fate of the planet and its ability to sustain human life and civilization since Al Gore published his first book on climate than in all the centuries – all the millennia – that came before.” (4) Oceans are carrying at least 15 per cent more heat energy than they did in 2000. Between 1992 and 2015 alone, 22 per cent of the earth’s landmass was altered by humans. In Sweden, in 2018, forests in the Arctic Circle went up in flames. On and on like this. Don’t shoot the messenger, but “we have now engineered as much ruin knowingly as we ever managed in ignorance.”

The trouble is not that the future is bleak. It’s that there is no future. We’re running out of soil. In the United States, it’s eroding ten times faster than it is being replaced. In China and India, soil is disappearing thirty to forty times as fast. Wars over fresh water have already begun. The CO2 in the atmosphere has reduced the nutrient value of plants by about thirty per cent since the 1950s. Within the lifetimes of our children, the hajj will no longer be a feature of Islamic practice: the heat in Mecca will be such that walking seven times counterclockwise around the Kaaba will kill you.

This book may come to be regarded as the last truly great climate assessment ever made. (Is there even time left to pen another?) Some of the phrasing will give persnickety climate watchers conniptions. (Words like “eventually” will be a red rag for them, because they catalyse the reader’s imagination without actually meaning anything.) But the research is extensive and solid, the vision compelling and eminently defensible.

Alas, The Uninhabitable Earth is also likely to be one of the least-often finished books of the year. I’m not criticising the prose, which is always clear and engaging and often dazzling. It’s simply that the more we are bombarded with facts, the less we take in. Treating the reader like an empty bucket into which facts may be poured does not work very well, and works even less well when people are afraid of what you are telling them. “If you have made it this far, you are a brave reader,” Wallace-Wells writes on page 138. Many will give up long before then. Climate scientists have learned the hard way how difficult it is to turn fact into public engagement.

The second half of The Uninhabitable Earth asks why our being made aware of climate disaster doesn’t lead to enough reasonable action being taken against it. There’s a nuanced mathematical account to be written of how populations reach carrying capacity, run out of resources, and collapse; and an even more difficult book that will explain why we ever thought human intelligence would be powerful enough to elude this stark physical reality.

The final chapters of The Uninhabitable Earth provide neither, but neither are they narrowly partisan. Wallace-Wells mostly resists the temptation to blame the mathematical inevitability of our species’ growth and decline on human greed. The worst he finds to say about the markets and market capitalism – our usual stock villains – is not that they are evil, or psychopathic (or certainly no more evil or psychopathic than the other political experiments we’ve run in the past 150 years) but that they are not nearly as clever as we had hoped they might be. There is a twisted magnificence in the way we are exploiting, rather than adapting to, the End Times. (Whole Foods in the US, we are told, is now selling “GMO-free” fizzy water.)

The Paris accords of 2016 established keeping warming to just two degrees as a global goal. Only a few years ago we were hoping for a rise of just 1.5 degrees. What’s the difference? According to the IPCC, that half-degree concession spells death for about 150 million people. Without significantly improved pledges, however, the IPCC reckons that instituting the Paris accords overnight (and no-one has) will still see us topping 3.2 degrees of warming. At this point the Antarctic’s ice sheets will collapse, drowning Miami, Dhaka, Shanghai, Hong Kong and a hundred other cities around the world. (Not my hill, though.)

And to be clear: this isn’t what could happen. This is what is already guaranteed to happen. Greenhouse gases work on too long a timescale to avoid it. “You might hope to simply reverse climate change,” writes Wallace-Wells. “You can’t. It will outrun all of us.”

“How widespread alarm will shape our ethical impulses toward one another, and the politics that emerge from those impulses,” says Wallace-Wells, “is among the more profound questions being posed by the climate to the planet of people it envelops.”

My bet is the question will never tip into public consciousness: that, on the contrary, we’ll find ways, through tribalism, craft and mischief, to engineer what Wallace-Wells dubs “new forms of indifference”, normalising climate suffering, and exploiting novel opportunities, even as we live and more often die through times that will never be normal again.

A History of Silence reviewed: Unlocking the world of infinitely small noises

Reading Alain Corbin’s A History of Silence (Polity Press) for The Telegraph, 3 September 2018

The Orientalist painter Eugène Fromentin adored the silence of the Sahara. “Far from oppressing you,” he wrote to a friend, “it inclines you to light thoughts.” People assume that silence, being an absence of noise, is the auditory equivalent of darkness. Fromentin was having none of it: “If I may compare aural sensations to those of sight, then silence that reigns over vast spaces is more a sort of aerial transparency, which gives greater clarity to the perceptions, unlocks the world of infinitely small noises, and reveals a range of inexpressible delights.” (26-27)

Silence invites clarity. Norwegian explorer and publisher Erling Kagge seized on this nostrum for his international bestseller Silence in the Age of Noise, published in Norway in 2016, the same year Alain Corbin’s A History of Silence was published in France.

People forget this, but Kagge’s short, smart airport read was more tough-minded than the fad it fed. In fact, Kagge’s crowd-pleasing Silence and Corbin’s erudite History make surprisingly good travelling companions.

For instance: while Corbin was combing through Fromentin’s Un été dans le Sahara of 1856, Kagge was talking to his friend the artist Marina Abramovic, whose experience of desert silence was anything but positive: “Despite the fact that everything was completely quiet around her, her head was flooded with disconnected thoughts… It seemed like an empty emptiness, while the goal is to experience a full emptiness, she says.” (115)

Abramovic’s trouble, Kagge tells us, was that she couldn’t stop thinking. She wanted a moment of Fromentinesque clarity, but her past and her future proved insuperable obstacles: the moment she tore her mind away from one, she found herself ruminating over the other.

It’s a common complaint, according to Kagge: “The present hurts, and our response is to look ceaselessly for fresh purposes that draw our attention outwards, away from ourselves.” (37)

Why should this be? The answer is explored in Corbin’s book, one of those cultural histories that comes very close to being a virtually egoless compendium of quotations. Books of this sort can be a terrible mess, but Corbin’s architecture, on the contrary, is as stable as it is artfully concealed. This is a temple, not a pile.

The present, properly attended to, alone and in silence, reveals time’s awful scale. When we think about the past or the future, what we’re actually doing is telling ourselves stories. It’s in the present moment, if we dare attend to it, that we glimpse the Void.

Jules Michelet, in his book La Montagne (1872), recognised that the great forces shaping human destiny are so vast as to be silent. The process of erosion, for example, “is the more successfully accomplished in silence, to reveal, one morning, a desert of hideous nakedness, where nothing shall ever again revive.” The equivalent creative forces are hardly less awful: the “silent toil of the innumerable polyps” of a coral reef, for example, creating “the future Earth; on whose surface, perhaps, Man shall hereafter reside”.

No wonder so many of us believe in God, when sitting alone in a quiet room for ten minutes confronts us with eternity. The Spanish Basque theologian Ignatius Loyola used to spend seven solid hours a day in silent prayer and came to the only possible conclusion: silence “cancels all rational and discursive activity, thereby enabling direct perception of the divine word.”

“God bestows, God trains, God accomplishes his work, and this can only be done in the silence that is established between the Creator and the creature.” (42-43)

Obvious as this conclusion may be, though, it could still be wrong. Even the prophet Isaiah complained: “Verily thou art a God that hidest thyself”.

What if God’s not there? What if sound and fury were our only bulwark against the sucking vacuum of our own meaninglessness? It’s not only the empty vessels that make the most noise: “Men with subtle minds who are quick to see every side of a question find it hard to refrain from expressing what they think,” said Eugène Delacroix. Anyway, “how is it possible to resist giving a favourable idea of one’s mind to a man who seems surprised and pleased to hear what one is saying?”

There’s a vibrancy in tumult, a civic value in conversation, and silence is by no means always golden — a subject Corbin explores ably and at length, carrying us satisfyingly far from our 2016-vintage hygge-filled comfort zone.

Silence suggests self-control, but self-control can itself be a response to oppression. Advice to courtiers to remain silent, or at least ensure that their words are more valuable than their silence, may at worst suggest a society where there is no danger in keeping quiet, but plenty of danger in speaking.

This has certainly been the case historically among peasants, immersed as they are in a style of life that has elements of real madness about it: an undercurrent of constant hate and bitterness expressed in feuding, bullying, bickering and family quarrels, the petty mentality, the self-deprecation, the superstition, the obsessive control of daily life by a strict authoritarianism. If you don’t believe me, read Zola. His peasant silences are “first and foremost a tactic” in a milieu where plans, ambitious or tragic, “were slow to come to fruition, which meant that it was essential not to show your hand.” (93)

Corbin’s history manages to anatomise silence without fetishising it. He leaves you with mixed feelings: a neat trick to pull, in a society that’s convinced it’s drowning in noise.

It’s not true. In Paris, Corbin reminds us, forges once operated on the ground floors of buildings throughout the city. Bells of churches, convents, schools and colleges only added to the cacophony. Carriages made the level of street noise even more deafening. (61) I was reminded, reading this, of how the Victorian computer pioneer Charles Babbage died, at the age of 79, at his home in Marylebone. Never mind the urinary tract infection: his last hours were spent being driven to distraction by itinerant hurdy-gurdy players pan-handling outside his window.

If anything, contemporary society suffers from too much control of its auditory environment. Passengers travelling in trains and trams observe each other in silence — a behaviour that was once considered damn rude. Pedestrians no longer like to be greeted. And among the many silences we may welcome comes one that we surely must deplore: the silence that accompanies the diminution of our civic life.

Pushing the boundaries

Rounding up some cosmological pop-sci for New Scientist, 24 March 2018

In 1872, the physicist Ludwig Boltzmann developed a theory of gases that confirmed the second law of thermodynamics, more or less proved the existence of atoms and established the asymmetry of time. He went on to describe temperature, and how it governed chemical change. Yet in 1906, this extraordinary man killed himself.

Boltzmann is the kindly if gloomy spirit hovering over Peter Atkins’s new book, Conjuring the Universe: The origins of the laws of nature. It is a cheerful, often self-deprecating account of how most physical laws can be unpacked from virtually nothing, and how some constants (the peculiarly precise and finite speed of light, for example) are not nearly as arbitrary as they sound.

Atkins dreams of a final theory of everything to explain a more-or-less clockwork universe. But rather than wave his hands about, he prefers to clarify what can be clarified, clear his readers’ minds of any pre-existing muddles or misinterpretations, and leave them, 168 succinct pages later, with a rather charming image of him tearing his hair out over the fact that the universe did not, after all, pop out of nothing.

It is thanks to Atkins that the ideas Boltzmann pioneered, at least in essence, can be grasped by us poor schlubs. Popular science writing has always been vital to science’s development. We ignore it at our peril and we owe it to ourselves and to those chipping away at the coalface of research to hold popular accounts of their work to the highest standards.

Enter Brian Clegg. He is such a prolific writer of popular science, it is easy to forget how good he is. Icon Books is keeping him busy writing short, sweet accounts for its Hot Science series. The latest, by Clegg, is Gravitational Waves: How Einstein’s spacetime ripples reveal the secrets of the universe.

Clegg delivers an impressive double punch: he transforms a frustrating, century-long tale of disappointment into a gripping human drama, affording us a vivid glimpse into the uncanny, depersonalised and sometimes downright demoralising operations of big science. And readers still come away wishing they were physicists.

Less polished, and at times uncomfortably unctuous, Catching Stardust: Comets, asteroids and the birth of the solar system is nevertheless a promising debut from space scientist and commentator Natalie Starkey. Her description of how, from the most indirect evidence, a coherent history of our solar system was assembled, is astonishing, as are the details of the mind-bogglingly complex Rosetta mission to rendezvous with comet 67P/Churyumov-Gerasimenko – a mission in which she was directly involved.

It is possible to live one’s whole life within the realms of science and discovery. Plenty of us do. So it is always disconcerting to be reminded that longer-lasting civilisations than ours have done very well without science, or even formal logic. And who are we to say they afforded less happiness and fulfilment than our own?

Nor can we tut-tut at the way ignorant people today ride science’s coat-tails – not now antibiotics are failing and the sixth extinction is chewing its way through the food chain.

Physicists, especially, find such thinking well-nigh unbearable, and Alan Lightman speaks for them in his memoir Searching for Stars on an Island in Maine. He wants science to rule the physical realm and spirituality to rule “everything else”. Lightman is an elegant, sensitive writer, and he has written a delightful book about one man’s attempt to hold the world in his head.

But he is wrong. Human culture is so rich, diverse, engaging and significant, it is more than possible for people who don’t give a fig for science or even rational thinking to live lives that are meaningful to themselves and valuable to the rest of us.

“Consilience” was biologist E.O. Wilson’s word for the much-longed-for marriage of human enquiry. Lightman’s inadvertent achievement is to show that the task is more than just difficult, it is absurd.