Salmon or seals?

Reading Rebecca Nesbit’s Tickets for the Ark for New Scientist, 9 February 2022

Imagine: you are the last person alive. On your dying day, you cut down the last remaining oak tree, just because you can. Are you morally wrong?

Rebecca Nesbit, a science writer who trained as an ecologist, reports from fields where scientific knowledge and moral intuition trip over each other in disconcerting, sometimes headline-generating ways. Her first book, published in 2017, was Is that Fish in your Tomato?, exploring the benefits and risks of genetically modified foods.

In Tickets for the Ark, Nesbit explores the moral complexities of conservation. If push came to shove, and their extinction were imminent, would you choose to preserve bison or the Siberian larch; yellowhammers or Scottish crossbills; salmon or seals? Are native species more important than invasive species? Do animals matter for their charisma, or their edibility? Are we entitled to kill some animals to make room for others?

Working through these and other issues, Nesbit shows how complex and problematic conservation can be. In particular, she draws attention to the way we focus our efforts on the preservation of species. This, she points out, is really just a grand way of saying that we preserve what we can easily see. For the sake of preserving the planet’s biodiversity, we might as easily focus on genes, or on individual strings of DNA, or the general shape of whole ecosystems.

Tickets for the Ark could be read as a catalogue of understandable blunders. We have attempted to limit the spread of invasive species, only to discover that many indigenous species are long-established immigrants. We have attempted to reverse human interference in nature, only to find that life has been shaping the Earth’s geology for about 2.5 billion years.

Far from being a counsel of despair, though, Tickets for the Ark reveals the intellectual vistas those blunders have opened up.

Even supposing it ever existed, we know now that we cannot return to some prelapsarian Eden. All we can do is learn how natural systems change (sometimes under human influence, sometimes not) and use this information to shape our present world according to our values.

In a sense, of course, we have always been doing this. What is agriculture, if not a way of shaping the land to our demands? At least now, having learned to feed ourselves, we might move on to realise some higher ideals.

Once we accept that “nature” is a human and social idea, and that conservation is about the future, not the past, then most of conservation’s most troubling conundrums and contradictions fall away. The death of the last oak, at the hands of the last human, becomes merely the loss of a category (oak tree) that was defined and valued by humans; a loss that was at some point inevitable anyway. And though this conclusion is counterintuitive and uncomfortable, Nesbit argues that it should be liberating because it leaves us “free to discuss logically what we should save and why, and not just fight an anti-extinction battle that is doomed to failure.”

Above all, we can now consider what conservation efforts will achieve for whole ecosystems, and for biodiversity as a whole, without wasting our time agonising over whether, say, British white-clawed crayfish are natives, or dingoes are a separate species, or whether we are morally entitled to introduce bison to clear the steppe of Siberian larch (a native species, but responsible for covering, and warming, ancient carbon-sequestering permafrost).

Nesbit’s ambitious and entertaining account foresees a dynamic and creative role for conservation, especially in an era of potentially catastrophic climate change. Having freed ourselves of the idea that species belong only in their past ranges, and armed with better information about how ecosystems actually work, it may be time for us to govern the spread of bison and countless other species into new ranges. A brave proposal; but as Nesbit points out, translocation may be the only option for some species.

How wheat is grown, how steel is made

Reading Vaclav Smil’s How the World Really Works for New Scientist, 2 February 2022

By the late Renaissance, Europe’s knowledge of the world had grown beyond the compass of any single intellect. In 1772, more or less the whole of human knowledge could be encompassed within a set of handsome encyclopaedias. A century later, even the grandest encyclopaedias, to fulfil their reality-wrapping remit, had to resort to brief sketches and cursory citations. Today the global infosphere has expanded to the point where misinformation and disinformation hide in plain sight.

No one expects everyone to understand everything. But there are limits. Energy expert Vaclav Smil finds it downright inexcusable that most people misunderstand the fundamental workings of the modern world. “After all,” he says, “appreciating how wheat is grown or steel is made… are not the same as asking that somebody comprehend femtochemistry.”

Smil believes that public discourse has begun to lose its hold on reality entirely, and he deplores a culture that disproportionately rewards work removed from the material realities of life on earth.

This book is Smil’s effort to rebalance public discourse, reminding readers how food is grown and how the built environment is made and maintained — truths that should be obvious, but which are all too easily forgotten in our current, apocalypse- and utopia-minded times.

The fundamentals of our lives will not change drastically in the coming 20–30 years. Most of our electricity is generated by steam turbines, invented by Charles Parsons in 1884, or by gas turbines, first commercially deployed in 1938. So never mind AI, electric cars, the internet, 5G, or space entrepreneurism (all of which depend for their energy on those antediluvian turbines). The health or otherwise of modern civilisation rests, as it has rested for decades, on the continued production of ammonia, steel, concrete, and plastics.

All these currently require fossil fuels for their production, and alternative production methods, where they are available, will take many decades to establish. (It was much easier to displace wood by coal than it is now to displace fossil fuels with renewables, because global energy demand was an order of magnitude lower in 1920 than it was in 2020.)

Given the ungainsayable evidence of climate change, does this mean that our civilisation, so hopelessly dependent on fossil fuels, is doomed?

The simple answer is that we don’t know, and Smil would rather we take our present environmental and economic challenges seriously than fritter away our energy and anxiety on complex socio-economic forecasts that are not worth the femtobytes used to calculate them. After all, such forecasts are likely to be getting worse, not better, because, as Smil says, “more complex models combining the interactions of economic, social, technical, and environmental factors require more assumptions and open the way for greater errors.”

How the World Really Works neither laments the imminent end of the world, nor bloviates enthusiastically over the astonishingly transformative powers of the AI Singularity. Indeed, it gives no quarter to such thinking, be it apocalyptic or techno-utopian.

Smil would rather explain the workings of the actual world. He writes about energy, food, materials, the biosphere, about the perception of risk, and about globalisation. He writes about those sizeable parts of ground reality that the doomsayers and boosterists ignore. It’s a grumpy, pugnacious account and, I would argue, intellectually indispensable, as we rattle our way towards this year’s COP conference in Egypt.

In an era of runaway specialization, Smil is an exemplary generalist. “Drilling the deepest possible hole and being an unsurpassed master of a tiny sliver of the sky visible from its bottom has never appealed to me,” he writes. “I have always preferred to scan as far and as wide as my limited capabilities have allowed me to do.”

How the World Really Works delivers fully on the promise of its title. It is hard to formulate any higher praise.

“The working conditions one has to put up with here!”

Watching Peter Fleischmann’s Hard to be a God (1989) for New Scientist, 19 January 2022

The scrabble for dominance in streaming video continues to heat up. As I write this, Paramount has decided to pull season 4 of Star Trek Discovery from Netflix and screen it instead on its own platform, forcing die-hard fans to shell out for yet another streaming subscription. Amazon has cancelled one Game of Thrones spin-off to concentrate on another, House of the Dragon, writing off $30,000,000 in the process. And Amazon’s Lord of the Rings prequel, set millennia before the events of The Hobbit, is reputed to be costing $450,000,000 per series — that’s five times as much to produce as Game of Thrones.

All this febrile activity has one unsung benefit for viewers: while the wheels of production slowly turn, channel programmers are being tasked with finding historical material to feed our insatiable demand for epic sci-fi and fantasy. David Lynch’s curious and compelling 1984-vintage Dune is streaming on every major service. And on Amazon Prime, you can (and absolutely should) find Peter Fleischmann’s 1989 Hard to be a God, a West German-Soviet-French-Swiss co-production based on the best known of the “Noon Universe” novels by Soviet sf writers Arkady and Boris Strugatsky.

In the Noon future (named after the brothers’ sci-fi novel Noon: 22nd Century), humankind has evolved beyond money, crime and warfare to achieve an anarchist techno-utopia. Self-appointed “Progressors” cross interstellar space with ease to guide the fate of other, less sophisticated humanoid civilisations.

It’s when earnest, dashing progressors land on these benighted and backward worlds that their troubles begin and the ethical dilemmas start to pile up.

In Hard to be a God, Anton, an agent of Earth’s Institute of Experimental History, is sent to spy on the city of Arkanar, which is even now falling under the sway of Reba, the kingdom’s reactionary first minister. Palace coups, mass executions and a peasant war drive Anton from his position of professional indifference, first to depression, drunkenness and despair, and finally to a fiery and controversial commitment to Arkanar’s weakling revolution.

So far, so schematic: but the novel has a fairly sizeable sting in its tale, and this is admirably brought to the fore in Fleischmann’s screenplay (co-written with Jean-Claude Carrière, best known for his work with Luis Buñuel).

Yes, progressors like Anton have evolved past their propensity for violence; but in consequence, they have lost the knack of ordinary human sympathy. “The working conditions one has to put up with here!” complains Anton’s handler, fighting with a collapsible chair while, on the surveillance screen behind him, Reba’s inquisition beats a street entertainer nearly to death.

Anton — in an appalled and impassioned performance by the dashing Polish actor Edward Zentara — comes at last to understand his advanced civilisation’s dilemma: “We were able to see everything that was happening in the world,” he tells a companion from Arkanar, breaking his own cover as he does so. “We saw all the misery, but couldn’t feel sympathy any more. We had our meals while seeing pictures of starving people in front of us.”

Anton’s terrible experiences in strife-torn Arkanar (where every street boasts a dangling corpse) do not go unremarked. Earth’s other progressors, watching Anton from orbit, do their best to overcome their limitations. But the message here is a serious one: virtue is something we have to strive for in our lives, not a state we can attain as some sort of birthright.

Comparable to Lynch’s Dune in its ambition, and far more articulate than that cruelly cut-about effort, Fleischmann’s upbeat but moving Hard to be a God reminds us that cinema in the 1980s set later sci-fi movie makers a very high bar indeed. We can only hope that this year’s TV epics and cinema sequels take as much serious effort over their stories as they are taking over their production design.

Citizen of nowhere

Watching Son of Monarchs for New Scientist, 3 November 2021

“This is you!” says Bob, Mendel’s boss at a genetics laboratory in New York City. He holds the journal out for his young colleague to see: on its cover there’s a close-up of the wing of a monarch butterfly. The cover-line announces the lab’s achievement: they have shown how the evolution and development of butterfly colour and iridescence are controlled by a single master regulatory gene.

Bob (William Mapother) sees something is wrong. Softer now: “This is you. Own it.”

But Mendel, Bob’s talented Mexican post-doc (played by Tenoch Huerta, familiar from the Netflix series Narcos: Mexico), is near to tears.

Something has gone badly wrong in Mendel’s life. And he’s no more comfortable back home, in the butterfly forests of Michoacán, than he was in Manhattan. In some ways things are worse. Even at their grandmother’s funeral, his brother Simon (Noé Hernández) won’t give him an inch. At least the lab was friendly.

Bit by bit, through touching flashbacks, some disposable dream sequences and one rather overwrought row, we learn the story: how, when Mendel and Simon were children, a mining accident drowned their parents; how their grandmother took them in, but things were never the same; how Simon went to work for the predatory company responsible for the accident, and has ever since felt judged by his high-flying, science-whizz, citizen-of-nowhere brother.

When Son of Monarchs premiered at this year’s Sundance Film Festival, critics picked up on its themes of borders and belonging, the harm walls do and all the ways nature undermines them. Mendel grew up in a forest alive with clouds of monarch butterflies. (In the film the area, a national reserve, is threatened by mining; these days, tourism is arguably the bigger threat.) Sarah, Mendel’s New York girlfriend (Alexia Rasmussen; note-perfect but somewhat under-used) is an amateur trapeze artist. The point — that airborne creatures know no frontiers — is clear enough; just in case you missed it, a flashback shows young Mendel and young Simon in happier days, discussing humanity’s airborne future.

In a strongly scripted film, such gestures would have been painfully heavy-handed. Here, though, they’re pretty much all the viewer has to go on in this sometimes painfully indirect film.

The plot does come together, though, through the character of Mendel’s old friend Vicente (a stand-out performance by the relative unknown Gabino Rodríguez). While muddling along like everyone else in the village of Angangueo (the real-life site, in 2010, of some horrific mine-related mudslides), Vicente has been developing peculiar animistic rituals. His unique brand of masked howling seems jolly silly at first glance — just a backwoodsman’s high spirits — but as the film advances, we realise that these rituals are just what Mendel needs.

For a man trapped between worlds, Vicente’s rituals offer a genuine way out: a way to re-engage imaginatively with the living world.

So, yes, Son of Monarchs is, on one level, about identity, about how a cosmopolitan high-flier learns to be a good son of Angangueo. But more than that, it’s about personality: about how Mendel learns to live both as a scientist, and as a man lost among butterflies.

French-Venezuelan filmmaker Alexis Gambis is himself a biologist and founded the Imagine Science Film Festival. While Son of Monarchs is steeped in colour, and full of cinematographer Alejandro Mejía’s mouth-watering (occasionally stomach-churning) macro-photography of butterflies and their pupae, ultimately this is a film, not about the findings of science, but about science as a vocation.

Gambis’s previous feature, The Fly Room (2014) was about the inspiration a 10-year-old girl draws from visits to T H Morgan’s famous (and famously cramped) “Fly Room” drosophila laboratory. Son of Monarchs asks what can be done if inspiration dries up. It is a hopeful film and, on more than the visual level, a beautiful one.

Chemistry off the leash

Reading Sharon Ruston’s The Science of Life and Death in Frankenstein for New Scientist, 27 October 2021

In 1817, in a book entitled Experiments on Life and its Basic Forces, the German natural philosopher Carl August Weinhold explained how he had removed the brain from a living kitten, and then inserted a mixture of zinc and silver into the empty skull. The animal “raised its head, opened its eyes, looked straight ahead with a glazed expression, tried to creep, collapsed several times, got up again, with obvious effort, hobbled about, and then fell down exhausted.”

The following year, Mary Shelley’s Frankenstein captivated a public not at all startled by its themes, but hungry for horripilating thrills and avid for the author’s take on arguably the most pressing scientific issue of the day. What was the nature of this strange zone that had opened up between the worlds of the living and the dead?

Three developments had muddied this once obvious and clear divide: in revolutionary France, the flickers of life exhibited by freshly guillotined heads; in Edinburgh, the black market in fresh (and therefore dissectable) corpses; and on the banks of busy British rivers, attempts (encouraged by the Royal Humane Society) to breathe life into the recently drowned.

Ruston covers this familiar territory well, then goes much further, revealing Mary Shelley’s iron grip on the scientific issues of her day. Frankenstein was written just as the material basis of life was coming into view. Properties once considered unique to living things were turning out to be common to all matter, both living and unliving. Ideas about electricity offer a startling example.

For more than a decade, from 1780 to the early 1790s, it had seemed to researchers that animal life was driven by a newly discovered life source, dubbed ‘animal electricity’. This was a notion cooked up by the Bologna-born physician Luigi Galvani to explain a discovery he had made in 1780 with his wife Lucia. They had found that the muscles of dead frogs’ legs twitch when struck by an electrical spark. Galvani concluded that living animals possessed their own kind of electricity. The distinction between ‘animal electricity’ and metallic electricity didn’t hold for long, however. By placing discs of different metals on his tongue, and feeling the jolt, Alessandro Volta showed that electricity flows between two metals through biological tissue.

Galvani’s nephew, Giovanni Aldini, took these experiments further in spectacular, theatrical events in which corpses of hanged murderers attempted to stand or sit up, opened their eyes, clenched their fists, raised their arms and beat their hands violently against the table.

As Ruston points out, Frankenstein’s anguished description of the moment his Creature awakes “sounds very like the description of Aldini’s attempts to resuscitate 26-year-old George Forster”, hanged for the murder of his wife and child in January 1803.

Frankenstein cleverly clouds the issue of exactly what form of electricity animates the creature’s corpse. Indeed, the book (unlike the films) is much more interested in the Creature’s chemical composition than in its animation by a spark.

There are, Ruston shows, many echoes of Humphry Davy’s 1802 Course of Chemistry in Frankenstein. It’s not for nothing that Frankenstein’s tutor Professor Waldman tells him that chemists “have acquired new and almost unlimited powers”.

An even more intriguing contemporary development was the ongoing debate between the surgeon John Abernethy and his student William Lawrence in the Royal College of Surgeons. Abernethy claimed that electricity was the “vital principle” underpinning the behaviour of organic matter. Nonsense, said Lawrence, who saw in living things a principle of organisation. Lawrence was an early materialist, and his patent atheism horrified many. The Shelleys were friendly with Lawrence, and helped him weather the scandal engulfing him.

The Science of Life and Death is both an excellent introduction and a serious contribution to understanding Frankenstein. Through Ruston’s eyes, we see how the first science fiction novel captured the imagination of its public.

Dogs (a love story)

Reading Pat Shipman’s Our Oldest Companions: The story of the first dogs for New Scientist, 13 October 2021

Sometimes, when Spanish and other European forces entered lands occupied by the indigenous peoples of South America, they used dogs to massacre the indigenous human population. Occasionally their mastiffs, trained to chase and kill, actually fed upon the bodies of their victims.

The locals’ response was, to say the least, surprising: they fell in love. These beasts were marvellous novelties, loyal and intelligent, and a trade in domesticated dogs spread across a harrowed continent.

What is it about the dog, that makes it so irresistible?

Anthropologist Pat Shipman is out to describe the earliest chapters in our species’ relationship with dogs. From a welter of archaeological and paleo-genetic detail, Shipman has fashioned an unnerving little epic of love and loyalty, hunting and killing.

There was, in Shipman’s account, nothing inevitable, nothing destined, about the relationship that turned the grey wolf into a dog. Yes, early Homo sapiens hunted with early “wolf-dogs” in a symbiotic relationship that let humans borrow a wolf’s superior speed and senses, while allowing wolves to share in a human’s magical ability to kill prey at a distance with spears or arrows. But why, in the pursuit of more food, would humans take in, feed, nurture, and breed another meat-eating animal? Shipman puts it more forcefully: “Who would select such a ferocious and formidable predator as a wolf for an ally and companion?”

To find the answer, says Shipman, forget intentionality. Forget the old tale in which someone captures a baby animal, tames it, raises it, selects a mate for it, and brings up the friendliest babies.

Instead, it was the particular ecology of Europe about 50,000 years ago that drove grey wolves and human interlopers from Mesopotamia into close cooperation, thereby seeing off most large game and Europe’s own indigenous Neanderthals.

This story was explored in Shipman’s 2015 The Invaders. Our Oldest Companions develops that argument to explain why dogs and humans did not co-evolve similar behaviours elsewhere.

Australia provides Shipman with her most striking example. Homo sapiens first arrived in Australia without dogs, because back then (around 35,000 years ago, possibly earlier) there were no such things. (The first undisputed dog remains belong to the Bonn-Oberkassel dog, buried beside humans 14,200 years ago.)

The ancestors of today’s dingoes were brought to Australia by ship only around 3000 years ago, where their behaviour and charisma immediately earned them a central place in Native Australian folklore.

Yet, in a land very different to Europe, less densely populated by large animals and boasting only two large-sized mammalian predators, the Tasmanian tiger and the marsupial lion (both now extinct), there was never any mutual advantage to be had in dingoes and humans working, hunting, feasting and camping together. Consequently dingoes, though they’re eminently tameable, remain undomesticated.

The story of humans and dogs in Asia remains somewhat unclear, and some researchers still argue that the bond between wolf and man was first established here. Shipman, who’s having none of it, points to a crucial piece of non-evidence: if dogs first arose in Asia, then where are the ancient dog burials?

“Deliberate burial,” Shipman writes, “is just about the gold standard in terms of evidence that an animal was domesticated.” There are no such ancient graves in Asia. It’s near Bonn, on the right bank of the Rhine, that the earliest remains of a clearly domesticated dog were discovered in 1914, tucked between two human skeletons, their grave decorated with works of art made of bones and antlers.

Domesticated dogs now comprise more than 300 breeds, though overbreeding has left hardly any that are capable of carrying out their intended functions of hunting, guarding, or herding.

Shipman passes no comment, but I can’t help but think this a sad and trivial end to a story that began so heroically, among mammoth and tiger and lion.

Who’s left in the glen?

Watching Emily Munro’s Living Proof: A climate story for New Scientist, 6 October 2021

Most environmental documentaries concentrate on the environment. Most films about the climate crisis focus on the people who are addressing the crisis.

Assembled and edited by Emily Munro, a curator of the moving image at the National Library of Scotland, Living Proof is different. It’s a film about demobbed soldiers and gamekeepers, architects and miners and American ex-pats. It’s about working people and their employers, about people whose day-to-day actions have contributed to the industrialisation of Scotland, its export of materials and methods (particularly in the field of off-shore oil and gas), and its not coincidental environmental footprint.

Only towards the end of Munro’s film do we meet protestors of any sort. They’re deploring the construction of a nuclear power plant at Torness, 33 miles east of Edinburgh. Even here, Munro is less interested in the protest itself than in one impassioned, closely argued speech which, in the context of the film, completes an argument begun in Munro’s first reel (via a public information film from the mid-1940s) about the country’s political economy.

Assembled from propaganda and public information films, promotional videos and industrial reports, Living Proof is an archival history of what Scotland has told itself about itself, and how those stories, ambitions and visions have shaped the landscape, and affected the global environment.

Munro is in thrall to the changing Scottish industrial landscape, from its herring fisheries to its dams, from its slums and derelict mine-heads to the high modernism of its motorways and strip mills. Her vision is compelling and seductive. Living Proof is also — and this is more important — a film which respects its subjects’ changing aspirations. It tells the story of a poor, relatively undeveloped nation waking up to itself and trying to do right by its people.

It will come as no surprise, as Glasgow prepares to host the COP26 global climate conference, to hear that the consequences of those efforts have been anything but an unalloyed good. Powered by offshore oil and gas, home to Polaris nuclear missiles, and a redundancy-haunted grave for a dozen heavy industries (from coal-mining to ship-building to steel manufacture), Scotland is no-one’s idea of a green nation.

As Munro’s film shows, however, the environment was always a central plank of whatever argument campaigners, governments and developers made at the time. The idea that the Scots (and the rest of us) have only now “woken up to the environment” is a pernicious nonsense.

It’s simply that our idea of the environment has evolved.

In the 1940s, the spread of bog water, as the Highlands depopulated, was considered a looming environmental disaster, taking good land out of use. In the 1950s automation promised to pull working people out of poverty, disease and pollution. In the 1960s rapid communications were to serve an industrial culture that would tread ever more lightly over the mine-ravaged earth.

It’s with the advent of nuclear power, and that powerful speech on the beach at Torness, that the chickens come home to roost. That new nuclear plant is only going to employ around 500 people! What will happen to the region then?

This, of course, is where we came in: to a vision of a nation that, if it cannot afford its own people, will go to rack and ruin, with (to quote that 1943 information film) “only the old people and a few children left in the glen”.

Living Proof critiques an economic system that, whatever its promises, cannot help but denude the earth of its resources and pauperise its people. It’s all the more powerful for being articulated through real things: schools and roads and pharmaceuticals, earth movers and oil rigs, washing machines and gas boilers.

Reasonable aspirations have done unreasonable harm to the planet. That’s the real crisis elucidated by Living Proof. It’s a point too easily lost in all the shouting. And it’s rarely been made so well.

The tools at our disposal

Reading Index, A History of the, by Dennis Duncan, for New Scientist, 15 September 2021

Every once in a while a book comes along to remind us that the internet isn’t new. Authors like Siegfried Zielinski and Jussi Parikka write handsomely about their adventures in “media archaeology”, revealing all kinds of arcane delights: the eighteenth-century electrical tele-writing machine of Joseph Mazzolari; Melvil Dewey’s Decimal System of book classification of 1873.

It’s a charming business, to discover the past in this way, but it does have its risks. It’s all too easy to fall into complacency, congratulating the thinkers of past ages for having caught a whiff, a trace, a spark, of our oh-so-shiny present perfection. Paul Otlet builds a media-agnostic City of Knowledge in Brussels in 1919? Lewis Fry Richardson conceives a mathematical Weather Forecasting Factory in 1922? Well, I never!

So it’s always welcome when an academic writer — in this case London-based English lecturer Dennis Duncan — takes the time and trouble to tell this story straight, beginning at the beginning, ending at the end. Index, A History of the is his story of textual search, told through charming portrayals of some of the most sophisticated minds of their era, from monks and scholars shivering among the cloisters of 13th-century Europe to server-farm administrators sweltering behind the glass walls of Silicon Valley.

It’s about the unspoken and always collegiate rivalry between two kinds of search: the subject index (a humanistic exercise, largely un-automatable, requiring close reading, independent knowledge, imagination, and even wit) and the concordance (an eminently automatable listing of words in a text and their locations).

Hugh of St Cher is the father of the concordance: his list of every word in the bible and its location, begun in 1230, was a miracle of miniaturisation, smaller than a modern paperback. It and its successors were useful, too, for clerics who knew their bibles almost by heart.

But the subject index is a superior guide when the content is unfamiliar, and it’s Robert Grosseteste (born in Suffolk around 1175) who we should thank for turning the medieval distinctio (an associative list of concepts, handy for sermon-builders), into something like a modern back-of-book index.

Reaching the present day, we find that with the arrival of digital search, the concordance is once again ascendant (the search function, Ctrl-F, whatever you want to call it, is an automated concordance), while the subject index, and its poorly recompensed makers, are struggling to keep up in an age of reflowable screen text. (Sewing embedded active indexes through a digital text is an excellent idea which, exasperatingly, has yet to catch on.)

Running under this story is a deeper debate, between people who want to access their information quickly, and people (especially authors) who want people to read books from beginning to end.

This argument about how to read has been raging literally for millennia, and with good reason. There is clear sense in Socrates’ argument against reading itself, as recorded in Plato’s Phaedrus (370 BCE): “You have invented an elixir not of memory, but of reminding,” his mythical King Thamus complains. Plato knew a thing or two about the psychology of reading, too: people who just look up what they need “are for the most part ignorant,” says Thamus, “and hard to get along with, since they are not wise, but only appear wise.”

Anyone who spends too many hours a day on social media will recognise that portrait — if they have not already come to resemble it.

Duncan’s arbitration of this argument is a wry one. Scholarship, rather than being timeless and immutable, “is shifting and contingent,” he says, and the questions we ask of our texts “have a lot to do with the tools at our disposal.”

One courageous act

Watching A New World Order for New Scientist, 8 September 2021

“For to him that is joined to all the living there is hope,” runs the verse from Ecclesiastes, “for a living dog is better than a dead lion.”

Stefan Ebel plays Thomasz, the film’s “living dog”, a deserter who, more frightened than callous, has learned to look out solely for himself.

In the near future, military robots have turned against their makers. The war seems almost over. Perhaps Thomasz has wriggled and dodged his way to the least settled part of the planet (Daniel Raboldt’s debut feature is handsomely shot in Arctic Finland by co-writer Thorsten Franzen). Equally likely, this is what the whole planet looks like now: trees sweeping in to fill the spaces left by an exterminated humanity.

You might expect the script to make this point clear, but there is no script; or rather, there is no dialogue. The machines (wasp-like drones, elephantine tripods, and one magnificent airborne battleship that would not look out of place in a Marvel movie) target people by listening out for their voices; consequently, not a word can be exchanged between Thomasz and his captor Lilja, played by Siri Nase.

Lilja takes Thomasz prisoner because she needs his brute strength. A day’s walk away from the questionable safety of her log cabin home, there is a burned-out military convoy. Amidst the wreckage and bodies, there is a heavy case — and in the case, there is a tactical nuke. Lilja needs Thomasz’s help in dragging it to where she can detonate it, perhaps bringing down the machines. While Thomasz acts out of fear, Lilja is acting out of despair. She has nothing more to live for. While Thomasz wants to live at any cost, Lilja just wants to die. Both are reduced to using each other. Both will have to learn to trust again.

In 2018, John Krasinski’s A Quiet Place arrived in cinemas — a film in which aliens chase down every sound and slaughter its maker. This cannot have been a happy day for the devoted and mostly unpaid German enthusiasts working on A New World Order. But silent movies are no novelty, and theirs has clearly ploughed its own furrow. The film’s sound design, by Sebastian Tarcan, is especially striking, balancing levels so that even a car’s gear change comes across as an imminent alien threat. (Wonderfully, there’s an acknowledging nod to the BBC’s Tripods series buried in the war machines’ emergency signal.)

Writing a good silent film is something of a lost art. It's much easier for writers to explain their story through dialogue than to propel it through action. Maybe this is why silent film, done well, is such a powerful experience. There is a scene in this movie where Thomasz realises, not only that he has to do the courageous thing, but that he is at last capable of doing it. Ebel, on his own on a scree-strewn Finnish hillside, plays the moment to perfection.

Somewhere on this independent film’s long and interrupted road to distribution (it began life on Kickstarter in 2016) someone decided “A Living Dog” was too obscure a film title for these godless times — a pity, I think, and not just because “A New World Order”, the title picked for UK distribution, manages to be at once pompous and meaningless.

Ebel’s pitch-perfect performance drips guilt and bad conscience. In order to stay alive, he has learned to crawl about the earth. But Lilja’s example, and his own conscience, will turn dog to lion at last, and in a genre that never tires of presenting us with hyper-capable heroes, it’s refreshing, on this occasion, to follow the forging of one courageous act.

“This stretch-induced feeling of awe activates our brain’s spiritual zones”

Reading Angus Fletcher’s Wonderworks: Literary invention and the science of stories for New Scientist, 1 September 2021

Can science explain art?

Certainly: in 1999 the British neurobiologist Semir Zeki published Inner Vision, an illuminating account of how, through trial and error and intuition, different schools of art have succeeded in mapping the neurological architectures of human vision. (Put crudely, Rembrandt tickles one corner of the brain, Piet Mondrian another.)

Twelve years later, Oliver Sacks contributed to an already crowded music psychology shelf with Musicophilia, a collection of true tales in which neurological injuries and diseases are successfully treated with music.

Angus Fletcher believes the time has come for drama, fiction and literature generally to succumb to neurological explanation. Over the past decade, neuroscientists have been using pulse monitors, eye-trackers, brain scanners “and other gadgets” to look inside our heads as we consume novels, poems, films, and comic books. They must have come up with some insights by now.

Fletcher’s hypothesis is that story is a technology, which he defines as “any human-made thing that helps to solve a problem”.

This technology has evolved, over at least the last 4000 years, to help us negotiate the human condition, by which Fletcher means our awareness of our own mortality, and the creeping sense of futility it engenders. Story is “an invention for overcoming the doubt and the pain of just being us”.

Wonderworks is a scientific history of literature; each of its 25 chapters identifies a narrative “tool” which triggers a different, traceable, evidenced neurological outcome. Each tool comes with a goofy label: here you will encounter Butterfly Immersers and Stress Transformers, Humanity Connectors and Gratitude Multipliers.

Don’t sneer: these tools have been proven “to alleviate depression, reduce anxiety, sharpen intelligence, increase mental energy, kindle creativity, inspire confidence, and enrich our days with myriad other psychological benefits.”

Now, you may well object that, just as area V1 of the visual cortex did not evolve so we could appreciate the paintings of Piet Mondrian, so our capacity for horror and pity didn’t arise just so we could appreciate Shakespeare. So if story is merely “holding a mirror up to nature”, then Fletcher’s long, engrossing book wouldn’t really be saying anything.

As any writer will tell you, of course, a story isn’t merely a mirror. The problem comes when you try and make this perfectly legitimate point using neuroscience.

Too often for comfort, and as the demands of concision exceed all human bounds, the reader will encounter passages like: “This stretch-induced feeling of awe activates our brain’s spiritual zones, enriching our consciousness with the sensation of meanings beyond.”

Hitting sentences like this, I normally shut the book, with some force. I stayed my hand on this occasion because, by the time this horror came to light, two things were apparent. First, Fletcher — a neuroscientist turned story analyst — actually does know his neurobiology. Second, he really does know his literature, making Wonderworks a profound and useful guide to reading for pleasure.

Wonderworks fails as popular science because of the extreme parsimony of Fletcher’s explanations; fixing this problem would, however, have involved composing a multi-part work, and lost him his general audience.

The first person through the door is the one who invariably gets shot. Wonderworks is in many respects a pug-ugly book. But it's also the first of its kind: an intelligent, engaged, erudite attempt to tackle, neurologically, not just some abstract and simplified "story", but some of the world's greatest literature, from the Iliad to The Dream of the Red Chamber, from Disney's Up to the novels of Elena Ferrante.

It is easy to get annoyed with this book. But those who stay calm will reap a rich harvest.