“The working conditions one has to put up with here!”

Watching Peter Fleischmann’s Hard to be a God (1989) for New Scientist, 19 January 2022

The scramble for dominance in streaming video continues to heat up. As I write this, Paramount has decided to pull season 4 of Star Trek: Discovery from Netflix and screen it instead on its own platform, forcing die-hard fans to shell out for yet another streaming subscription. HBO has cancelled one Game of Thrones spin-off to concentrate on another, House of the Dragon, writing off $30 million in the process. And Amazon’s Lord of the Rings prequel, set millennia before the events of The Hobbit, is reputed to be costing $450 million per series — that’s five times as much to produce as Game of Thrones.

All this febrile activity has one unsung benefit for viewers: while the wheels of production slowly turn, channel programmers are being tasked with finding historical material to feed our insatiable demand for epic sci-fi and fantasy. David Lynch’s curious and compelling 1984-vintage Dune is streaming on every major service. And on Amazon Prime, you can (and absolutely should) find Peter Fleischmann’s 1989 Hard to be a God, a West German-Soviet-French-Swiss co-production based on the best known of the “Noon Universe” novels by Soviet sf writers Arkady and Boris Strugatsky.

In the Noon future (named after the brothers’ sci-fi novel Noon: 22nd Century), humankind has evolved beyond money, crime and warfare to achieve an anarchist techno-utopia. Self-appointed “Progressors” cross interstellar space with ease to guide the fate of other, less sophisticated humanoid civilisations.

It’s when earnest, dashing progressors land on these benighted and backward worlds that their troubles begin and the ethical dilemmas start to pile up.

In Hard to be a God, Anton, an agent of Earth’s Institute of Experimental History, is sent to spy on the city of Arkanar, which is even now falling under the sway of Reba, the kingdom’s reactionary first minister. Palace coups, mass executions and a peasant war drive Anton from his position of professional indifference, first to depression, drunkenness and despair, and finally to a fiery and controversial commitment to Arkanar’s weakling revolution.

So far, so schematic: but the novel has a fairly sizeable sting in its tale, and this is admirably brought to the fore in Fleischmann’s screenplay (co-written with Jean-Claude Carrière, best known for his work with Luis Buñuel).

Yes, progressors like Anton have evolved past their propensity for violence; but in consequence, they have lost the knack of ordinary human sympathy. “The working conditions one has to put up with here!” complains Anton’s handler, fighting with a collapsible chair while, on the surveillance screen behind him, Reba’s inquisition beats a street entertainer nearly to death.

Anton — in an appalled and impassioned performance by the dashing Polish actor Edward Zentara — comes at last to understand his advanced civilisation’s dilemma: “We were able to see everything that was happening in the world,” he tells an Arkanarian companion, breaking his own cover as he does so. “We saw all the misery, but couldn’t feel sympathy any more. We had our meals while seeing pictures of starving people in front of us.”

Anton’s terrible experiences in strife-torn Arkanar (where every street boasts a dangling corpse) do not go unremarked. Earth’s other progressors, watching Anton from orbit, do their best to overcome their limitations. But the message here is a serious one: virtue is something we have to strive for in our lives, not a state we can attain as some sort of birthright.

Comparable to Lynch’s Dune in its ambition, and far more articulate than that cruelly cut-about effort, Fleischmann’s upbeat but moving Hard to be a God reminds us that cinema in the 1980s set later sci-fi movie makers a very high bar indeed. We can only hope that this year’s TV epics and cinema sequels take as much serious effort over their stories as they are taking over their production design.

Citizen of nowhere

Watching Son of Monarchs for New Scientist, 3 November 2021

“This is you!” says Bob, Mendel’s boss at a genetics laboratory in New York City. He holds the journal out for his young colleague to see: on its cover there’s a close-up of the wing of a monarch butterfly. The cover-line announces the lab’s achievement: they have shown how the evolution and development of butterfly colour and iridescence are controlled by a single master regulatory gene.

Bob (William Mapother) sees something is wrong. Softer now: “This is you. Own it.”
But Mendel, Bob’s talented Mexican post-doc (played by Tenoch Huerta, familiar from the Netflix series Narcos: Mexico), is near to tears.

Something has gone badly wrong in Mendel’s life. And he’s no more comfortable back home, in the butterfly forests of Michoacán, than he was in Manhattan. In some ways things are worse. Even at their grandmother’s funeral, his brother Simon (Noé Hernández) won’t give him an inch. At least the lab was friendly.

Bit by bit, through touching flashbacks, some disposable dream sequences and one rather overwrought row, we learn the story: how, when Mendel and Simon were children, a mining accident drowned their parents; how their grandmother took them in, but things were never the same; how Simon went to work for the predatory company responsible for the accident, and has ever since felt judged by his high-flying, science-whizz, citizen-of-nowhere brother.

When Son of Monarchs premiered at this year’s Sundance Film Festival, critics picked up on its themes of borders and belonging, the harm walls do and all the ways nature undermines them. Mendel grew up in a forest alive with clouds of monarch butterflies. (In the film the area, a national reserve, is threatened by mining; these days, tourism is arguably the bigger threat.) Sarah, Mendel’s New York girlfriend (Alexia Rasmussen; note-perfect but somewhat under-used) is an amateur trapeze artist. The point — that airborne creatures know no frontiers — is clear enough; just in case you missed it, a flashback shows young Mendel and young Simon in happier days, discussing humanity’s airborne future.

In a strongly scripted film, such gestures would have been painfully heavy-handed. Here, though, they’re pretty much all the viewer has to go on in this sometimes painfully indirect film.
The plot does come together, though, through the character of Mendel’s old friend Vicente (a stand-out performance by the relative unknown Gabino Rodríguez). While muddling along like everyone else in the village of Angangueo (the real-life site, in 2010, of some horrific mine-related mudslides), Vicente has been developing peculiar animistic rituals. His unique brand of masked howling seems jolly silly at first glance — just a backwoodsman’s high spirits — but as the film advances, we realise that these rituals are just what Mendel needs.

For a man trapped between worlds, Vicente’s rituals offer a genuine way out: a way to re-engage imaginatively with the living world.

So, yes, Son of Monarchs is, on one level, about identity, about how a cosmopolitan high-flier learns to be a good son of Angangueo. But more than that, it’s about personality: about how Mendel learns to live both as a scientist, and as a man lost among butterflies.

French-Venezuelan filmmaker Alexis Gambis is himself a biologist and founded the Imagine Science Film Festival. While Son of Monarchs is steeped in colour, and full of cinematographer Alejandro Mejía’s mouth-watering (occasionally stomach-churning) macro-photography of butterflies and their pupae, ultimately this is a film, not about the findings of science, but about science as a vocation.

Gambis’s previous feature, The Fly Room (2014), was about the inspiration a 10-year-old girl draws from visits to T H Morgan’s famous (and famously cramped) “Fly Room” drosophila laboratory. Son of Monarchs asks what can be done if inspiration dries up. It is a hopeful film and, on more than the visual level, a beautiful one.

Chemistry off the leash

Reading Sharon Ruston’s The Science of Life and Death in Frankenstein for New Scientist, 27 October 2021

In 1817, in a book entitled Experiments on Life and its Basic Forces, the German natural philosopher Carl August Weinhold explained how he had removed the brain from a living kitten, and then inserted a mixture of zinc and silver into the empty skull. The animal “raised its head, opened its eyes, looked straight ahead with a glazed expression, tried to creep, collapsed several times, got up again, with obvious effort, hobbled about, and then fell down exhausted.”

The following year, Mary Shelley’s Frankenstein captivated a public not at all startled by its themes, but hungry for horripilating thrills and avid for the author’s take on arguably the most pressing scientific issue of the day. What was the nature of this strange zone that had opened up between the worlds of the living and the dead?

Three developments had muddied this once obvious and clear divide: in revolutionary France, the flickers of life exhibited by freshly guillotined heads; in Edinburgh, the black market in fresh (and therefore dissectable) corpses; and on the banks of busy British rivers, attempts (encouraged by the Royal Humane Society) to breathe life into the recently drowned.

Ruston covers this familiar territory well, then goes much further, revealing Mary Shelley’s iron grip on the scientific issues of her day. Frankenstein was written just as life’s material basis was emerging. Properties once considered unique to living things were turning out to be common to all matter, both living and unliving. Ideas about electricity offer a startling example.

For more than a decade, from 1780 to the early 1790s, it had seemed to researchers that animal life was driven by a newly discovered life source, dubbed ‘animal electricity’. This was a notion cooked up by the Bologna-born physician Luigi Galvani to explain a discovery he had made in 1780 with his wife Lucia. They had found that the muscles of dead frogs’ legs twitch when struck by an electrical spark. Galvani concluded that living animals possessed their own kind of electricity. The distinction between ‘animal electricity’ and metallic electricity didn’t hold for long, however. By placing discs of different metals on his tongue, and feeling the jolt, Alessandro Volta showed that electricity flows between two metals through biological tissue.

Galvani’s nephew, Giovanni Aldini, took these experiments further in spectacular, theatrical events in which corpses of hanged murderers attempted to stand or sit up, opened their eyes, clenched their fists, raised their arms and beat their hands violently against the table.

As Ruston points out, Frankenstein’s anguished description of the moment his Creature awakes “sounds very like the description of Aldini’s attempts to resuscitate 26-year-old George Forster”, hanged for the murder of his wife and child in January 1803.

Frankenstein cleverly clouds the issue of exactly what form of electricity animates the creature’s corpse. Indeed, the book (unlike the films) is much more interested in the Creature’s chemical composition than in its animation by a spark.

There are, Ruston shows, many echoes of Humphry Davy’s 1802 Course of Chemistry in Frankenstein. It’s not for nothing that Frankenstein’s tutor Professor Waldman tells him that chemists “have acquired new and almost unlimited powers”.

An even more intriguing contemporary development was the ongoing debate between the surgeon John Abernethy and his student William Lawrence in the Royal College of Surgeons. Abernethy claimed that electricity was the “vital principle” underpinning the behaviour of organic matter. Nonsense, said Lawrence, who saw in living things a principle of organisation. Lawrence was an early materialist, and his patent atheism horrified many. The Shelleys were friendly with Lawrence, and helped him weather the scandal engulfing him.

The Science of Life and Death is both an excellent introduction and a serious contribution to understanding Frankenstein. Through Ruston’s eyes, we see how the first science fiction novel captured the imagination of its public.

Dogs (a love story)

Reading Pat Shipman’s Our Oldest Companions: The story of the first dogs for New Scientist, 13 October 2021

Sometimes, when Spanish and other European forces entered lands occupied by the indigenous peoples of South America, they used dogs to massacre the indigenous human population. Occasionally their mastiffs, trained to chase and kill, actually fed upon the bodies of their victims.

The locals’ response was, to say the least, surprising: they fell in love. These beasts were marvellous novelties, loyal and intelligent, and a trade in domesticated dogs spread across a harrowed continent.

What is it about the dog that makes it so irresistible?

Anthropologist Pat Shipman is out to describe the earliest chapters in our species’ relationship with dogs. From a welter of archaeological and paleo-genetic detail, Shipman has fashioned an unnerving little epic of love and loyalty, hunting and killing.

There was, in Shipman’s account, nothing inevitable, nothing destined, about the relationship that turned the grey wolf into a dog. Yes, early Homo sapiens hunted with early “wolf-dogs” in a symbiotic relationship that let humans borrow a wolf’s superior speed and senses, while allowing wolves to share in a human’s magical ability to kill prey at a distance with spears or arrows. But why, in the pursuit of more food, would humans take in, feed, nurture, and breed another meat-eating animal? Shipman puts it more forcefully: “Who would select such a ferocious and formidable predator as a wolf for an ally and companion?”

To find the answer, says Shipman, forget intentionality. Forget the old tale in which someone captures a baby animal, tames it, raises it, selects a mate for it, and brings up the friendliest babies.

Instead, it was the particular ecology of Europe about 50,000 years ago that drove grey wolves and human interlopers from Mesopotamia into close cooperation, thereby seeing off most large game and Europe’s own indigenous Neanderthals.

This story was explored in Shipman’s 2015 The Invaders. Our Oldest Companions develops that argument to explain why dogs and humans did not co-evolve similar behaviours elsewhere.

Australia provides Shipman with her most striking example. Homo sapiens first arrived in Australia without dogs, because back then (around 35,000 years ago, possibly earlier) there were no such things. (The first undisputed dog remains belong to the Bonn-Oberkassel dog, buried beside humans 14,200 years ago.)

The ancestors of today’s dingoes were brought to Australia by ship only around 3000 years ago, where their behaviour and charisma immediately earned them a central place in Native Australian folklore.

Yet, in a land very different to Europe, less densely populated by large animals and boasting only two large mammalian predators, the Tasmanian tiger and the marsupial lion (both now extinct), there was never any mutual advantage to be had in dingoes and humans working, hunting, feasting and camping together. Consequently dingoes, though they’re eminently tameable, remain undomesticated.

The story of humans and dogs in Asia remains somewhat unclear, and some researchers still argue that the bond between wolf and man was first established here. Shipman, who’s having none of it, points to a crucial piece of non-evidence: if dogs first arose in Asia, then where are the ancient dog burials?

“Deliberate burial,” Shipman writes, “is just about the gold standard in terms of evidence that an animal was domesticated.” There are no such ancient graves in Asia. It’s near Bonn, on the right bank of the Rhine, that the earliest remains of a clearly domesticated dog were discovered in 1914, tucked between two human skeletons, their grave decorated with works of art made of bones and antlers.

Domesticated dogs now comprise more than 300 subspecies, though overbreeding has left hardly any that are capable of carrying out their intended functions of hunting, guarding, or herding.

Shipman passes no comment, but I can’t help but think this a sad and trivial end to a story that began so heroically, among mammoth and tiger and lion.

Who’s left in the glen?

Watching Emily Munro’s Living Proof: A climate story for New Scientist, 6 October 2021

Most environmental documentaries concentrate on the environment. Most films about the climate crisis focus on people who are addressing the crisis.

Assembled and edited by Emily Munro, a curator of the moving image at the National Library of Scotland, Living Proof is different. It’s a film about demobbed soldiers and gamekeepers, architects and miners and American ex-pats. It’s about working people and their employers, about people whose day-to-day actions have contributed to the industrialisation of Scotland, its export of materials and methods (particularly in the field of off-shore oil and gas), and its not coincidental environmental footprint.

Only towards the end of Munro’s film do we meet protestors of any sort. They’re deploring the construction of a nuclear power plant at Torness, 33 miles east of Edinburgh. Even here, Munro is less interested in the protest itself than in one impassioned, closely argued speech which, in the context of the film, completes an argument begun in Munro’s first reel (via a public information film from the mid-1940s) about the country’s political economy.

Assembled from propaganda and public information films, promotional videos and industrial reports, Living Proof is an archival history of what Scotland has told itself about itself, and how those stories, ambitions and visions have shaped the landscape, and affected the global environment.

Munro is in thrall to the changing Scottish industrial landscape, from its herring fisheries to its dams, from its slums and derelict mine-heads to the high modernism of its motorways and strip mills. Her vision is compelling and seductive. Living Proof is also — and this is more important — a film which respects its subjects’ changing aspirations. It tells the story of a poor, relatively undeveloped nation waking up to itself and trying to do right by its people.

It will come as no surprise, as Glasgow prepares to host the COP26 global climate conference, to hear that the consequences of those efforts have been anything but an unalloyed good. Powered by offshore oil and gas, home to Polaris nuclear missiles, and a redundancy-haunted grave for a dozen heavy industries (from coal-mining to ship-building to steel manufacture), Scotland is no-one’s idea of a green nation.

As Munro’s film shows, however, the environment was always a central plank of whatever argument campaigners, governments and developers made at the time. The idea that the Scots (and the rest of us) have only now “woken up to the environment” is a pernicious nonsense.

It’s simply that our idea of the environment has evolved.

In the 1940s, the spread of bog water, as the Highlands depopulated, was considered a looming environmental disaster, taking good land out of use. In the 1950s automation promised to pull working people out of poverty, disease and pollution. In the 1960s rapid communications were to serve an industrial culture that would tread ever more lightly over the mine-ravaged earth.

It’s with the advent of nuclear power, and that powerful speech on the beach at Torness, that the chickens come home to roost. That new nuclear plant is only going to employ around 500 people! What will happen to the region then?

This, of course, is where we came in: to a vision of a nation that, if it cannot afford its own people, will go to rack and ruin, with (to quote that 1943 information film) “only the old people and a few children left in the glen”.

Living Proof critiques an economic system that, whatever its promises, cannot help but denude the earth of its resources, and pauperise its people. It’s all the more powerful for being articulated through real things: schools and roads and pharmaceuticals, earth movers and oil rigs, washing machines and gas boilers.

Reasonable aspirations have done unreasonable harm to the planet. That’s the real crisis elucidated by Living Proof. It’s a point too easily lost in all the shouting. And it’s rarely been made so well.

The tools at our disposal

Reading Index, A History of the, by Dennis Duncan, for New Scientist, 15 September 2021

Every once in a while a book comes along to remind us that the internet isn’t new. Authors like Siegfried Zielinski and Jussi Parikka write handsomely about their adventures in “media archaeology”, revealing all kinds of arcane delights: the eighteenth-century electrical tele-writing machine of Joseph Mazzolari; Melvil Dewey’s Decimal System of book classification of 1873.

It’s a charming business, to discover the past in this way, but it does have its risks. It’s all too easy to fall into complacency, congratulating the thinkers of past ages for having caught a whiff, a trace, a spark, of our oh-so-shiny present perfection. Paul Otlet builds a media-agnostic City of Knowledge in Brussels in 1919? Lewis Fry Richardson conceives a mathematical Weather Forecasting Factory in 1922? Well, I never!

So it’s always welcome when an academic writer — in this case London-based English lecturer Dennis Duncan — takes the time and trouble to tell this story straight, beginning at the beginning, ending at the end. Index, A History of the is his story of textual search, told through charming portrayals of some of the most sophisticated minds of their era, from monks and scholars shivering among the cloisters of 13th-century Europe to server-farm administrators sweltering behind the glass walls of Silicon Valley.

It’s about the unspoken and always collegiate rivalry between two kinds of search: the subject index (a humanistic exercise, largely un-automatable, requiring close reading, independent knowledge, imagination, and even wit) and the concordance (an eminently automatable listing of words in a text and their locations).

Hugh of St Cher is the father of the concordance: his list of every word in the bible and its location, begun in 1230, was a miracle of miniaturisation, smaller than a modern paperback. It and its successors were useful, too, for clerics who knew their bibles almost by heart.

But the subject index is a superior guide when the content is unfamiliar, and it’s Robert Grosseteste (born in Suffolk around 1175) who we should thank for turning the medieval distinctio (an associative list of concepts, handy for sermon-builders), into something like a modern back-of-book index.

Reaching the present day, we find that with the arrival of digital search, the concordance is once again ascendant (the search function, Ctrl-F, whatever you want to call it, is an automated concordance), while the subject index, and its poorly recompensed makers, are struggling to keep up in an age of reflowable screen text. (Sewing embedded active indexes through a digital text is an excellent idea which, exasperatingly, has yet to catch on.)

Running under this story is a deeper debate, between people who want to access their information quickly, and people (especially authors) who want people to read books from beginning to end.

This argument about how to read has been raging literally for millennia, and with good reason. There is clear sense in Socrates’ argument against reading itself, as recorded in Plato’s Phaedrus (370 BCE): “You have invented an elixir not of memory, but of reminding,” his mythical King Thamus complains. Plato knew a thing or two about the psychology of reading, too: people who just look up what they need “are for the most part ignorant,” says Thamus, “and hard to get along with, since they are not wise, but only appear wise.”

Anyone who spends too many hours a day on social media will recognise that portrait — if they have not already come to resemble it.

Duncan’s arbitration of this argument is a wry one. Scholarship, rather than being timeless and immutable, “is shifting and contingent,” he says, and the questions we ask of our texts “have a lot to do with the tools at our disposal.”

One courageous act

Watching A New World Order for New Scientist, 8 September 2021

“For to him that is joined to all the living there is hope,” runs the verse from Ecclesiastes, “for a living dog is better than a dead lion.”

Stefan Ebel plays Thomasz, the film’s “living dog”, a deserter who, more frightened than callous, has learned to look out solely for himself.

In the near future, military robots have turned against their makers. The war seems almost over. Perhaps Thomasz has wriggled and dodged his way to the least settled part of the planet (Daniel Raboldt’s debut feature is handsomely shot in Arctic Finland by co-writer Thorsten Franzen). Equally likely, this is what the whole planet looks like now: trees sweeping in to fill the spaces left by an exterminated humanity.

You might expect the script to make this point clear, but there is no script; or rather, there is no dialogue. The machines (wasp-like drones, elephantine tripods, and one magnificent airborne battleship that would not look out of place in a Marvel movie) target people by listening out for their voices; consequently, not a word can be exchanged between Thomasz and his captor Lilja, played by Siri Nase.

Lilja takes Thomasz prisoner because she needs his brute strength. A day’s walk away from the questionable safety of her log cabin home, there is a burned-out military convoy. Amidst the wreckage and bodies, there is a heavy case — and in the case, there is a tactical nuke. Lilja needs Thomasz’s help in dragging it to where she can detonate it, perhaps bringing down the machines. While Thomasz acts out of fear, Lilja is acting out of despair. She has nothing more to live for. While Thomasz wants to live at any cost, Lilja just wants to die. Both are reduced to using each other. Both will have to learn to trust again.

In 2018, John Krasinski’s A Quiet Place arrived in cinemas — a film in which aliens chase down every sound and slaughter its maker. This cannot have been a happy day for the devoted and mostly unpaid German enthusiasts working on A New World Order. But silent movies are no novelty, and theirs has clearly ploughed its own furrow. The film’s sound design, by Sebastian Tarcan, is especially striking, balancing levels so that even a car’s gear change comes across as an imminent alien threat. (Wonderfully, there’s an acknowledging nod to the BBC’s Tripods series buried in the war machines’ emergency signal.)

Writing good silent film is something of a lost art. It’s much easier for writers to explain their story through dialogue, than to propel it through action. Maybe this is why silent film, done well, is such a powerful experience. There is a scene in this movie where Thomasz realises, not only that he has to do the courageous thing, but that he is at last capable of doing it. Ebel, on his own on a scree-strewn Finnish hillside, plays the moment to perfection.

Somewhere on this independent film’s long and interrupted road to distribution (it began life on Kickstarter in 2016) someone decided “A Living Dog” was too obscure a film title for these godless times — a pity, I think, and not just because “A New World Order”, the title picked for UK distribution, manages to be at once pompous and meaningless.

Ebel’s pitch-perfect performance drips guilt and bad conscience. In order to stay alive, he has learned to crawl about the earth. But Lilja’s example, and his own conscience, will turn dog to lion at last, and in a genre that never tires of presenting us with hyper-capable heroes, it’s refreshing, on this occasion, to follow the forging of one courageous act.

“This stretch-induced feeling of awe activates our brain’s spiritual zones”

Reading Angus Fletcher’s Wonderworks: Literary invention and the science of stories for New Scientist, 1 September 2021

Can science explain art?

Certainly: in 1999 the British neurobiologist Semir Zeki published Inner Vision, an illuminating account of how, through trial and error and intuition, different schools of art have succeeded in mapping the neurological architectures of human vision. (Put crudely, Rembrandt tickles one corner of the brain, Piet Mondrian another.)

Twelve years later, Oliver Sacks contributed to an already crowded music psychology shelf with Musicophilia, a collection of true tales in which neurological injuries and diseases are successfully treated with music.

Angus Fletcher believes the time has come for drama, fiction and literature generally to succumb to neurological explanation. Over the past decade, neuroscientists have been using pulse monitors, eye-trackers, brain scanners “and other gadgets” to look inside our heads as we consume novels, poems, films, and comic books. They must have come up with some insights by now.

Fletcher’s hypothesis is that story is a technology, which he defines as “any human-made thing that helps to solve a problem”.

This technology has evolved, over at least the last 4000 years, to help us negotiate the human condition, by which Fletcher means our awareness of our own mortality, and the creeping sense of futility it engenders. Story is “an invention for overcoming the doubt and the pain of just being us”.

Wonderworks is a scientific history of literature; each of its 25 chapters identifies a narrative “tool” which triggers a different, traceable, evidenced neurological outcome. Each tool comes with a goofy label: here you will encounter Butterfly Immersers and Stress Transformers, Humanity Connectors and Gratitude Multipliers.

Don’t sneer: these tools have been proven “to alleviate depression, reduce anxiety, sharpen intelligence, increase mental energy, kindle creativity, inspire confidence, and enrich our days with myriad other psychological benefits.”

Now, you may well object that, just as area V1 of the visual cortex did not evolve so we could appreciate the paintings of Piet Mondrian, so our capacity for horror and pity didn’t arise just so we could appreciate Shakespeare. So if story is merely “holding a mirror up to nature”, then Fletcher’s long, engrossing book wouldn’t really be saying anything.

As any writer will tell you, of course, a story isn’t merely a mirror. The problem comes when you try to make this perfectly legitimate point using neuroscience.

Too often for comfort, and as the demands of concision exceed all human bounds, the reader will encounter passages like: “This stretch-induced feeling of awe activates our brain’s spiritual zones, enriching our consciousness with the sensation of meanings beyond.”

Hitting sentences like this, I normally shut the book, with some force. I stayed my hand on this occasion because, by the time this horror came to light, two things were apparent. First, Fletcher — a neuroscientist turned story analyst — actually does know his neurobiology. Second, he really does know his literature, making Wonderworks a profound and useful guide to reading for pleasure.

Wonderworks fails as popular science because of the extreme parsimony of Fletcher’s explanations; fixing this problem would, however, have involved composing a multi-part work, and lost him his general audience.

The first person through the door is the one who invariably gets shot. Wonderworks is in many respects a pug-ugly book. But it’s also the first of its kind: an intelligent, engaged, erudite attempt to tackle, neurologically, not just some abstract and simplified “story”, but some of the world’s greatest literature, from the Iliad to The Dream of the Red Chamber, from Disney’s Up to the novels of Elena Ferrante.

It is easy to get annoyed with this book. But those who stay calm will reap a rich harvest.

The Art of Conjecturing

Reading Katy Börner’s Atlas of Forecasts: Modeling and mapping desirable futures for New Scientist, 18 August 2021

My leafy, fairly affluent corner of south London has a traffic congestion problem, and to solve it, there’s a plan to close certain roads. You can imagine the furore: the trunk of every kerbside tree sports a protest sign. How can shutting off roads improve traffic flows?

The German mathematician Dietrich Braess answered this one back in 1968, with a graph that kept track of travel times and densities for each road link, and distinguished between flows that are optimal for all cars, and flows optimised for each individual car.

His paper, “On a Paradox of Traffic Planning”, is a fine example of how a mathematical model predicts and resolves a real-world problem.
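Braess’s point is easiest to see in the standard textbook instance of his paradox (the numbers below are illustrative, not taken from the 1968 paper): add a free shortcut to a road network and every self-interested driver’s journey gets longer, so closing that road speeds everyone up.

```python
# Classic Braess network: 4000 drivers travel from S to E.
# Route 1: S->A takes x/100 minutes (x = cars on it), then A->E takes 45.
# Route 2: S->B takes 45 minutes, then B->E takes x/100.
N = 4000

# Without a shortcut, the selfish (Nash) split is symmetric: 2000 per route.
per_route = N // 2
time_without_link = per_route / 100 + 45   # 20 + 45 = 65 minutes for everyone

# Now add a free shortcut A->B. For a selfish driver, S->A costs at most
# 4000/100 = 40 < 45, and likewise B->E, so S->A->B->E dominates both old
# routes whatever anyone else does -- and all 4000 drivers pile onto it.
time_with_link = N / 100 + 0 + N / 100    # 40 + 0 + 40 = 80 minutes for everyone

print(time_without_link)  # 65.0
print(time_with_link)     # 80.0
```

Adding the link raises everyone’s journey from 65 to 80 minutes; removing it restores the faster equilibrium, which is why closing roads can genuinely improve traffic flow.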

This and over 1,300 other models, maps and forecasts feature in the references to Katy Börner’s latest atlas, which is the third to be derived from Indiana University’s traveling exhibit Places & Spaces: Mapping Science.

Atlas of Science: Visualizing What We Know (2010) revealed the power of maps in science; Atlas of Knowledge: Anyone Can Map (2015) focused on visualisation. In her third and final foray, Börner is out to show how models, maps and forecasts inform decision-making in education, science, technology, and policymaking. It’s a well-structured, heavyweight argument, supported by descriptions of over 300 model applications.

Some entries, like Bernard H. Porter’s Map of Physics of 1939, earn their place purely for their beauty and the insights they offer. Mostly, though, Börner chooses models that were applied in practice and made a positive difference.

Her historical range is impressive. We begin with equations (did you know Newton’s law of universal gravitation has been applied to human migration patterns and international trade?) and move through the centuries, tipping a wink to Jacob Bernoulli’s “The Art of Conjecturing” of 1713 (which introduced probability theory) and James Clerk Maxwell’s 1868 paper “On Governors” (an early gesture at cybernetics) until we arrive at our current era of massive computation and ever-more complex model building.

It’s here that interesting questions start to surface. To forecast the behaviour of complex systems, especially those which contain a human component, many current researchers reach for something called “agent-based modeling” (ABM) in which discrete autonomous agents interact with each other and with their common (digitally modelled) environment.
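To give a feel for what “discrete autonomous agents” means in practice, here is a toy sketch of my own devising (not a model from Börner’s book): agents wander a small grid and adopt an idea whenever they share a cell with an existing adopter. Even at this scale, the behaviour emerges from local rules rather than from any equation you could solve directly.

```python
import random

# Toy agent-based model (illustrative only): agents random-walk on a
# torus-wrapped grid; an idea spreads by co-location with an adopter.
random.seed(1)  # fixed seed so the run is reproducible

SIZE, STEPS, N = 10, 50, 20
agents = [{"x": random.randrange(SIZE),
           "y": random.randrange(SIZE),
           "adopter": i == 0}            # one initial adopter
          for i in range(N)]

for _ in range(STEPS):
    for a in agents:                     # each agent takes one random step
        a["x"] = (a["x"] + random.choice([-1, 0, 1])) % SIZE
        a["y"] = (a["y"] + random.choice([-1, 0, 1])) % SIZE
    occupied = {(a["x"], a["y"]) for a in agents if a["adopter"]}
    for a in agents:                     # local interaction rule: adopt on contact
        if (a["x"], a["y"]) in occupied:
            a["adopter"] = True

adopters = sum(a["adopter"] for a in agents)
print(adopters)  # how many of the 20 agents have adopted after 50 steps
```

Notice that even this toy has half a dozen free parameters (grid size, step count, population, movement rule, interaction rule, initial seed), which hints at why, scaled up, ABMs resist the backward sensitivity analysis Börner describes.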

Heady stuff, no doubt. But, says Börner, “ABMs in general have very few analytical tools by which they can be studied, and often no backward sensitivity analysis can be performed because of the large number of parameters and dynamical rules involved.”

In other words, an ABM offers the researcher an exquisitely detailed forecast, but no clear way of knowing why the model has drawn the conclusions it has — a risky state of affairs, given that all its data is ultimately provided by eccentric, foible-ridden human beings.

Börner’s sumptuous, detailed book tackles issues of error and bias head-on, but she left me tugging at a still bigger problem, represented by those irate protest signs smothering my neighbourhood.

If, over 50 years since the maths was published, reasonably wealthy, mostly well-educated people in comfortable surroundings have remained ignorant of how traffic flows work, what are the chances that the rest of us, industrious and preoccupied as we are, will ever really understand, or trust, all the many other models which increasingly dictate our civic life?

Börner argues that modelling data can counteract misinformation, tribalism, authoritarianism, demonization, and magical thinking.

I can’t for the life of me see how. Albert Einstein said, “Everything should be made as simple as possible, but no simpler.” What happens when a model reaches such complexity that only an expert can really understand it, or when even the expert can’t be entirely sure why the forecast is saying what it’s saying?

We have enough difficulty understanding climate forecasts, let alone explaining them. To apply these technologies to the civic realm raises a host of problems that have nothing to do with the technology, and everything to do with whether anyone will be listening.

Eagle-eyed eagles and blind, breathless fish

Secret Worlds: The extraordinary senses of animals by Martin Stevens, reviewed for New Scientist, 21 July 2021

Echo-locating bats use ultrasound to map their lightless surroundings. The information they gather is fine-grained — they can tell the difference between the wing cases and bodies of a beetle, and the scales of a moth’s wings. The extremely high frequency of ultrasound — far beyond our own ability to hear — generates clearer, less “blurry” sonic images. And we should be jolly glad bats use it, because these creatures are seriously noisy. A single bat, out for lunch, screams at around 140 decibels. Someone shouting a metre away generates only 90.
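That 50-decibel gap is bigger than it sounds, because decibels are logarithmic. A quick back-of-envelope check (mine, not Stevens’s):

```python
# Decibels are logarithmic: every extra 10 dB means ten times the sound power.
bat_db, shout_db = 140, 90
power_ratio = 10 ** ((bat_db - shout_db) / 10)
print(power_ratio)  # 100000.0 -- the bat's scream carries 100,000 times the power
```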

Since 2013, when his textbook Sensory Ecology, Behaviour, and Evolution was published, Martin Stevens, a professor at Exeter University in the UK, has had it in mind to write a popular version — a book that, while paying its dues to the extraordinary sensory abilities of animals, also has something to say about the evolution and plasticity of the senses, and above all the cost of acquiring them.

“Rather than seeing countless species all around us, each with every single one of their senses being a pinnacle of what is possible,” he writes, “we instead observe that evolution and development has honed those senses that the animal needs most, and scaled back on the others.” For every eagle-eyed, erm, eagle, there is a blind fish.

Stevens presents startling data about the expense involved in sensing the world. A full tenth of the energy used by a blowfly (Calliphora vicina) at rest is used up maintaining its photoreceptors and associated nerve cells.

Stevens also highlights some remarkable cost-saving strategies. The ogre-faced spider from Australia (Deinopis subrufa) has such large, sensitive and expensive-to-maintain eyes that it breaks down their photoreceptors and membranes during the day, and regenerates them at night in order to hunt.

Senses are too expensive to stick around when they’re not needed, so they disappear and reappear over evolutionary time. Their genetic mechanisms are surprisingly parsimonious: the same genetic pathways crop up again and again in quite unrelated species. The same, or similar, mutations have occurred in the Prestin gene in both dolphins and bats, unrelated species that both echolocate: “not surprising,” Stevens observes, “if evolution has limited genetic material to act on in the first place”.

Stevens boils his encyclopedic knowledge down to three animals per chapter, and each chapter focuses on a different sense. This rather mechanistic approach serves him surprisingly well; this is a field full of stories startling enough not to need much window-dressing. While Stevens’s main point is nature’s parsimony, it’s those wonderful extremes that will stick longest in the mind of the casual reader.

There are many examples of familiar senses brought to a rare peak. For example, the whiskers of a harbour seal (Phoca vitulina) help it find a buried flatfish by nothing more than the water flow created by the fish’s breathing.

More arresting still are the chapters devoted to senses wholly unfamiliar to us. Using their infra-red thermal receptors, vampire bats pick out particular blood vessels to bite into. Huge numbers of marine species detect minute amounts of electricity, allowing them to hunt, elude predators, and even to attract mates.

As for the magnetic sense, Stevens reckons “it is no exaggeration to say that understanding how [it] works has been one of the great mysteries in biology.”

There are two major competing theories to explain the magnetic senses, one relating to the presence of crystals in the body that react to magnetic fields, the other to light-dependent chemical processes occurring in the eyes in response to magnetic information. Trust the robin to complicate the picture still further; it seems to boast both systems, one for use in daylight and one for use in the dark!

And what of those satellite images of cows and deer that show herds lining themselves up along lines of magnetic force, their heads invariably pointing to magnetic north?

Some science writers are, if anything, over-keen to entertain. Stevens, by contrast, is the real deal: the unassuming keeper of a cabinet of true wonders.