Dogs (a love story)

Reading Pat Shipman’s Our Oldest Companions: The story of the first dogs for New Scientist, 13 October 2021

Sometimes, when Spanish and other European forces entered lands occupied by the indigenous peoples of South America, they used dogs to massacre the local population. Occasionally their mastiffs, trained to chase and kill, actually fed upon the bodies of their victims.

The locals’ response was, to say the least, surprising: they fell in love. These beasts were marvellous novelties, loyal and intelligent, and a trade in domesticated dogs spread across a harrowed continent.

What is it about the dog that makes it so irresistible?

Anthropologist Pat Shipman is out to describe the earliest chapters in our species’ relationship with dogs. From a welter of archaeological and paleo-genetic detail, Shipman has fashioned an unnerving little epic of love and loyalty, hunting and killing.

There was, in Shipman’s account, nothing inevitable, nothing destined, about the relationship that turned the grey wolf into a dog. Yes, early Homo sapiens hunted with early “wolf-dogs” in a symbiotic relationship that let humans borrow a wolf’s superior speed and senses, while allowing wolves to share in a human’s magical ability to kill prey at a distance with spears or arrows. But why, in the pursuit of more food, would humans take in, feed, nurture, and breed another meat-eating animal? Shipman puts it more forcefully: “Who would select such a ferocious and formidable predator as a wolf for an ally and companion?”

To find the answer, says Shipman, forget intentionality. Forget the old tale in which someone captures a baby animal, tames it, raises it, selects a mate for it, and brings up the friendliest babies.

Instead, it was the particular ecology of Europe about 50,000 years ago that drove grey wolves and human interlopers from Mesopotamia into close cooperation, thereby seeing off most large game and Europe’s own indigenous Neanderthals.

This story was explored in Shipman’s 2015 The Invaders. Our Oldest Companions develops that argument to explain why dogs and humans did not co-evolve similar behaviours elsewhere.

Australia provides Shipman with her most striking example. Homo sapiens first arrived in Australia without dogs, because back then (around 35,000 years ago, possibly earlier) there were no such things. (The first undisputed dog remains belong to the Bonn-Oberkassel dog, buried beside humans 14,200 years ago.)

The ancestors of today’s dingoes were brought to Australia by ship only around 3000 years ago, where their behaviour and charisma immediately earned them a central place in Aboriginal Australian folklore.

Yet, in a land very different to Europe, less densely populated by large animals and boasting only two large-sized mammalian predators, the Tasmanian tiger and the marsupial lion (both now extinct), there was never any mutual advantage to be had in dingoes and humans working, hunting, feasting and camping together. Consequently dingoes, though they’re eminently tameable, remain undomesticated.

The story of humans and dogs in Asia remains somewhat unclear, and some researchers still argue that the bond between wolf and man was first established here. Shipman, who’s having none of it, points to a crucial piece of non-evidence: if dogs first arose in Asia, then where are the ancient dog burials?

“Deliberate burial,” Shipman writes, “is just about the gold standard in terms of evidence that an animal was domesticated.” There are no such ancient graves in Asia. It’s near Bonn, on the right bank of the Rhine, that the earliest remains of a clearly domesticated dog were discovered in 1914, tucked between two human skeletons, their grave decorated with works of art made of bones and antlers.

Domesticated dogs now comprise more than 300 breeds, though overbreeding has left hardly any that are capable of carrying out their intended functions of hunting, guarding, or herding.

Shipman passes no comment, but I can’t help but think this a sad and trivial end to a story that began so heroically, among mammoth and tiger and lion.

 

Sir John Schorne’s Devil

Chasing an old adversary through a world containers made; salvaged from Arc magazine

1

It’s Christmas 2006 and I’m walking beside the creek in Dubai. The pavements and central reservations teem with tourists and Russian prostitutes in roughly equal numbers. Tied up six deep at the quay, dhows are being packed to the gunwales with second-hand Toyota Hiluxes. Piles of tyres, plucked off wrecked and defunct cars, are being hauled by hand into the rusted holds of boats bound for retreading factories in Navlakhi and Karachi. From China, foam mattresses. Piles of flashlights. Chairs, tables, oxytetracycline injections. Chilean softwoods. Drums of sorbitol from Mumbai.

Container shipping serves the great ports of the earth, but there are few enough of them, and none in North Africa. With no deep harbours, the region depends on little ships like these. The scene fills me with an intense (though false) nostalgia. It’s an echo of the way shipping used to be, as recorded in the writings of my literary heroes: Joseph Conrad, James Hanley, Malcolm Lowry…

Patrick Hamilton: at the end of The Midnight Bell, Bob, defrauded and betrayed, stares across the Thames. The river runs beneath him, “flowing out to the sea…

“The sea! The sea! What of the sea?

“The sea!

“The solution – salvation! The sea! Why not? He would go back, like the great river, to the sea!”

This is not suicide. This is a career move.

“He had wasted enough time. He would go down to the docks now, and see what was doing. Now! Now! Now!”

Now, of course, is too late. It used to take days to unload a ship and a ship’s crewmen could spend most of this time resting and drinking and whoring themselves silly on shore. Such capers are relegated to the storybooks now. These days, time in port is measured in hours; junior officers pull twelve-, even eighteen-hour shifts, and regular seamen like Bob (these days a Bangladeshi or Malay) are rarely allowed off the ship. These days, a crew works itself to exhaustion in harbour and spends the voyage recuperating.

Since April 1956 the shipping container has been ushering in a more ordered world. No stench, no sweat. No broken limbs. No sores, no sprains. No ham-boned supermen rolling drunk on Friday night on wages their children never see.

This new, clean, modular world arrives in boxes. From port to port, big, square-built ships carry ever bigger quantities of it about the earth. Great cranes lift it and deposit it on truck beds and railway cars. It is borne inland to every town, every settlement. Ships were once shackled, constipated, squeezing out their goods for weeks on end on the backs of men made beasts. Today they evacuate themselves in hours.

The shipping container was developed by Malcolm McLean, an American entrepreneur who realised his trucking business was being undercut by domestic shipping. After the war, you could buy a war-surplus cargo ship for next to nothing. “Jumboize” it by adding a German-made midsection, and you had a craft capable of moving goods more cheaply than road or rail. The only drawback, before McLean, was trying to get your goods on and off the vessel. That consumed time and labour enough to represent half your costs – even if you were carting products halfway round the world. Standardised containers, and the machinery developed to juggle them, can now disgorge vessels a fifth of a mile long in hours.

Two-hundred-foot-high cranes reach across ships broader than the Panama Canal. Some of the more powerful commercial computer applications on the planet ensure that goods are stacked in the right order, in the right containers, in the right section of the right hold of the right ship plying the right route to ensure that these goods are delivered at the right place at the right time. The drive to efficiency has squeezed schedules to the point where Ford and Walmart and Toyota are wholly dependent on timely deliveries for their operation. If their boxes are more than a few days late, they go into hibernation.
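The ordering rule at the heart of that software can be parodied in a few lines of Python. This is a toy of my own, with invented container numbers, not anything drawn from a real stowage planner, which would also be juggling weight, trim, reefer plugs and hazmat rules:

```python
# A toy stowage rule (my parody; the container numbers are invented): load boxes
# for the last port of call first, so each port's boxes sit on top when you arrive.

route = ["Rotterdam", "Felixstowe", "New York"]   # ports in order of call
containers = [
    ("CBHU3202732", "New York"),
    ("MSKU9070323", "Rotterdam"),
    ("TGHU1983425", "Felixstowe"),
    ("MSKU1200874", "Rotterdam"),
]

# Sort by how late in the voyage each box is discharged; load the latest first.
load_order = sorted(containers, key=lambda c: route.index(c[1]), reverse=True)

for box, port in load_order:
    print(f"load {box}  (discharge at {port})")
# Unloading then proceeds strictly top-down at every port: no digging, no re-handling.
```

Get the sort key right and the ship unloads itself from the top down, with nothing re-handled en route.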

Logistics used to be a military term. Now it’s a core function of business. By stripping the fat and inefficiency out of its resource chain, developed Western commerce has placed itself on a permanent war footing. And the war goes well. The variety of goods the First World consumes has increased fourfold, while expanding markets have brought economic and social benefits, however unevenly, to the poorest people on the planet.

Like the fatuous “War on Terror”, however, this is an open-ended conflict: we will never know when we have won it. At what point does an industry’s athleticism start to undermine its health? At what point does a strict diet bloom into galloping anorexia?

2

Construction in the Persian Gulf stops altogether in summer, when the skies are white and people (if they’re outside at all, which is seldom) huddle in pitiful scraps of shade. Even the more temperate months prove too much for some. The furnace light, the dust, the scale of things.

July 2009. A fence, not much higher than a man and topped with razor wire, runs rifle straight beside the road all the way to a granular horizon. Behind the fence stretch mile after mile of prefabricated houses. The narrow lanes between them are slung with clothes lines. Construction workers sit knocking pallets together. Here and there mounds of graded gravel rise in parody of the great ranges to the south. The road has begun to disintegrate, its surface crazed and sunken under the weight of so many trucks bound for Dubai with loads of boulders and gravel. The World and The Palm began life here, as the insides of mountains.

I come to a roundabout. There is a fountain at its centre: a dry cement tower clad in blue swimming-pool tile, sterile as a bathroom fitting. There isn’t a hint of what places the roundabout might one day serve. No buildings. No traffic. No signs. Just a tarmac sunburst in the sand, its exits blurred and feathered by encroaching dust. On the horizon there are whole city blocks that aren’t even on a map yet. New developments, bankrolled by the Saudis. University cities thrown up at miraculous speed by Bangladeshi guest workers. Fawn crenellations hover inches off the horizon on a carpet of illusory blue.

It’s late afternoon by the time I find my way back to the city. My hotel room in Dubai has hyperactive air conditioning, a view of the Emirates Towers, and Miami Vice reruns on an endless loop.

In an episode entitled “Tale of the Goat”, nobody seems to know whether to take the plot seriously or not. One minute Crockett is joking about voodoo, the next he’s taking advice on Haitian religious practices from Pepe the janitor.

He’s right to be worried. Papa Legba, a Haitian voodoo priest, has faked his own death, been buried, disinterred, and shipped in his coffin to Miami, all to take revenge on a business rival. Clarence Williams III’s portrayal of Papa – brain damaged, half-paralysed, sickeningly malevolent – kicks a hole in the TV big enough for the nightmares to slip out.

People do this kind of thing for real. In 1962 the Haitian petty speculator Clairvius Narcisse was poisoned, buried alive, dug up and drugged into believing that he was a zombie.

Notoriously, shipping containers have cleaned up and expanded even the business of self-burial. Vanishing into a box has never been simpler. According to figures from the United Nations, in 2007 slave traders made more money than Google, Nike and Starbucks combined, their global logistics smoothed by the advent of container shipping.

Incarceration doesn’t even have to be uncomfortable. Five weeks after the September 11 attacks, a dockworker in the Italian container port of Gioia Tauro discovered a stowaway inside a container appointed for a long voyage. There was a bed, a heater, a toilet and a satellite phone. The box, bound for Chicago via Rotterdam and Canada, was chartered by Maersk Sealand’s Egyptian office and loaded in Port Said onto a German-owned charter ship flying an Antiguan flag. Where the stowaway was bound is anyone’s guess. Incredibly, he was granted bail and vanished, abandoning his cellphone, his laptop, airport security passes and an airline mechanic’s certificate valid for Kennedy and O’Hare.

It’s an old nightmare, perhaps the oldest: waking up inside a sealed coffin. The strange voyage of “Container Bob” epitomises its flipside: fear of what leaps out at you when you prise the lid open. This too is an old anxiety; so old, the words that express it no longer line up properly. Pandora’s box was actually a jar (a ritual container for the dead, as it happens, so the frisson survives the mistranslation). And the first jack-in-the-box hid in a boot. The English prelate Sir John Schorne was supposed to have cast the devil into a boot some time in the 1200s, saving the village of North Marston in Buckinghamshire from perdition.

Since then the “diable en boîte” has been springing out at us at the least expected moments, maturing from a crank-operated children’s toy in the 1500s to a pyromaniac MacGuffin in director Robert Aldrich’s nuclear noir Kiss Me Deadly, to a headline-grabbing red herring at international security conferences. (The “suitcase nuke” is still a myth, though one creeping uncomfortably close to reality: backpack nukes have been around since the 1970s.)

The shipping container is, according to security analysts, the old devil’s current mode of transportation. Why would the world’s whirling axes of evil go to all the trouble of developing an intercontinental ballistic missile when a container can do the same job, more accurately and over a greater distance, for under $5000?

Around 150 companies in the US alone – from tiny start-ups to giants like IBM – are using technical fixes to plug Homeland Security’s very real fear of a container-delivered chemical, biological or nuclear attack on the US mainland. With more than two million steel and aluminium boxes moving over the earth at any one time, the scale of the work is Herculean: to fit tamper-proof sensors providing real-time geolocation data on every can on earth.

Meanwhile America’s megaport Los Angeles/Long Beach unloads its containers in long parallel rows, like trains. An imaging machine mounted on tall steel legs trundles over them. The machines are popular, but they’re not without flaws. Concealed nuclear bombs are not the only sources of radiation on the dockside. There are ceramic tiles. There are bananas. Caesium and cobalt are famously hard to distinguish from kitty litter. And the more sensitive the detectors, the more false positives the port authority has to handle.
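It’s worth putting numbers on that last point. Here is a back-of-envelope sketch in Python; every figure in it is my own illustrative assumption, not a port-authority statistic. Even a detector that is right 99 per cent of the time, scanning millions of overwhelmingly innocent boxes, buries its operators in false alarms:

```python
# Base rates on the dockside: all numbers are my illustrative assumptions,
# not port-authority figures.

containers_scanned = 2_000_000    # boxes through a megaport
p_threat = 1e-7                   # assumed prior: one box in ten million is a real threat
sensitivity = 0.99                # P(alarm | threat)
false_positive_rate = 0.01        # P(alarm | benign): tiles, bananas, kitty litter...

true_alarms = containers_scanned * p_threat * sensitivity
false_alarms = containers_scanned * (1 - p_threat) * false_positive_rate

# Bayes' rule: the chance that any given alarm is the real thing
p_threat_given_alarm = true_alarms / (true_alarms + false_alarms)

print(f"expected false alarms: {false_alarms:,.0f}")      # ~20,000
print(f"expected true alarms:  {true_alarms:.2f}")        # ~0.2
print(f"P(threat | alarm):     {p_threat_given_alarm:.1e}")
```

On these invented numbers a port expects twenty thousand false alarms for every fifth of a genuine one; the chance that any given alarm is real is about one in a hundred thousand.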

In any open-ended conflict, the enemy we most have to fear is fear itself: our crippling overreaction to false positives. It is more than possible, in a system lean enough to admit no delay, that the entire world economy can and will hoax itself to a standstill.

3

It’s 2011 and I’m making a third and final visit to Dubai, a city the old boys remember as a ramshackle fort overlooking a malarial creek. The city’s “miracle” is more or less complete. The transformation looks sudden, but it’s been forty years in the making. In the 1970s, when towns like Abu Dhabi were allowing their harbours to silt up, confident that all the wealth they could ever need would well up out of the ground forever, oil-poor Dubai was dredging its harbour and encouraging imports. Now it’s a vital entrepôt for former states of the Soviet Union: people hungry for everything, from plastic plates and toys to polyester clothing, much of it manufactured in the Far East and India.

The major part of this traffic is containerised and handled by machines outside the city proper, in Port Rashid and Jebel Ali. The road out to these deep-water megaports is lined with private clinics: plastic surgeries catering to the city’s sun-frazzled Jumeira Janes. Dubai maddens people. Never mind the heat. Since the global recession, the advertisements on Dubai’s innumerable hoardings and covering entire faces of its complete-but-empty high-rises have grown even more extreme in their evocation of the city’s dream-logic.

“Live the Life.”

“We’ve set our vision higher.”

Even the white-goods retailers have names like Better Life and New Hope. Aspiration as an endless Jacob’s ladder. Perfect your car, your phone, your home, your face. Perfect your labia. Plastic surgeons line the way to erotically sterile encounters at the Burj al Arab hotel. Then what? Then where? The elevators only go up. Take a helicopter ride into the future. Cheat death. Chrome the flesh. Every advert I ride past features a robot. A perfected man. A smoothly milled thigh or tit. A cyborg sits at the wheel of the latest model four-wheel drive, limbs webbed promiscuously around the controls. A mobile phone blinks in a stainless steel hand.

The shipping business still fascinates me, and researching it continues to swallow the freelance budget. This will have to stop.

For a start, Dubai feeds fantasy. Every once in a while an attic in the Old Town, Deira, is raided by armed police. A man is led away, or sometimes shot. Every now and again, a headless corpse is found slumped in a pool of blood in one of the city’s many subterranean car parks, and children fishing in the creek watch as a heron rips beakfuls of hair from the remains of a human head. In Dubai, everyone is anonymous and significant at the same time. It’s Casablanca. It’s Buenos Aires. A loser’s paradise.

For another, my ill-financed pursuit of Sir John Schorne’s devil has taken a new twist. He has swapped vehicles yet again. I missed this at the time, but while I was watching Crockett and Tubbs reruns, over 75,000 others were waking to one of YouTube’s more piquing viral crazes: a video of a man opening a box. Uploaded on 11 November 2006, this video – a so-bland-as-to-be-transgressive record of a 60GB Japanese PS3 gaming console being removed from its cardboard and polystyrene packaging – is the earliest extant example we have of “unboxing”.

The Wall Street Journal rightly picked up on the genre’s striptease pedigree, but it missed, I think, the greater part of the video’s appeal. CheapyD Gets His PS3 – Unboxing is, above all, a comprehensive attempt to disarm the anxieties conjured by Jack’s unexpected springing from his box. In the video, a carton is opened slowly and steadily under controlled conditions to reveal exactly what the packaging said it would contain.

The cardboard carton is the disposable, end-user-friendly residue of the shipping industry. Men who might once have tugged and trawled the world’s plenty about its rumbling yards with case hooks now haul goods ordered off the internet about our residential streets: careless and savage, they crush our precious parcels through diminutive letter boxes, minuscule slots cut for a different, more genteel age.

When people talk about shipping, they talk about goods. They talk about televisions and motorbikes and cars and toys and clothing and perfumes and whisky. The more informed discuss screws, pigments, paints, moulded plastics, rolls of leather, chopped-glass matting, bales of cotton, chemicals, dyes, yeasts, spores, seeds, acids and glues. The paranoid occasionally mention waste. The single biggest global cargo by volume is waste paper, closely followed by rags and shoes, soft-drinks cans, worn tyres, rebars and copper wire. The containers themselves escape notice.

They are the walls of our world. In less than a single generation, they have robbed us of two-thirds of the earth’s surface. The sea has been stolen away by vast corporations, complex algorithms, robot cities, and ships as big as towns. Everything ends up in a box these days and the world has become a kind of negative of itself: not an escape, but a trap.

The real appeal of CheapyD Gets His PS3 – Unboxing is that it doesn’t just disarm the jack-in-the-box game. It damn near reverses it. The contents of the box are irrelevant: a final bolus of processed matter for us to scrabble past as we struggle towards rebirth. We are all Jacks now, compressed by our own logistics, imprisoned in the rhetoric of war, crippled with anxiety, our brains rotten with it, and ready to spring, if only we could find the catch to this lid.

CheapyD’s next trick – five years, we’ve waited to see this, what is taking him so long? – will be to escape. To jump into his box, and disappear.

Who’s left in the glen?

Watching Emily Munro’s Living Proof: A climate story for New Scientist, 6 October 2021

Most environmental documentaries concentrate on the environment. Most films about the climate crisis focus on people who are addressing the crisis.

Assembled and edited by Emily Munro, a curator of the moving image at the National Library of Scotland, Living Proof is different. It’s a film about demobbed soldiers and gamekeepers, architects and miners and American ex-pats. It’s about working people and their employers, about people whose day-to-day actions have contributed to the industrialisation of Scotland, its export of materials and methods (particularly in the field of off-shore oil and gas), and its not-coincidental environmental footprint.

Only towards the end of Munro’s film do we meet protestors of any sort. They’re deploring the construction of a nuclear power plant at Torness, 33 miles east of Edinburgh. Even here, Munro is less interested in the protest itself than in one impassioned, closely argued speech which, in the context of the film, completes an argument begun in Munro’s first reel (via a public information film from the mid-1940s) about the country’s political economy.

Assembled from propaganda and public information films, promotional videos and industrial reports, Living Proof is an archival history of what Scotland has told itself about itself, and how those stories, ambitions and visions have shaped the landscape, and affected the global environment.

Munro is in thrall to the changing Scottish industrial landscape, from its herring fisheries to its dams, from its slums and derelict mine-heads to the high modernism of its motorways and strip mills. Her vision is compelling and seductive. Living Proof is also — and this is more important — a film which respects its subjects’ changing aspirations. It tells the story of a poor, relatively undeveloped nation waking up to itself and trying to do right by its people.

It will come as no surprise, as Glasgow prepares to host the COP26 global climate conference, to hear that the consequences of those efforts have been anything but an unalloyed good. Powered by offshore oil and gas, home to Polaris nuclear missiles, and a redundancy-haunted grave for a dozen heavy industries (from coal-mining to ship-building to steel manufacture), Scotland is no-one’s idea of a green nation.

As Munro’s film shows, however, the environment was always a central plank of whatever argument campaigners, governments and developers made at the time. The idea that the Scots (and the rest of us) have only now “woken up to the environment” is a pernicious nonsense.

It’s simply that our idea of the environment has evolved.

In the 1940s, the spread of bog water, as the Highlands depopulated, was considered a looming environmental disaster, taking good land out of use. In the 1950s automation promised to pull working people out of poverty, disease and pollution. In the 1960s rapid communications were to serve an industrial culture that would tread ever more lightly over the mine-ravaged earth.

It’s with the advent of nuclear power, and that powerful speech on the beach at Torness, that the chickens come home to roost. That new nuclear plant is only going to employ around 500 people! What will happen to the region then?

This, of course, is where we came in: to a vision of a nation that, if it cannot afford its own people, will go to rack and ruin, with (to quote that 1943 information film) “only the old people and a few children left in the glen”.

Living Proof critiques an economic system that, whatever its promises, cannot help but denude the earth of its resources, and pauperise its people. It’s all the more powerful for being articulated through real things: schools and roads and pharmaceuticals, earth movers and oil rigs, washing machines and gas boilers.

Reasonable aspirations have done unreasonable harm to the planet. That’s the real crisis elucidated by Living Proof. It’s a point too easily lost in all the shouting. And it’s rarely been made so well.

“Grotesque, awkward, and disagreeable”

Reading Stanislaw Lem’s Dialogues for the Times, 5 October 2021

Some writers follow you through life. Some writers follow you beyond the grave. I was seven when Andrei Tarkovsky filmed Lem’s satirical sci-fi novel Solaris, thirty-seven when Steven Soderbergh’s very different (and hugely underrated) Solaris came out, forty when Lem died. Since then, a whole other Stanislaw Lem has arisen, reflected in philosophical work that, while widely available elsewhere, had to wait half a century or more for an English translation. In life I have nursed many regrets: that I didn’t learn Polish is not the least of them.

The point about Lem is that he writes about the future, predicting the way humanity’s inveterate tinkering will enable, pervert and frustrate its ordinary wants and desires. This isn’t “the future of technology” or “the future of the western world” or “the future of the environment”. It’s neither “the future as the author would like it to be”, nor “the future if the present moment outstayed its welcome”. Lem knows a frightening amount of science, and even more about technology, but what really matters is what he knows about people. His writing is not just surprisingly prescient; it’s timeless.

Dialogues is about cybernetics, the science of systems. A system is any material arrangement that responds to environmental feedback. A steam engine is a mere mechanism, until you add the governor that controls its internal pressure. Then it becomes a system. When Lem was writing, systems thinking was meant to transform everything, conciliating between the physical sciences and the humanities to usher in a technocratic Utopia.
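Lem’s distinction between mechanism and system is easy to make concrete. The toy below is my sketch, not Lem’s, and its constants are arbitrary: run the engine open-loop and it races to whatever ceiling friction allows; add the governor, a single line of feedback, and it settles into regulated behaviour of its own:

```python
# Lem's example as a toy: an engine is a mere mechanism until feedback makes it a system.
# All constants are arbitrary illustrative choices.

def run(steps=200, governed=True):
    speed = 0.0
    throttle = 1.0
    setpoint = 100.0
    for _ in range(steps):
        speed += 5.0 * throttle              # open throttle accelerates the engine
        speed *= 0.98                        # friction bleeds a little speed away
        if governed:
            # the governor: the engine's own output feeds back to close its throttle
            error = setpoint - speed
            throttle = max(0.0, min(1.0, 0.02 * error))
    return speed

print(f"ungoverned: {run(governed=False):6.1f}")  # races toward its frictional ceiling (~245)
print(f"governed:   {run(governed=True):6.1f}")   # settles near 83: proportional feedback droops
```

Note that the governed engine settles below its setpoint: proportional feedback always droops a little, one small instance of a system behaving in ways you cannot simply read off its parts.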

Enthusiastic as 1957-vintage Lem was, there is something deliciously levelling about how he introduces the cybernetic idea. We can bloviate all we like about using data and algorithms to create a better society; what drives Philonous and Hylas’s interest in these eight dialogues (modelled on Berkeley’s Three Dialogues of 1713) is Hylas’s desperate desire to elude Death. This new-fangled science of systems reimagines the world as information, and the thing about information is that it can be transmitted, stored and (best of all) copied. Why then can’t it transmit, store and copy poor Death-haunted Hylas?

Well, of course, that’s certainly do-able, Philonous agrees — though Hylas might find cybernetic immortality “grotesque, awkward, and disagreeable”. Sure enough, Hylas baulks at Philonous’s culminating vision of humanity immortalised in serried ranks of humming metal cabinets.

This image certainly was prescient: Cybernetics was supposed to be a philosophy, one that would profoundly change our understanding of the animate and inanimate world. The philosophy failed to catch on, but its insights created something utterly unexpected: the computer.

Dialogues is important now because it describes (or described, rather, more than half a century ago — you can almost hear Lem’s slow hand-clapping from the Beyond) all the ways we do not comprehend the world we have made.

Cybernetics teaches us that systems are animate. It doesn’t matter what a system is made from. Workers in an office, ones and zeroes clouding a chip, proteins folding and refolding in a living cell, string and pulleys in a playground: all are good building materials for systems, and once a system is up and running, it is no longer reducible to its parts. It’s a distinct, unified whole, shaped by its past history and actively coexisting with its environment, and exhibiting behaviour that cannot be precisely predicted from its structure. “If you insist on calling this new system a mechanism,” Lem remarks, drily, “then you must apply that term to living beings as well.”

We’ve yet to grasp this nettle: that between the living and non-living worlds sits a world of systems, unalive yet animate. No wonder, lacking this insight, we spend half our lives sneering at the mechanisms we do understand (“Alexa, stop calling my Mum!”) and the other half on our knees, worshipping the mechanisms we don’t. (“It says here on Facebook…”) The very words we use — “artificial intelligence” indeed! — reveal the paucity of our understanding.

Lem understood, as no-one then or since has understood, how undeserving of worship are the systems (be they military, industrial or social) that are already strong enough to determine our fate. A couple of years ago, around the time Hong Kong protesters were destroying facial recognition towers, a London pedestrian was fined £90 for hiding his face from an experimental Met camera. The consumer credit reporting company Experian uses machine learning to decide the financial trustworthiness of over a billion people. China’s Social Credit System (actually the least digitised of China’s surveillance systems) operates under multiple, often contradictory legal codes.

The point about Lem is not that he was terrifyingly smart (though he was that); it’s that he had skin in the game. He was largely self-taught, because he had to quit university after writing satirical pieces about Soviet poster-boy Trofim Lysenko (who denied the existence of genes). Before that, he was dodging Nazis in Lviv (and mending their staff cars so that they would break down). In his essay “Applied Cybernetics: An Example from Sociology”, Lem uses the new-fangled science of systems to anatomise the Soviet thinking of his day, and from there, to explain how totalitarianism is conceived, spread and performed. Worth the price of the book in itself, this little essay is a tour de force of human sympathy and forensic fury, shorter than Solzhenitsyn, and much, much funnier than Hannah Arendt.

Peter Butko’s translations of the Dialogues, and the revisionist essays Lem added to the 1971 second edition, are as witty and playful as Lem’s allusive Polish prose demands. His endnotes are practically a book in themselves (and an entertaining one, too).

Translated so well, Lem needs no explanation, no contextualisation, no excuse-making. Lem’s expertise lay in technology, but his loyalty lay with people, in all their maddening tolerance for bad systems. “There is nothing easier than to create a state in which everyone claims to be completely satisfied,” he wrote; “being stretched on the bed, people would still insist — with sincerity — that their life is perfectly fine, and if there was any discomfort, the fault lay in their own bodies or in their nearest neighbor.”

 

If this is Wednesday then this must be Thai red curry with prawns

Reading Dan Saladino’s Eating to Extinction for the Telegraph, 26 September 2021

Within five minutes of my desk: an Italian delicatessen, a Vietnamese pho house, a pizzeria, two Chinese, a Thai, and an Indian “with a contemporary twist” (don’t knock it till you’ve tried it). Can such bounty be extended over the Earth?

Yes, it can. It’s already happening. And in what amounts to a distillation of a life’s work writing about food, and sporting a few predictable limitations (he’s a journalist; he puts stories in logical order, imagining this makes an argument), Dan Saladino’s Eating to Extinction explains just what price we’ll pay for this extraordinary achievement, which promises not only to end world hunger by 2030 (a much-touted UN goal), but to make California rolls available everywhere from Kamchatka to Karachi.

The problem with my varied diet (if this is Wednesday then this must be Thai red curry with prawns) is that it’s also your varied diet, and your neighbour’s; it’s rapidly becoming the same varied diet across the whole world. You think your experience of world cuisine reflects global diversity? Humanity used to sustain itself (admittedly, not too well) on 6,000 species of plant. Now, for over three quarters of our calories, we gorge on just nine: rice, wheat and maize, potato, barley, palm oil and soy, sugar from beets and sugar from cane. The same narrowing can be found in our consumption of animals and seafood. What looks to us like the world on a plate is in fact the sum total of what’s available world-wide, now that we’ve learned to grow ever greater quantities of ever fewer foods.

Saladino is in the anecdote business; he travels the Earth to meet his pantheon of food heroes, each of whom is seen saving a rare food for our table – a red pea, a goaty cheese, a flat oyster. So far, so very Sunday supplement. Nor is there anything to snipe at in the adventures of, say, Woldemar Mammel who, searching in the attics of old farmhouses and in barns, rescued the apparently extinct Swabian “alb” lentil; nor in former chef Karlos Baca’s dedication to rehabilitating an almost wholly forgotten native American cuisine.

That said, it takes Saladino 450 pages (which is surely a good 100 pages too many) to explain why the Mammels and Bacas of this world are needed so desperately to save a food system that, far from breaking down, is feeding more and more food to more and more people.

The thing is, this system rests on two foundations: nitrogen fertiliser, and monocropping. The technology by which we fix nitrogen from the air by an industrial process is sustainable enough, or can be made so. Monocropping, on the other hand, was a dangerous strategy from the start.

In the 1910s and 1920s the Soviet agronomist Nikolai Vavilov championed the worldwide uptake of productive strains, with every plant a clone of its neighbour. How else, but by monocropping, do you feed the world? By the 1930s though, he was assembling the world’s first seed banks in a desperate effort to save the genetic diversity of our crops — species that monocropping was otherwise driving to extinction.

Preserving heritage strains matters. They were bred over thousands of years to resist all manner of local environmental pressures, from drought to deluge to disease. Letting them die out is the genetic equivalent of burning the library at Alexandria.

But seed banks can’t hold everything (there is, as Saladino remarks, no Svalbard seed vault for chickens) and are anyway a desperate measure. Saladino’s tale of how, come the Allied invasion, the holdings of Iraq’s national seed bank at Abu Ghraib were bundled off to Tel Hadya in Syria, only then to be frantically transferred to Lebanon, itself an increasingly unstable state, sounds a lot more Blade Runner 2049 than Agronomy 101.

Better to create a food system that, while not necessarily promoting rare foods (fancy some Faroese air-fermented sheep meat? — thought not) will at least not drive such foods to extinction.

The argument is a little bit woolly here, as what the Faroe islanders get up to with their sheep is unlikely to have global consequences for the world’s food supply. Letting a crucial drought-resistant strain of wheat go extinct in a forgotten corner of Afghanistan, on the other hand, could have unimaginably dire consequences for us in the future.

Saladino’s grail is a food system with enough diversity in it to adapt to environmental change and withstand the onslaught of disease.

Is such a future attainable? Only to a point. Some wild foods are done for already because the high prices they command incentivize their destruction. If you want some of Baca’s prized and pungent bear root, native to a corner of Colorado, you’d better buy it now (but please, please don’t).

Rare cultivated foods stand a better chance. The British Middle White pig is rarer than the Himalayan snow leopard, says Saladino, but the stocks are sustainable enough that it is now being bred for the table.

Attempting to encompass the Sixth Extinction on the one hand, and the antics of slow-foodies like Mammel and Baca on the other is a recipe for cognitive dissonance. In the end, though, Saladino succeeds in mapping the enormity of what human appetite has done to the planet.

Saladino says we need to preserve rare and forgotten foods, partly because they are part of our cultural heritage, but also, and more hard-headedly, so that we can study and understand them, crossing them with existing lines to shore up and enrich our dangerously over-simplified food system. He’s nostalgic for our lost food past (and who doesn’t miss apples that taste of apples?) but he doesn’t expect us to delete Deliveroo and spend our time grubbing around for roots and berries.

Unless of course it’s all too late. It would not take many wheat blights or avian flu outbreaks before slow food is all that’s left to eat.

 

The tools at our disposal

Reading Index, A History of the, by Dennis Duncan, for New Scientist, 15 September 2021

Every once in a while a book comes along to remind us that the internet isn’t new. Authors like Siegfried Zielinski and Jussi Parikka write handsomely about their adventures in “media archaeology”, revealing all kinds of arcane delights: the eighteenth-century electrical tele-writing machine of Joseph Mazzolari; Melvil Dewey’s Decimal System of book classification of 1873.

It’s a charming business, to discover the past in this way, but it does have its risks. It’s all too easy to fall into complacency, congratulating the thinkers of past ages for having caught a whiff, a trace, a spark, of our oh-so-shiny present perfection. Paul Otlet builds a media-agnostic City of Knowledge in Brussels in 1919? Lewis Fry Richardson conceives a mathematical Weather Forecasting Factory in 1922? Well, I never!

So it’s always welcome when an academic writer — in this case London-based English lecturer Dennis Duncan — takes the time and trouble to tell this story straight, beginning at the beginning, ending at the end. Index, A History of the is his story of textual search, told through charming portrayals of some of the most sophisticated minds of their era, from monks and scholars shivering among the cloisters of 13th-century Europe to server-farm administrators sweltering behind the glass walls of Silicon Valley.

It’s about the unspoken and always collegiate rivalry between two kinds of search: the subject index (a humanistic exercise, largely un-automatable, requiring close reading, independent knowledge, imagination, and even wit) and the concordance (an eminently automatable listing of words in a text and their locations).

Hugh of St Cher is the father of the concordance: his list of every word in the bible and its location, begun in 1230, was a miracle of miniaturisation, smaller than a modern paperback. It and its successors were useful, too, for clerics who knew their bibles almost by heart.

But the subject index is a superior guide when the content is unfamiliar, and it’s Robert Grosseteste (born in Suffolk around 1175) who we should thank for turning the medieval distinctio (an associative list of concepts, handy for sermon-builders), into something like a modern back-of-book index.

Reaching the present day, we find that with the arrival of digital search, the concordance is once again ascendant (the search function, Ctrl-F, whatever you want to call it, is an automated concordance), while the subject index, and its poorly recompensed makers, are struggling to keep up in an age of reflowable screen text. (Sewing embedded active indexes through a digital text is an excellent idea which, exasperatingly, has yet to catch on.)
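The gulf between the two kinds of search is easy to demonstrate. A concordance, being purely mechanical, fits in a dozen lines of Python; this sketch is mine, not Duncan’s. A subject index, with its judgement calls and its “see also”s, fits in no number of lines:

```python
# A concordance is mechanisable: every word in a text, with every place it occurs.
# (My sketch, not Duncan's. A subject index, by contrast, needs a reader's judgement.)
import re
from collections import defaultdict

def concordance(text):
    locations = defaultdict(list)
    for position, word in enumerate(re.findall(r"[a-z']+", text.lower())):
        locations[word].append(position)   # word count stands in for page and line
    return dict(locations)

sample = "The sea! The sea! What of the sea? The solution, salvation! The sea!"
for word, places in sorted(concordance(sample).items()):
    print(f"{word:>12}: {places}")
```

Run it and every word dutifully reports its positions; what no loop can supply is the indexer’s decision that a passage is *about* something it never names.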

Running under this story is a deeper debate, between people who want to access their information quickly, and people (especially authors) who want people to read books from beginning to end.

This argument about how to read has been raging literally for millennia, and with good reason. There is clear sense in Socrates’ argument against reading itself, as recorded in Plato’s Phaedrus (370 BCE): “You have invented an elixir not of memory, but of reminding,” his mythical King Thamus complains. Plato knew a thing or two about the psychology of reading, too: people who just look up what they need “are for the most part ignorant,” says Thamus, “and hard to get along with, since they are not wise, but only appear wise.”

Anyone who spends too many hours a day on social media will recognise that portrait — if they have not already come to resemble it.

Duncan’s arbitration of this argument is a wry one. Scholarship, rather than being timeless and immutable, “is shifting and contingent,” he says, and the questions we ask of our texts “have a lot to do with the tools at our disposal.”

One courageous act

Watching A New World Order for New Scientist, 8 September 2021

“For to him that is joined to all the living there is hope,” runs the verse from Ecclesiastes, “for a living dog is better than a dead lion.”

Stefan Ebel plays Thomasz, the film’s “living dog”, a deserter who, more frightened than callous, has learned to look out solely for himself.

In the near future, military robots have turned against their makers. The war seems almost over. Perhaps Thomasz has wriggled and dodged his way to the least settled part of the planet (Daniel Raboldt’s debut feature is handsomely shot in Arctic Finland by co-writer Thorsten Franzen). Equally likely, this is what the whole planet looks like now: trees sweeping in to fill the spaces left by an exterminated humanity.

You might expect the script to make this point clear, but there is no script; rather, there is no dialogue. The machines (wasp-like drones, elephantine tripods, and one magnificent airborne battleship that would not look out of place in a Marvel movie) target people by listening out for their voices; consequently, not a word can be exchanged between Thomasz and his captor Lilja, played by Siri Nase.

Lilja takes Thomasz prisoner because she needs his brute strength. A day’s walk away from the questionable safety of her log cabin home, there is a burned-out military convoy. Amidst the wreckage and bodies, there is a heavy case — and in the case, there is a tactical nuke. Lilja needs Thomasz’s help in dragging it to where she can detonate it, perhaps bringing down the machines. While Thomasz acts out of fear, Lilja is acting out of despair. She has nothing more to live for. While Thomasz wants to live at any cost, Lilja just wants to die. Both are reduced to using each other. Both will have to learn to trust again.

In 2018, John Krasinski’s A Quiet Place arrived in cinemas — a film in which aliens chase down every sound and slaughter its maker. This cannot have been a happy day for the devoted and mostly unpaid German enthusiasts working on A New World Order. But silent movies are no novelty, and theirs has clearly ploughed its own furrow. The film’s sound design, by Sebastian Tarcan, is especially striking, balancing levels so that even a car’s gear change comes across as an imminent alien threat. (Wonderfully, there’s an acknowledging nod to the BBC’s Tripods series buried in the war machines’ emergency signal.)

Writing good silent film is something of a lost art. It’s much easier for writers to explain their story through dialogue than to propel it through action. Maybe this is why silent film, done well, is such a powerful experience. There is a scene in this movie where Thomasz realises, not only that he has to do the courageous thing, but that he is at last capable of doing it. Ebel, on his own on a scree-strewn Finnish hillside, plays the moment to perfection.

Somewhere on this independent film’s long and interrupted road to distribution (it began life on Kickstarter in 2016) someone decided “A Living Dog” was too obscure a film title for these godless times — a pity, I think, and not just because “A New World Order”, the title picked for UK distribution, manages to be at once pompous and meaningless.

Ebel’s pitch-perfect performance drips guilt and bad conscience. In order to stay alive, he has learned to crawl about the earth. But Lilja’s example, and his own conscience, will turn dog to lion at last, and in a genre that never tires of presenting us with hyper-capable heroes, it’s refreshing, on this occasion, to follow the forging of one courageous act.

“This stretch-induced feeling of awe activates our brain’s spiritual zones”

Reading Angus Fletcher’s Wonderworks: Literary invention and the science of stories for New Scientist, 1 September 2021

Can science explain art?

Certainly: in 1999 the British neurobiologist Semir Zeki published Inner Vision, an illuminating account of how, through trial and error and intuition, different schools of art have succeeded in mapping the neurological architectures of human vision. (Put crudely, Rembrandt tickles one corner of the brain, Piet Mondrian another.)

Eight years later, Oliver Sacks contributed to an already crowded music psychology shelf with Musicophilia, a collection of true tales in which neurological injuries and diseases are successfully treated with music.

Angus Fletcher believes the time has come for drama, fiction and literature generally to succumb to neurological explanation. Over the past decade, neuroscientists have been using pulse monitors, eye-trackers, brain scanners “and other gadgets” to look inside our heads as we consume novels, poems, films, and comic books. They must have come up with some insights by now.

Fletcher’s hypothesis is that story is a technology, which he defines as “any human-made thing that helps to solve a problem”.

This technology has evolved, over at least the last 4000 years, to help us negotiate the human condition, by which Fletcher means our awareness of our own mortality, and the creeping sense of futility it engenders. Story is “an invention for overcoming the doubt and the pain of just being us”.

Wonderworks is a scientific history of literature; each of its 25 chapters identifies a narrative “tool” which triggers a different, traceable, evidenced neurological outcome. Each tool comes with a goofy label: here you will encounter Butterfly Immersers and Stress Transformers, Humanity Connectors and Gratitude Multipliers.

Don’t sneer: these tools have been proven “to alleviate depression, reduce anxiety, sharpen intelligence, increase mental energy, kindle creativity, inspire confidence, and enrich our days with myriad other psychological benefits.”

Now, you may well object that, just as area V1 of the visual cortex did not evolve so we could appreciate the paintings of Piet Mondrian, so our capacity for horror and pity didn’t arise just so we could appreciate Shakespeare. So if story is merely “holding a mirror up to nature”, then Fletcher’s long, engrossing book wouldn’t really be saying anything.

As any writer will tell you, of course, a story isn’t merely a mirror. The problem comes when you try and make this perfectly legitimate point using neuroscience.

Too often for comfort, and as the demands of concision exceed all human bounds, the reader will encounter passages like: “This stretch-induced feeling of awe activates our brain’s spiritual zones, enriching our consciousness with the sensation of meanings beyond.”

Hitting sentences like this, I normally shut the book, with some force. I stayed my hand on this occasion because, by the time this horror came to light, two things were apparent. First, Fletcher — a neuroscientist turned story analyst — actually does know his neurobiology. Second, he really does know his literature, making Wonderworks a profound and useful guide to reading for pleasure.

Wonderworks fails as popular science because of the extreme parsimony of Fletcher’s explanations; fixing this problem would, however, have involved composing a multi-part work, and lost him his general audience.

The first person through the door is the one who invariably gets shot. Wonderworks is in many respects a pug-ugly book. But it’s also the first of its kind: an intelligent, engaged, erudite attempt to tackle, neurologically, not just some abstract and simplified “story”, but some of the world’s greatest literature, from the Iliad to The Dream of the Red Chamber, from Disney’s Up to the novels of Elena Ferrante.

It is easy to get annoyed with this book. But those who stay calm will reap a rich harvest.

A cherry is a cherry is a cherry

Life is Simple: How Occam’s Razor Sets Science Free and Shapes the Universe
by Johnjoe McFadden, reviewed for the Spectator, 28 August 2021

Astonishing, where an idea can lead you. You start with something that, 800 years hence, will sound like it’s being taught at kindergarten: Fathers are fathers, not because they are filled with some “essence of fatherhood”, but because they have children.

Fast forward a few years, and the Pope is trying to have you killed.

Not only have you run roughshod over his beloved eucharist (justified, till then, by some very dodgy Aristotelian logic-chopping); you’re also saying there’s no “essence of kinghood”, neither. If kings are only kings because they have subjects, then, said William of Occam, “power should not be entrusted to anyone without the consent of all”. Heady stuff for 1334.

How this progression of thought birthed the very idea of modern science is the subject of what may be the most sheerly enjoyable history of science of recent years.

William was born around 1288 in the little town of Ockham in Surrey. He was probably an orphan; at any rate he was given to the Franciscan order around the age of eleven. He shone at Greyfriars in London, and around 1310 was dispatched to Oxford’s newfangled university.

All manner of intellectual, theological and political shenanigans followed, mostly to do with William’s efforts to demolish almost the entire edifice of medieval philosophy.

It needed demolishing, and that’s because it still held to Aristotle’s ideas about what an object is. Aristotle wondered how single objects and multiples can co-exist. His solution: categorise everything. A cherry is a cherry is a cherry, and all cherries have cherryness in common. “Cherry” is a “universal”; the properties that might distinguish one cherry from another are “accidental”.

The trouble with Aristotle’s universals, though, is that they assume a one-to-one correspondence between word and thing, and posit a universe made up of a terrifying number of unique things — at least one for each noun or verb in the language.

And the problem with that is that it’s an engine for making mistakes.

Medieval philosophy relied largely on syllogistic reasoning, juggling things into logical-looking relations. “Socrates is a man, all men are mortal, so Socrates is mortal.”

So he is, but — and this is crucial — this conclusion is arrived at more by luck than good judgement. The statement isn’t “true” in any sense; it’s merely internally consistent.

Imagine we make a mistake. Imagine we spring from a society where beards are pretty much de rigueur (classical Athens, say, or Farringdon Road). Imagine we said, “Socrates is a man, all men have beards, therefore Socrates has a beard”?

Though one of its premises is wrong, the statement barrels ahead regardless; it’s internally consistent, and so, if you’re not paying attention, it creates the appearance of truth.

But there’s worse: the argument that gives Socrates a beard might actually be true. Some men do have beards. Socrates may be one of them. And if he is, that beard seems — again, if you’re not paying attention — to confirm a false assertion.

William of Occam understood that our relationship with the world is a lot looser, cloudier, and more indeterminate than syllogistic logic allows. That’s why, when a tavern owner hangs a barrel hoop outside his house, passing travellers know they can stop there for a drink. The moment words are decoupled from things, then they act as signs, negotiating flexibly with a world of blooming, buzzing confusion.

Once we take this idea to heart, then very quickly — and as a matter of taste more than anything — we discover how much more powerful straightforward explanations are than complicated ones. Occam came up with a number of versions of what even then was not an entirely new idea: “It is futile to do with more what can be done with less,” he once remarked. Subsequent formulations do little but gild this lily.

His idea proved so powerful that, three centuries later, the French theologian Libert Froidmont coined the term “Occam’s razor” to describe how we arrive at good explanations by shaving away excess complexity. As McFadden shows, that razor’s still doing useful work.

Life is Simple is primarily a history of science, tracing William’s dangerous idea through astronomy, cosmology, physics and biology, from Copernicus to Brahe, Kepler to Newton, Darwin to Mendel, Einstein to Noether to Weyl. But McFadden never loses sight of William’s staggering, in some ways deplorable, influence over the human psyche as a whole. For if words are independent of things, how do we know what’s true?

Thanks to William of Occam, we don’t. The universe, after Occam, is unknowable. Yes, we can come up with explanations of things, and test them against observation and experience; but from here on in, our only test of truth will be utility. Ptolemy’s 2nd-century Almagest, a truly florid description of the motions of the stars and planetary paths, is not and never will be *wrong*; the worst we can say is that it’s overcomplicated.

In the Coen brothers’ movie The Big Lebowski, an exasperated Dude turns on his friend: “You’re not *wrong*, Walter,” he cries, “you’re just an asshole.” William of Occam is our universal Walter, and the first prophet of our disenchantment. He’s the friend we wish we’d never listened to when he told us Father Christmas was not real.

The Art of Conjecturing

Reading Katy Börner’s Atlas of Forecasts: Modeling and mapping desirable futures for New Scientist, 18 August 2021

My leafy, fairly affluent corner of south London has a traffic congestion problem, and to solve it, there’s a plan to close certain roads. You can imagine the furore: the trunk of every kerbside tree sports a protest sign. How can shutting off roads improve traffic flows?

The German mathematician Dietrich Braess answered this one back in 1968, with a graph that kept track of travel times and densities for each road link, and distinguished between flows that are optimal for all cars, and flows optimised for each individual car.

On a Paradox of Traffic Planning is a fine example of how a mathematical model predicts and resolves a real-world problem.
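The numbers behind Braess’s paradox fit on the back of an envelope, and a few lines of Python will do the arithmetic. This is the standard textbook network, my illustration rather than anything reproduced from Börner’s atlas: 4,000 drivers cross town by one of two routes, each pairing a congestible road (travel time in minutes = cars/100) with a fixed 45-minute road:

```python
# Braess's paradox, textbook numbers (my illustration; travel times in minutes).
# Each of two routes A->B pairs a congestible road (cars/100) with a fixed 45-minute road.

drivers = 4000

# No shortcut: the routes are symmetric, so drivers split evenly. This selfish
# equilibrium is also the flow that's optimal for all cars.
split = drivers / 2
time_without = split / 100 + 45                 # 20 + 45 = 65 minutes

# Free shortcut linking the two congestible roads: the selfish equilibrium now
# sends everyone down both congestible roads (40 + 0 + 40 minutes). No driver can
# do better alone, since either fixed road would cost 45 > 40. The old 65-minute
# split is still available, but it is no longer an equilibrium.
time_with = drivers / 100 + 0 + drivers / 100   # 80 minutes

print(f"without the new road: {time_without:.0f} min for everyone")
print(f"with it:              {time_with:.0f} min for everyone")
```

Close the shortcut and the 65-minute equilibrium returns, which is precisely the case for shutting roads in my corner of south London.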

This and over 1,300 other models, maps and forecasts feature in the references to Katy Börner’s latest atlas, which is the third to be derived from Indiana University’s traveling exhibit Places & Spaces: Mapping Science.

Atlas of Science: Visualizing What We Know (2010) revealed the power of maps in science; Atlas of Knowledge: Anyone Can Map (2015) focused on visualisation. In her third and final foray, Börner is out to show how models, maps and forecasts inform decision-making in education, science, technology, and policymaking. It’s a well-structured, heavyweight argument, supported by descriptions of over 300 model applications.

Some entries, like Bernard H. Porter’s Map of Physics of 1939, earn their place purely for their beauty and the insights they offer. Mostly, though, Börner chooses models that were applied in practice and made a positive difference.

Her historical range is impressive. We begin with equations (did you know Newton’s law of universal gravitation has been applied to human migration patterns and international trade?) and move through the centuries, tipping a wink to Jacob Bernoulli’s “The Art of Conjecturing” of 1713 (which introduced probability theory) and James Clerk Maxwell’s 1868 paper “On Governors” (an early gesture at cybernetics) until we arrive at our current era of massive computation and ever-more complex model building.

It’s here that interesting questions start to surface. To forecast the behaviour of complex systems, especially those which contain a human component, many current researchers reach for something called “agent-based modeling” (ABM) in which discrete autonomous agents interact with each other and with their common (digitally modelled) environment.

Heady stuff, no doubt. But, says Börner, “ABMs in general have very few analytical tools by which they can be studied, and often no backward sensitivity analysis can be performed because of the large number of parameters and dynamical rules involved.”

In other words, an ABM model offers the researcher an exquisitely detailed forecast, but no clear way of knowing why the model has drawn the conclusions it has — a risky state of affairs, given that all its data is ultimately provided by eccentric, foible-ridden human beings.
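To see what “agent-based” means in practice, here is about the smallest ABM I can write. It is a generic toy of my own, not one of the atlas’s three hundred applications: each agent carries a single behavioural rule, and the aggregate outcome emerges only at run time:

```python
# A toy agent-based model (mine, not one of the atlas's): rumour spreading in a
# mixing crowd. One rule per agent; the aggregate outcome only emerges at run time.
import random

random.seed(1)

class Agent:
    def __init__(self):
        self.informed = False
        self.sceptic = random.random() < 0.3   # 30% hear the rumour but never repeat it

agents = [Agent() for _ in range(500)]
agents[0].informed = True                      # patient zero...
agents[0].sceptic = False                      # ...who does repeat what he hears

for _ in range(20_000):                        # chance meetings, two agents at a time
    a, b = random.sample(agents, 2)
    if a.informed and not a.sceptic:
        b.informed = True

print(sum(a.informed for a in agents), "of", len(agents), "agents heard the rumour")
```

Even here, the only way to learn how the outcome depends on that 0.3 is to change it and re-run the whole thing; scale the habit up to thousands of parameters and you have exactly the opacity Börner describes.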

Börner’s sumptuous, detailed book tackles issues of error and bias head-on, but she left me tugging at a still bigger problem, represented by those irate protest signs smothering my neighbourhood.

If, over 50 years since the maths was published, reasonably wealthy, mostly well-educated people in comfortable surroundings have remained ignorant of how traffic flows work, what are the chances that the rest of us, industrious and preoccupied as we are, will ever really understand, or trust, all the many other models which increasingly dictate our civic life?

Börner argues that modelling data can counteract misinformation, tribalism, authoritarianism, demonization, and magical thinking.

I can’t for the life of me see how. Albert Einstein said, “Everything should be made as simple as possible, but no simpler.” What happens when a model reaches such complexity, only an expert can really understand it, or when even the expert can’t be entirely sure why the forecast is saying what it’s saying?

We have enough difficulty understanding climate forecasts, let alone explaining them. To apply these technologies to the civic realm raises a host of problems that are nothing to do with the technology, and everything to do with whether anyone will be listening.