More than the naughty world deserves

Reading Wikipedia @ 20, edited by Joseph Reagle and Jackie Koerner, for the Telegraph, 10 January 2021

In 2005 the US talk show host and comedian Stephen Colbert coined “truthiness”, one of history’s more chilling buzzwords. “We’re not talking about truth,” he declared, “we’re talking about something that seems like truth — the truth we want to exist.”

Colbert thought the poster-boy for our disinformation culture would be Wikipedia, the open-source internet encyclopedia started, more or less as an afterthought, by Jimmy Wales and Larry Sanger in 2001. If George Washington’s ownership of slaves troubles you, Colbert suggested “bringing democracy to knowledge” by editing his Wikipedia page.

Three years later The Atlantic was calling Wikipedia “the last bastion of shared reality in Trump’s America”. Yes, its coverage is lumpy, idiosyncratic, often persnickety, and not terribly well written. But it’s accurate to a fault, extensive beyond all imagining, and energetically policed. (Wikipedia nixes toxic user content within minutes. Why can’t YouTube? Why can’t Twitter?)

Editors Joseph Reagle and Jackie Koerner — both energetic Wikipedians — know better than to go hunting for Wikipedia’s secret sauce. (A community adage has it that Wikipedia always works better in practice than in theory.) They neither praise nor blame Wikipedia for what it has become, but — and this comes across very strongly indeed — they love it with a passion. The essays they have selected for this volume (you can find the full roster of contributions online) reflect, always readably and almost always sympathetically, on the way this utopian project has bedded down in the flaws of the real world.

Wikipedia says it exists “to benefit readers by acting as an encyclopedia, a comprehensive written compendium that contains information on all branches of knowledge”. Improvements are possible. Wikipedia is shaped by the way its unvetted contributors write about what they know and delete what they do not. That women represent only about 12 per cent of the editing community is, then, not ideal.

Harder to correct is the wrinkle occasioned by language. Wikipedias written in different languages are independent of each other. There might not be anything actually wrong, but there’s certainly something screwy about the way India, Australia, the US and the UK and all the rest of the Anglophone world share a single English-language Wikipedia, while only the Finns get to enjoy the Finnish one. And it says something (obvious) about the unevenness of global development that Hindi speakers (the third largest language group in the world) read a Wikipedia that’s 53rd in a ranking of size.

To encyclopedify the world is an impossible goal. Surely the philosophes of eighteenth century France knew that much when they embarked on their Encyclopédie. Paul Otlet’s Universal Repertory and H. G. Wells’s World Brain were similarly Quixotic.

Attempting to define Wikipedia through its intellectual lineage may, however, be to miss the point. In his stand-out essay “Wikipedia As A Role-Playing Game”, Dariusz Jemielniak (author of the first ethnography of Wikipedia, Common Knowledge?, in 2014) stresses the playfulness of the whole enterprise. Why else, he asks, would academics avoid it? “When you are a soldier, you do not necessarily spend your free time playing paintball with friends.”

Since its inception, pundits have assumed that it’s Wikipedia’s reliance on the great mass of unwashed humanity — sorry, I mean “user-generated content” — that will destroy it. Contributor Heather Ford, a South African open-source activist, reckons it’s not its creators that will eventually ruin Wikipedia but its readers — specifically, data aggregation giants like Google, Amazon and Apple, who fillet Wikipedia content and disseminate it through search engines and personal assistants like Alexa and Siri. They have turned Wikipedia into the internet’s go-to source of ground truth, inflating its importance to an unsustainable level.

Wikipedia’s entries are now like swords of Damocles, suspended on threads over the heads of every major commercial and political actor in the world. How long before the powerful find a way to silence this capering non-profit fool, telling motley truths to power? As Jemielniak puts it, “A serious game that results in creating the most popular reliable knowledge source in the world and disrupts existing knowledge hierarchies and authority, all in the time of massive anti-academic attacks — what is there not to hate?”

One’s dislike of Wikipedia needn’t spring from principles or ideas or even self-interest, though. Plain snobbery will do. Wikipedia has pricked the pretensions of the humanities like no other cultural project. Editor Joseph Reagle discovered as much ten years ago in email conversation with founder Jimmy Wales (a conversation that appears in Good Faith Collaboration, Reagle’s excellent, if by now slightly dated, study of Wikipedia). “One of the things that I noticed,” Wales wrote, “is that in the humanities, a lot of people were collaborating in discussions, while in programming… people weren’t just talking about programming, they were working together to build things of value.”

This, I think, is what sticks in the craw of so many educated naysayers: that while academics were busy paying each other for the eccentricity of their beautiful opinions, nerds were out in the world winning the culture wars; that nerds stand ready on the virtual parapet to defend us from truthy, Trumpist oblivion; that nerds actually kept the promise held out by the internet, and turned it into the fifth biggest site on the Web.

Wikipedia’s guidelines to its editors include “Assume Good Faith” and “Please Do Not Bite the Newcomers.” This collection suggests to me that this is more than the naughty world deserves.

An engine for understanding

Reading Fundamentals by Frank Wilczek for the Times, 2 January 2021

It’s not given to many of us to work at the bleeding edge of theoretical physics, discovering for ourselves the way the world really works.

The nearest most of us will ever get is the pop-science shelf, and this has been dominated for quite a while now by the lyrical outpourings of Italian theoretical physicist Carlo Rovelli. Rovelli’s upcoming one, Helgoland, promises to have his reader tearing across a universe made, not of particles, but of the relations between them.

It’s all too late, however: Frank Wilczek’s Fundamentals has gazumped Rovelli handsomely, with a vision that replaces our classical idea of physical creation — “atoms and the void” — with one consisting entirely of spacetime, self-propagating fields and properties.

Born in 1951 and awarded the Nobel Prize in Physics in 2004 for figuring out why atoms don’t just fly apart, Wilczek is out to explain why “the history of Sweden is more complicated than the history of the universe”. The ingredients of the universe are surprisingly simple, but their fates, playing out through time in accordance with just a handful of rules, generate a world of unimaginable complexity, contingency and abundance. Measures of spin, charge and mass allow us to describe the whole of physical reality, but they won’t help us at all in depicting, say, the history of the royal house of Bernadotte.

Wilczek’s “ten keys to reality”, mentioned in his subtitle, aren’t to do with the 19 or so physical constants that exercised Martin Rees, the UK’s Astronomer Royal, in his 1990s pop-science heyday. The focus these days has shifted more to the spirit of things. When Wilczek describes the behaviour of electrons around an atom, for example, gone are the usual Bohr-ish mechanics, in which electrons leap from one nuclear orbit to another. Instead we get a vibrating cymbal, the music of the spheres, a poetic understanding of fields, and not a fragment of matter in sight.

So will you plump for the Wilczek, or will you wait for the Rovelli? A false choice, of course; this is not a race. Popular cosmology is more like the jazz scene: the facts (figures, constants, models) are the standards everyone riffs off. After one or two exposures you find yourself returning for the individual performances: their poetry, their unique expression.

Wilczek’s ten keys are more like ten book ideas, exploring the spatial and temporal abundance of the universe; how it all began; the stubborn linearity of time; how it all will end. What should we make of his decision to have us swallow the whole of creation in one go?

In one respect this book was inevitable. It’s what people of Wilczek’s peculiar genius and standing do. There’s even a sly name for the effort: the philosopause, the implication being that Wilczek has outlived his most productive years and is now pursuing philosophical speculations.

Wilczek is not short of insights. His idea of what the scientific method consists of is refreshingly robust: a style of thinking that “combines the humble discipline of respecting the facts and learning from Nature with the systematic chutzpah of using what you think you’ve learned aggressively”. If you apply what you think you’ve discovered everywhere you can, even in situations that have nothing to do with your starting point, then, if it works, “you’ve discovered something useful; if it doesn’t, then you’ve learned something important.”

However, works of the philosopause are best judged on character. Richard Dawkins seems to have discovered, along with Johnny Rotten, that anger is an energy. Martin Rees has been possessed by the shade of that dutiful bureaucrat C P Snow. And in this case? Wilczek, so modest, so straight-dealing, so earnest in his desire to conciliate between science and the rest of culture, turns out to be a true visionary, writing — as his book gathers pace — a human testament to the moment when the discipline of physics, as we used to understand it, came to a stop.

Wilczek’s is the first generation whose intelligence — even at the far end of the bell-curve inhabited by genius — is insufficient to conceptualise its own scientific findings. Machines are even now taking over the work of hypothesis-making and interpretation. “The abilities of our machines to carry lengthy yet accurate calculations, to store massive amounts of information, and to learn by doing at an extremely fast pace,” Wilczek explains, “are already opening up qualitatively new paths toward understanding. They will move the frontier of knowledge in directions, and arrive at places, that unaided human brains can’t go.”

Or put it this way: physicists can pursue a Theory of Everything all they like. They’ll never find it, because if they did find it, they wouldn’t understand it.

Where does that leave physics? Where does that leave Wilczek? His response is gloriously matter-of-fact:

“… really, this should not come as fresh news. Humans themselves know many things that are not available to human consciousness, such as how to process visual information at incredible speeds, or how to make their bodies stay upright, walk and run.”

Right now physicists have come to the conclusion that the vast majority of mass in the universe reacts so weakly with the bits of creation we can see that we may never know its nature. Though Wilczek makes a brave stab at the problem of so-called “dark matter”, he is equally prepared to accept that a true explanation may prove incomprehensible.

Human intelligence turns out to be just one kind of engine for understanding. Wilczek would have us nurture it and savour it, and not just for what it can do, but because it is uniquely ours.

The seeds of indisposition

Reading Ageless by Andrew Steele for the Telegraph, 20 December 2020

The first successful blood transfusions were performed in 1665, by the English physician Richard Lower, on dogs. The idea, for some while, was not that transfusions would save lives, but that they might extend them.

Turns out they did. The Philosophical Transactions of the Royal Society mentions an experiment in which “an old mongrel curr, all over-run with the mainge” was transfused with about fifteen ounces of blood from a young spaniel and was “perfectly cured.”

Aleksandr Bogdanov, who once vied with Vladimir Lenin for control of the Bolsheviks (before retiring to write science fiction novels) brought blood transfusion to Russia, and hoped to rejuvenate various exhausted colleagues (including Stalin) by the method. On 24 March 1928 he mutually transfused blood with a 21-year-old student, suffered a massive transfusion reaction, and died, two weeks later, at the age of fifty-four.

Bogdanov’s theory was stronger than his practice. His essay on ageing speaks a lot of sense. “Partial methods against it are only palliative,” he wrote, “they merely address individual symptoms, but do not help fight the underlying illness itself.” For Bogdanov, ageing is an illness — unavoidable, universal, but no more “normal” or “natural” than any other illness. By that logic, ageing should be no less invulnerable to human ingenuity and science. It should, in theory, be curable.

Andrew Steele agrees. Steele is an Oxford physicist who switched to computational biology, drawn by the field of biogerontology — or the search for a cure for ageing. “Treating ageing itself rather than individual diseases would be transformative,” he writes, and the data he brings to this argument is quite shocking. It turns out that curing cancer would add less than three years to a person’s typical life expectancy, and curing heart disease, barely two, as there are plenty of other diseases waiting in the wings.

Is ageing, then, simply a statistical inevitability — a case of there always being something out there that’s going to get us?

Well, no. In 1825 Benjamin Gompertz, a British mathematician, explained that there are two distinct drivers of human mortality. There are extrinsic events, such as injuries or diseases. But there’s also an internal deterioration — what he called “the seeds of indisposition”.

It’s Steele’s job here to explain why we should treat those “seeds” as a disease, rather than a divinely determined limit. In the course of that explanation Steele gives us, in effect, a tour of the whole of human biology. It’s an exhilarating journey, but by no means always a pretty one: a tale of senescent cells, misfolded proteins, intracellular waste and reactive metals. Readers of advanced years, wondering why their skin is turning yellow, will learn much more here than they bargained for.

Ageing isn’t evolutionarily useful, but because it comes after our breeding period, evolution just hasn’t got the power to do anything about it. Mutations whose negative effects occur late in our lives accumulate in the gene pool. Worse, if they had a positive effect on our lives early on, then they will be actively selected for. Ageing, in other words, is something we inherit.

It’s all very well conceptualising old age as one disease. But if your disease amounts to “what happens to a human body when 525 million years of evolution stop working”, then you’re reduced to curing everything that can possibly go wrong, with every system, at once. Ageing, it turns out, is just thousands upon thousands of “individual symptoms”, arriving all at once.

Steele believes the more we know about human biology, the more likely it is we’ll find systemic ways to treat these multiple symptoms. The challenge is huge, but the advances, as Steele describes them, are real and rapid. If, for example, we can persuade senescent cells to die, then we can shed the toxic biochemical garbage they accumulate, and enjoy once more all the benefits of (among other things) young blood. This is no fond hope: human trials of senolytics started in 2018.

Steele is a superb guide to the wilder fringes of real medicine. He pretends to nothing else, and nothing more. So whether you find Ageless an incredibly focused account, or just an incredibly narrow one, will come down, in the end, to personal taste.

Steele shows us what happens to us biologically as we get older — which of course leaves a lot of blank canvas for the thoughtful reader to fill. Steele’s forebears in this (frankly, not too edifying) genre have all too often claimed that there are no other issues to tackle. In the 1930s the surgeon Alexis Carrel declared that “Scientific civilization has destroyed the world of the soul… Only the strength of youth gives the power to satisfy physiological appetites and to conquer the outer world”.

And he wasn’t the only one. Books like Successful Aging (Rowe & Kahn, 1998) and How and Why We Age (Hayflick, 1996) aspire to a sort of overweening authority, not by answering hard questions about mortality, long life and ageing, but merely by denying a gerontological role for anyone outside their narrow specialism: philosophers, historians, theologians, ethicists, poets — all are shown the door.

Steele is much more sensible. He simply sticks to his subject. To the extent that he expresses a view, I am confident that he understands that ageing is an experience to be lived meaningfully and fully, as well as a fascinating medical problem to be solved.

Steele’s vision is very tightly controlled: he wants us to achieve “negligible senescence”, in which, as we grow older, we suffer no obvious impairments. What he’s after is a risk of death that stays constant no matter how old we get. This sounds fanciful, but it does happen in nature. Giant tortoises succumb to statistical inevitability, not decrepitude.

I have a fairly entrenched problem with books that treat ageing as a merely medical phenomenon. But I heartily recommend this one. It’s modest in scope, and generous in detail. It’s an honest and optimistic contribution to a field that tips very easily indeed into Tony Stark-style boosterism.

Life expectancy in the developed world has doubled from 40 in the 1800s to over 80 today. But it is in our nature always to crave more. One colourful outfit called Ambrosia is offering anyone over 35 the opportunity to receive a litre of youthful blood plasma for $8,000. Steele has some fun with this: “At the time of writing,” he tells us, “a promotional offer also allows you to get two for $12,000 — buy one, get one half-price.”

Soaked in ink and paint

Reading Dutch Light: Christiaan Huygens and the making of science in Europe
by Hugh Aldersey-Williams for the Spectator, 19 December 2020

This book, soaked, like the Dutch Republic itself, “in ink and paint”, is enchanting to the point of escapism. The author calls it “an interior journey, into a world of luxury and leisure”. It is more than that. What he says of Huygens’s milieu is true also of his book: “Like a ‘Dutch interior’ painting, it turns out to contain everything.”

Hugh Aldersey-Williams says that Huygens was the first modern scientist. This is a delicate argument to make — the word “scientist” didn’t enter the English language before 1834. And he’s right to be sparing with such rhetoric, since a little of it goes a very long way. What inadvertent baggage comes attached, for instance, to the (not unreasonable) claim that the city of Middelburg, supported by the market for spectacles, became “a hotbed of optical innovation” at the end of the 16th century? As I read about the collaboration between Christiaan’s father Constantijn (“with his trim dark beard and sharp features”) and his lens-grinder Cornelis Drebbel (“strapping, ill-read… careless of social hierarchies”) I kept getting flashbacks to the Steve Jobs and Steve Wozniak double-act in Aaron Sorkin’s film.

This is the problem of popular history, doubled by the demands of explaining the science. Secretly, readers want the past to be either deeply exotic (so they don’t have to worry about it) or fundamentally familiar (so they, um, don’t have to worry about it).

Hugh Aldersey-Williams steeps us in neither fantasy for too long, and Dutch Light is, as a consequence, an oddly disturbing read: we see our present understanding of the world, and many of our current intellectual habits, emerging through the accidents and contingencies of history, through networks and relationships, friendships and fallings-out. Huygens’s world *is* distinctly modern — disturbingly so: the engine itself, the pipework and pistons, without any of the fancy fairings and decals of liberalism.

Trade begets technology begets science. The truth is out there but it costs money. Genius can only swim so far up the stream of social prejudice. Who your parents are matters.

Under Dutch light — clean, caustic, calvinistic — we see, not Enlightenment Europe emerging into the comforts of the modern, but a mirror in which we moderns are seen squatting a culture, full of flaws, that we’ve never managed to better.

One of the best things about Aldersey-Williams’s absorbing book (and how many 500-page biographies do you know that feel too short when you finish them?) is the interest he shows in everyone else. Christiaan arrives in the right place, at the right time, among the right people, to achieve wonders. His father, born in 1596, was a diplomat, architect, poet (he translated John Donne) and artist (he discovered Rembrandt). His longevity exasperated him: “Cease murderous years, and think no more of me” he wrote, on his 82nd birthday. He lived eight years more. But the space and energy Aldersey-Williams devotes to Constantijn and his four other children — “a network that stretched across Europe” — is anything but exasperating. It immeasurably enriches our idea of what Christiaan’s work meant, and what his achievements signified.

Huygens worked at the meeting point of maths and physics, at a time when some key physical aspects of reality still resisted mathematical description. Curves provide a couple of striking examples. The cycloid is the path made by a point on the circumference of a turning wheel. The catenary is the curve made by a chain or rope hanging under gravity. Huygens was the first to explain these curves mathematically, doing more than most to embed mathematics in the physical sciences. He tackled problems in geometry and probability, and had some fun in the process (“A man of 56 years marries a woman of 16 years, how long can they live together without one or the other dying?”) Using telescopes he designed and made himself, he discovered Saturn’s ring system and its largest moon, Titan. He was the first to describe the concept of centrifugal force. He invented the pendulum clock.

Most extraordinary of all, Huygens — though a committed follower of Descartes (who was once a family friend) — came up with a model of light as a wave, wholly consistent with everything then known about the nature of light apart from colour, and streets ahead of the “corpuscular” theory promulgated by Newton, which had light consisting of a stream of tiny particles.

Huygens’s radical conception of light seems even stranger, when you consider that, as much as his conscience would let him, Huygens stayed faithful to Descartes’ vision of physics as a science of bodies in collision. Newton’s work on gravity, relying as it did on an unseen force, felt like a retreat to Huygens — a step towards occultism.

Because we turn our great thinkers into fetishes, we allow only one per generation. Newton has shut out Huygens, as Galileo shut out Kepler. Huygens became an also-ran in Anglo-Saxon eyes; ridiculous busts of Newton, meanwhile, were knocked out to adorn the salons of Britain’s country estates, “available in marble, terracotta and plaster versions to suit all pockets.”

Aldersey-Williams insists that this competition between the elder Huygens and the enfant terrible Newton was never so cheap. Set aside the era’s notorious priority disputes (the calculus quarrel, after all, was Newton’s fight with Leibniz, Huygens’s one-time pupil), and we find the two men in lively and, yes, friendly correspondence. Cooperation and collaboration were on the rise: “Gone,” Aldersey-Williams writes, “is the quickness to feel insulted and take umbrage that characterised so many exchanges — domestic as well as international — in the early days of the French and English academies of science.”

When Henry Oldenburg, the prime mover of the Royal Society, died suddenly in 1677, a link was broken between scientists everywhere, and particularly between Britain and the continent. The 20th century did not forge a culture of international scientific cooperation. It repaired the one Oldenburg and Huygens had built over decades of eager correspondence and clever diplomacy.

Run for your life

Watching Gints Zilbalodis’s Away for New Scientist, 18 November 2020

A barren landscape at sun-up. From the cords of his deflated parachute, dangling from the twisted branch of a dead tree, a boy slowly wakes to his surroundings, just as a figure appears out of the dawn’s dreamy desert glare. Humanoid but not human, faceless yet somehow inexpressibly sad, the giant figure shambles towards the boy and bends and, though mouthless, tries somehow to swallow him.

The boy unclips himself from his harness, falls to the sandy ground, and begins to run. The strange, slow, gripping pursuit that follows will, in the space of an hour and ten minutes, tell the story of how the boy comes to understand the value of life and friendship.

That the monster is Death is clear from the start: not a ravenous ogre, but unstoppable and steady. It swallows, without fuss or pain, the lives of any creature it touches. Perhaps the figure pursuing the boy is not a physical threat at all, but more the dawning of a terrible idea — that none of us lives forever. (In one extraordinary dream sequence, we see the boy’s fellow air passengers plummet from the sky, each one rendered as a little melancholy incarnation of the same creature.)

Away is the sole creation of 26-year-old Latvian film-maker Gints Zilbalodis, and it’s his first feature-length animation. Zilbalodis is Away’s director, writer, animator and editor; he even composed its deceptively simple synth score — a constant back-and-forth between dread and wonder.

There’s no shading in Zilbalodis’s CGI-powered animation, no outlining, and next to no texture, and the physics is rudimentary. When bodies enter water, there’s no splash: instead, deep ripples shimmer across the screen. A geyser erupts, and water rises and falls against itself in a churn of massy, architectonic white blocks. What drives this strange, retro, gamelike animation style?

Away feels nostalgic at first, perhaps harking back to the early days of videogames, when processing speeds were tiny, and a limited palette and simplified physics helped players explore game worlds in real time. Indeed the whole film is structured like a game, with distinct chapters and a plot arranged around simple physical and logical puzzles. The boy finds a haversack, a map, a water canteen, a key and a motorbike. He finds a companion — a young bird. His companion learns to fly, and departs, and returns. The boy runs out of water, and finds it. He meets turtles, birds, and cats. He wins a major victory over his terrifying pursuer, only to discover that the victory is temporary. By the end of the film, it’s the realistic movies that seem odd, the big budget animations, the meticulously composited Nolanesque behemoths. Even dialogue feels clumsy and lumpen, after 75 minutes of Away’s impeccable, wordless storytelling.

Away reminds us that when everything in the frame and on the soundtrack serves the story, then the elements themselves don’t have to be remarkable. They can be simple and straightforward: fields of a single colour, a single apposite sound-effect, the tilt of a simply drawn head.

As CGI technology penetrates the prosumer market, and super-tool packages like Maya become affordable, or at any rate accessible through institutions, then more artists and filmmakers are likely to take up the challenge laid down by Away, creating, all by themselves, their own feature-length productions.

Experiments of this sort — ones that change the logistics and economies of film production — are often ugly. The first films were virtually unfollowable. The first sound films were dull and stagey. CGI effects were so hammy at first, they kicked viewers out of the movie-going experience entirely. It took years for Pixar’s animations to acquire their trademark charm.

Away is different. In an industry that makes films whose animation credits feature casts of thousands, Zilbalodis’s exquisite movie sets a very high bar indeed for a new kind of artisanal filmmaking.

What else you got?

Reading Benjamin Labatut’s When We Cease to Understand the World for the Spectator, 14 November 2020

One day someone is going to have to write the definitive study of Wikipedia’s influence on letters. What, after all, are we supposed to make of all these wikinovels? I mean novels that leap from subject to subject, anecdote to anecdote, so that the reader feels as though they are toppling like Alice down a particularly erudite Wikipedia rabbit-hole.

The trouble with writing such a book, in an age of ready internet access, and particularly Wikipedia, is that, however effortless your erudition, no one is any longer going to be particularly impressed by it.

We can all be our own Don DeLillo now; our own W G Sebald. The model for this kind of literary escapade might not even be literary at all; does anyone here remember James Burke’s Connections, a 1978 BBC TV series which took an interdisciplinary approach to the history of science and invention, and demonstrated how various discoveries, scientific achievements, and historical world events were built from one another successively in an interconnected way?

And did anyone notice how I ripped the last 35 words from the show’s Wikipedia entry?

All right, I’m sneering, and I should make clear from the off that When We Cease… is a chilling, gripping, intelligent, deeply humane book. It’s about the limits of human knowledge, and the not-so-very-pleasant premises on which physical reality seems to be built. The author, a Chilean born in Rotterdam in 1980, writes in Spanish. Adrian Nathan West — himself a cracking essayist — fashioned this spiky, pitch-perfect English translation. The book consists, in the main, of four broadly biographical essays. The chemist Fritz Haber finds an industrial means of fixing nitrogen, enabling the revolution in food supply that sustains our world, while also pioneering modern chemical warfare. Karl Schwarzschild imagines the terrible uber-darkness at the heart of a black hole, dies in a toxic first world war and ushers in a thermonuclear second. Alexander Grothendieck is the first of a line of post-war mathematician-paranoiacs convinced they’ve uncovered a universal principle too terrible to discuss in public (and after Oppenheimer, really, who can blame them?). In the longest essay-cum-story, Erwin Schrödinger and Werner Heisenberg slug it out for dominance in a field — quantum physics — increasingly consumed by uncertainty and (as Labatut would have it) dread.

The problem here — if problem it is — is that no connection, in this book of artfully arranged connections, is more than a keypress away from the internet-savvy reader. Wikipedia, twenty years old next year, really has changed our approach to knowledge. There’s nothing aristocratic about erudition now. It is neither a sign of privilege, nor (and this is more disconcerting) is it necessarily a sign of industry. Erudition has become a register, like irony, like sarcasm, like melancholy. It’s become, not the fruit of reading, but a way of perceiving the world.

Literary attempts to harness this great power are sometimes laughable. But this has always been the case for literary innovation. Look at the gothic novel. Fifty-odd years before the peerless masterpiece that is Mary Shelley’s Frankenstein we got Horace Walpole’s The Castle of Otranto, which is jolly silly.

Now, a couple of hundred years after Frankenstein was published, “When We Cease to Understand the World” dutifully repeats the rumours (almost certainly put about by the local tourist industry) that the alchemist Johann Conrad Dippel, born outside Darmstadt in the original Burg Frankenstein in 1673, wielded an uncanny literary influence over our Mary. This is one of several dozen anecdotes which Labatut marshals to drive home the message that There Are Things In This World That We Are Not Supposed to Know. It’s artfully done, and chilling in its conviction. Modish, too, in the way it interlaces fact and fiction.

It’s also laughable, and for a couple of reasons. First, it seems a bit cheap of Labatut to treat all science and mathematics as one thing. If you want to build a book around the idea of humanity’s hubris, you can’t just point your finger at “boffins”.

The other problem is Labatut’s mixing of fact and fiction. He’s not out to cozen us. But here and there this reviewer was disconcerted enough to check his facts — and where else but on Wikipedia? I’m not saying Labatut used Wikipedia. (His bibliography lists a handful of third-tier sources including, I was amused to see, W G Sebald.) Nor am I saying that using Wikipedia is a bad thing.

I think, though, that we’re going to have to abandon our reflexive admiration for erudition. It’s always been desperately easy to fake. (John Fowles.) And today, thanks in large part to Wikipedia, it’s not beyond the wit of most of us to actually *acquire*.

All right, Benjamin, you’re erudite. We get it. What else you got?

A fanciful belonging

Reading The Official History of Britain: Our story in numbers as told by the Office for National Statistics by Boris Starling with David Bradbury for The Telegraph, 18 October 2020

Next year’s national census may be our last. Opinions are being sought as to whether it makes sense, any longer, for the nation to keep taking its own temperature every ten years. Discussions will begin in 2023. Our betters may conclude that the whole rigmarole is outdated, and that its findings can be gleaned more cheaply and efficiently by other methods.

How the UK’s national census was established, what it achieved, and what it will mean if it’s abandoned, is the subject of The Official History of Britain — a grand title for what is, to be honest, a rather messy book, its facts and figures slathered in weak and irrelevant humour, most of it to do with football, I suppose as an intellectual sugar lump for the proles.

Such condescension is archetypally British; and so too is the gimcrack team assembled to write this book. There is something irresistibly Dad’s Army about the image of David Bradbury, an old hand at the Office for National Statistics, comparing dad jokes with novelist Boris Starling, creator of Messiah’s DCI Red Metcalfe, who was played on the telly by Ken Stott.

The charm of the whole enterprise is undeniable. Within these pages you will discover, among other tidbits, the difference between critters and spraggers, whitsters and oliver men. Such were the occupations introduced into the Standard Classification of 1881. (Recent additions include YouTuber and dog sitter.) Nostalgia and melancholy come to the fore when the authors say a fond farewell to John and Margaret — names, deeply unfashionable now, that were pretty much compulsory for babies born between 1914 and 1964. But there’s rigour, too; I recommend the authors’ highly illuminating analysis of today’s gender pay gap.

Sometimes the authors show us up for the grumpy tabloid zombies we really are. Apparently a sizeable sample of us, quizzed in 2014, opined that 15 per cent of all our girls under sixteen were pregnant. The lack of mathematical nous here is as disheartening as the misanthropy. The actual figure was a still worryingly high 0.5 per cent, or one in 200 girls. A 10-year Teenage Pregnancy Strategy was created to tackle the problem, and the figure for 2018 — 16.8 conceptions per 1000 women aged between 15 and 17 — is the lowest since records began.

This is why census records are important: they inform enlightened and effective government action. The statistician John Rickman said as much in a paper written in 1796, but his campaign for a national census only really caught on two years later, when the clergyman Thomas Malthus scared the living daylights out of everyone with his “Essay on the Principle of Population”. Three years later, ministers rattled by Malthus’s catalogue of checks on the population of primitive societies — war, pestilence, famine, and the rest — peeked through their fingers at the runaway population numbers for 1801.

The population of England then was the same as the population of Greater London now. The population of Scotland was almost exactly the current population of metropolitan Glasgow.

Better to have called it “The Official History of Britons”. Chapter by chapter, the authors lead us (wisely, if not too well) from Birth, through School, into Work and thence down the maw of its near neighbour, Death, reflecting all the while on what a difference two hundred years have made to the character of each life stage.

The character of government has changed, too. Rickman wanted a census because he and his parliamentary colleagues had almost no useful data on the population they were supposed to serve. The job of the ONS now, the writers point out, “is to try to make sure that policymakers and citizens can know at least as much about their populations and economies as the internet behemoths.”

It’s true: a picture of the state of the nation taken every ten years just doesn’t provide the granularity that could be fetched, more cheaply and more efficiently, from other sources: “smaller surveys, Ordnance Survey data, GP registrations, driving licence details…”

But this too is true: near where I live there is a pedestrian crossing. There is a button I can push, to change the lights, to let me cross the road. I know that in daylight hours, the button is a dummy, that the lights are on a timer, set in some central office, to smooth the traffic flow. Still, I press that button. I like that button. I appreciate having my agency acknowledged, even in a notional, fanciful way.

Next year, 2021, I will tell the census who and what I am. It’s my duty as a citizen, and also my right, to answer how I will. If, in 2031, the state decides it does not need to ask me who I am, then my idea of myself as a citizen, notional as it is, fanciful as it is, will be impoverished.

Langlands & Bell move the furniture

Talking with Ben Langlands and Nikki Bell for the Financial Times, 29 September 2020

Ben Langlands and Nikki Bell have spent forty years making architecturally inspired art: meticulous cardboard models of real buildings hung in hardwood frames, or inset under glass in the seats of sculptural kitchen chairs you cannot sit on.

What’s at the centre of their work? What have they done with its heart?

Langlands and Bell split their time between Whitechapel and their studio in Kent. In person, they are warm and funny and garrulous. But just as their house (called Untitled) is designed to melt unobtrusively into the landscape, so their art has a tendency to vanish into the warp and weft of things. This is literally true of their show “Degrees of Truth”, which opened at Sir John Soane’s Museum in London just days before the Coronavirus lockdown. On 1 October, when the museum reopens, visitors may need a minute or two to adjust to the fiendish way the partnership’s new and old work has hidden itself among the eighteenth-century architect’s eclectic collection of stones, carvings, statues, curios and models.

“Soane liked models; we like models,” Langlands says. “But while Soane used models for understanding how to build something, we use them for understanding how and why it was built.”

The couple’s investigative process invites comparisons with Trevor Paglen (who tilts at surveillance systems) and Forensic Architecture (who rebuild erased moments in history using point-clouds and plaster). But while they are known for tackling big political themes, their preferred environment is the domestic interior. They renovated houses for money when they were students; days before the lockdown they invited me to their place in Kent, a self-built house-studio-gallery they designed themselves. No-one at the time imagined they would soon find themselves marooned there. (“We got a lot done over the summer”, Ben Langlands admits over the phone, “but the virtual and the real are different places. It’s unreasonable to expect that you’ll get real-world inspiration off the internet.”)

They say they are neither architects, nor designers, and in the same breath they say their work is about the relationships people establish with each other through their buildings and their furniture. The first work they ever collaborated on, as students at Middlesex Polytechnic (the former Hornsey College of Art) was a pair of kitchens: through a window let into the wall of the worn, grimy old one, you got a fleeting glimpse of the brand-spanking-new one. They believe (rightly) that it is possible to read a world of social relations into, say, the position of chairs around a table.

Not all architectures are material, and this may be why the point of their work sometimes vanishes from sight. Globe Table (2020) is a piece made for the show which now languishes behind the locked doors of Sir John Soane’s Museum. It is a giant white marble globe laced with black lines, marking the world’s major air routes. Nearby, in a case that once held a pistol that supposedly belonged to Napoleon, Virtual World, Medal of Dishonour (2008) is a disc whose enamel rings combine three categories of codes: codes for airports, like LHR or JFK or LAX; then NGOs involved in reconstruction or disaster relief, like the UN, WHO or USAID; finally the acronyms of geopolitical players: IRA, CIA, ETA, ISIS.

A bit of a jump, that, from seating arrangements to airline schedules and security agencies. But this is the territory Langlands and Bell have staked out. Since 1990 they have been exploring the space where physical structures, images, logos and acronyms bleed into each other. Their next show, opening at CCA Kitakyushu in Japan on 16 November, is a museological skit built around the signatures of curators they’ve run into over a 45-year career.

They began from a place of high seriousness. Logoworks (1990) modelled the new corporate offices rising in Frankfurt, and they garnered headlines again when they tackled some iconic West Coast companies in Internet Giants: Masters of the Universe (2018), showing how, in Bell’s words, “companies subliminally bring their identity to the forms of their buildings”.

And the seriousness becomes positively deadly in an upcoming show at Gallery 1957 in Accra. “The Past is Never Dead…” brings to the slave forts of the Ghanaian ‘Gold Coast’ the same forensic eye the artists applied to the “campuses” of Apple and Facebook. There’s the same consistency in typology on show, only this time the buildings’ spiny, angular forms are driven, not by brand marketing, but by the need to defend against sea-borne cannon attack.

“I think architecture is changing,” says Nikki Bell, “in that it’s becoming more object-based.” Algorithmic design encourages planners and architects to treat buildings like scaleless objects, like those vector graphics that expand endlessly without loss of resolution. Some of the most ambitious buildings of our age are, architecturally speaking, simply scaled-up logos.

And, says Ben Langlands, there is another, even more powerful force eroding architecture. “Up until now the most profound influences exercised on us culturally have come from architecture,” he says, “so tangible, so enduring, so powerful, so massive, so complicated and expensive, that it has huge effects on us that last for centuries.” Today, however, those relations are being shaped much more powerfully by social media, “a new kind of architecture which is much more stealthy and hidden.”

Langlands and Bell are committed to studying the world in aesthetic terms, and everything else they might feel or think or say follows from their way of seeing. Once I stop ransacking their work for ideological Easter eggs (are they dystopian? are they anti-capitalist? are they neo-Luddite?), I begin to see what they’re up to. They are looking for beauty, sincere in their conviction that through beauty they will find truth.

Between Logoworks and Internet Giants, and in the shadow of the first Gulf War, the artists began making reliefs, “two-and-a-half-dimensional” wall sculptures that reflected how reconnaissance planes at high altitude saw structures tens of thousands of feet below. “You would get these collapsed views at very compressed angles which now appear very typical of that time.”

Marseille, Cité Radieuse (2001), for example, presents a distorted view of the facade of Le Corbusier’s Unité d’Habitation in Marseille. It’s a model made from a photograph taken at an angle, a meticulous white sculpture that depends almost entirely on the way it’s lit to be at all comprehensible.

Developments in photography and in architecture since the 1990s have replaced those evocative compressed-angle images with drone footage, and this has in its turn informed the tiresome surveillance typology of umpteen videogames — though not before Langlands and Bell earned a Turner Prize nomination for The House of Osama bin Laden, which included a virtual render, explorable via joystick, of a lakeside house bin Laden once occupied.

Forty years into their career, Langlands and Bell continue to chip away at the world with tools that Sir John Soane would have recognised: a sense of form, light, movement and beauty. They say they like new things to investigate. They say they are always travelling, always exploring. But what else can an architecturally minded artist do, once the very idea of architecture has begun to dissolve?

“Langlands & Bell: Degrees of Truth” at Sir John Soane’s Museum, 13 Lincoln’s Inn Fields, London WC2A 3BP, from 1 October 2020.

“Curators’ Signatures” at CCA Kitakyushu from 16 November 2020 to 22 January 2021.

“’The Past is Never Dead…’ the architecture of the Slave Castles of the Ghanaian Gold Coast”, at Gallery 1957, Accra, Ghana, will open in 2021.

Modernity in Mexico

Reading Connected: How a Mexican village built its own cell phone network by Roberto J González for New Scientist, 14 October 2020

In 2013 the world’s news media fell in love with Talea, a Mexican pueblo (population 2400) in Rincón, a remote corner of northern Oaxaca. América Móvil, the telecommunications giant that ostensibly served the area, had refused to provide it with a mobile phone service, so the plucky Taleans had built a network of their own.

Imagine it: a bunch of indigenous maize growers, subsistence farmers with little formal education, besting and embarrassing Carlos Slim, América Móvil’s owner and, according to Forbes magazine at the time, the richest person in the world!

The full story of that short-lived, homegrown network is more complicated, says Roberto González in his fascinating, if somewhat self-conscious account of rural innovation.

Talea was never a backwater. A community that survives Spanish conquest and resists 500 years of interference by centralised government may become many things, but “backward” is not one of them.

On the other hand, González harbours no illusions about how communities, however sophisticated, might resist the pall of globalising capital — or why they would even want to. That homogenising whirlwind of technology, finance and bureaucracy also brings with it roads, hospitals, schools, entertainment, jobs, and medicine that actually works.

For every outside opportunity seized, however, an indigenous skill must be forgotten. Talea’s farmers can now export coffee and other cash crops, but many fields lie abandoned, as the town’s youth migrate to the United States. The village still tries to run its own affairs — indeed, the entire Oaxaca region staged an uprising against centralised Mexican authority in 2006. But the movement’s brutal repression by the state augurs ill for the region’s autonomy. And if you’ve no head for history, well, just look around. Pueblos are traditionally made of mud. It’s a much easier, cheaper, more repairable and more ecologically sensitive material than the imported alternatives. Still, almost every new building here is made of concrete.

In 2012, Talea gave its backing to another piece of imported modernity — a do-it-yourself phone network, assembled by Peter Bloom, a US-born rural development specialist, and Erick Huerta, a Mexican telecommunications lawyer. Both considered access to mobile phone networks and the internet to be a human right.

Also helping — and giving the lie to the idea that the network was somehow a homegrown idea — were “Kino”, a hacker who helped indigenous communities evade state controls, and Minerva Cuevas, a Mexican artist best known for hacking supermarket bar codes.

By 2012 Talea’s telephone network was running off an open-source mobile phone network program called OpenBTS (BTS stands for base transceiver station). Mobiles within range of a base station can communicate with each other, and connect globally over the internet using VoIP (or Voice over Internet Protocol). All the network needed was an electrical power socket and an internet connection — utilities Talea had enjoyed for years.

The network never worked very well. Whenever the internet went down, which it did occasionally, the whole town lost its mobile coverage. Recently the phone company Movistar has moved in with an aggressive plan to provide the region with regular (if costly) commercial coverage. Talea’s autonomous network idea lives on, however, in a cooperative organization of community cell phone networks which today represents nearly seventy pueblos across several different regions in Oaxaca.

Connected is an unsentimental account of how a rural community takes control (even if only for a little while) over the very forces that threaten its cultural existence. Talea’s people are dispersing ever more quickly across continents and platforms in search of a better life. The “virtual Taleas” they create on Facebook and other sites to remember their origins are touching, but the fact remains: 50 years of development have done more to unravel a local culture than 500 years of conquest.

Nuanced and terrifying at the same time

Reading The Drone Age by Michael J. Boyle for New Scientist, 30 September 2020

Machines are only as good as the people who use them. Machines are neutral — just a faster, more efficient way of doing something that we always intended to do. That, anyway, is the argument wielded often by defenders of technology.

Michael Boyle, a professor of political science at La Salle University in Philadelphia, isn’t buying: “the technology itself structures choices and induces changes in decision-making over time,” he explains, as he concludes his concise, comprehensive overview of the world the drone made. In everything from commerce to warfare, spycraft to disaster relief, our menu of choices “has been altered or constrained by drone technology itself”.

Boyle manages to be nuanced and terrifying at the same time. At one moment he’s pointing out the formidable practical obstacles in the way of anyone launching a major terrorist drone attack. In the next, he’s explaining why political assassinations by drone are just around the corner. Turn a page setting out the moral, operational and legal constraints keenly felt by upstanding US military drone pilots, and you’re confronted by their shadowy handlers in government, who operate with virtually no oversight.

Though grounded in just the right level of technical detail, The Drone Age describes, not so much the machines themselves, but the kind of thinking they’ve ushered in: an approach to problems that no longer distinguishes between peace and war.

In some ways this is a good thing. Assuming that war is inevitable, what’s not to welcome about a style of warfare that involves working through a kill list, rather than exterminating a significant proportion of the enemy’s population?
Well, two things. For US readers, there’s the way a few careful drone strikes proliferated under Obama and especially under Trump into a global counter-insurgency air platform. And for all of us, there’s the way peacetime living is affected, too. “It is hard to feel like a human… when reduced to a pixelated dot under the gaze of a drone,” Boyle writes. If the pool of information gathered about us expands, but not the level of understanding or sympathy for us, where then is the positive for human society?

Boyle brings proper philosophical thinking to our relationship with technology. He’s particularly indebted to the French philosopher Jacques Ellul, whose The Technological Society (1964) transformed the way we think about machines. Ellul argued that when we apply technology to a problem, we adopt a mode of thinking that emphasizes efficiency and instrumental rationality, but also dehumanizes the problem.
Applying this lesson to drone technology, Boyle writes: “Instead of asking why we are using aircraft for a task in the first place, we tend to debate instead whether the drone is better than the manned alternative.”

This blinkered thinking, on the part of their operators, explains why drone activities almost invariably alienate the very people they are meant to benefit: non-combatants, people caught up in natural disasters, the relatively affluent denizens of major cities. Indeed, the drone’s ability to intimidate seems on balance to outweigh every other capability.

The UN has been known to fly unarmed Falco surveillance drones low to the ground to deter rebel groups from gathering. If you adopt the kind of thinking Ellul described, then this must be a good thing — a means of scattering hostiles, achieved efficiently and safely. In reality, there’s no earthly reason to suppose violence has been avoided: only redistributed (and let’s not forget how Al Qaeda, decimated by constant drone strikes, has reinvented itself as a global internet brand).

Boyle warns us at the start that different models of drone vary so substantially “that they hardly look like the same technology”. And yet The Drone Age keeps this heterogeneous flock of disruptive technologies together long enough to give it real historical and intellectual coherence. If you read one book about drones, this is the one. But it is just as valuable about surveillance, or the rise of information warfare, or the way the best intentions can turn the world we knew on its head.