Modernity in Mexico

Reading Connected: How a Mexican village built its own cell phone network by Roberto J González for New Scientist, 14 October 2020

In 2013 the world’s news media fell in love with Talea, a Mexican pueblo (population 2,400) in the Rincón, a remote corner of northern Oaxaca. América Móvil, the telecommunications giant that ostensibly served the area, had refused to provide the town with mobile phone service, so the plucky Taleans had built a network of their own.

Imagine it: a bunch of indigenous maize growers, subsistence farmers with little formal education, besting and embarrassing Carlos Slim, América Móvil’s owner and, according to Forbes magazine at the time, the richest person in the world!

The full story of that short-lived, homegrown network is more complicated, says Roberto González in his fascinating, if somewhat self-conscious, account of rural innovation.

Talea was never a backwater. A community that survives Spanish conquest and resists 500 years of interference by centralised government may become many things, but “backward” is not one of them.

On the other hand, González harbours no illusions about how communities, however sophisticated, might resist the pall of globalising capital — or why they would even want to. That homogenising whirlwind of technology, finance and bureaucracy also brings with it roads, hospitals, schools, entertainment, jobs, and medicine that actually works.

For every outside opportunity seized, however, an indigenous skill must be forgotten. Talea’s farmers can now export coffee and other cash crops, but many fields lie abandoned, as the town’s youth migrate to the United States. The village still tries to run its own affairs — indeed, the entire Oaxaca region staged an uprising against centralised Mexican authority in 2006. But the movement’s brutal repression by the state augurs ill for the region’s autonomy. And if you’ve no head for history, well, just look around. Pueblos are traditionally made of mud. It’s a much easier, cheaper, more repairable and more ecologically sensitive material than the imported alternatives. Still, almost every new building here is made of concrete.

In 2012, Talea gave its backing to another piece of imported modernity — a do-it-yourself phone network, assembled by Peter Bloom, a US-born rural development specialist, and Erick Huerta, a Mexican telecommunications lawyer. Both considered access to mobile phone networks and the internet to be a human right.

Also helping — and giving the lie to the idea that the network was somehow homegrown — were “Kino”, a hacker who helped indigenous communities evade state controls, and Minerva Cuevas, a Mexican artist best known for hacking supermarket bar codes.

By 2012 Talea’s telephone network was running off an open-source mobile phone network program called OpenBTS (BTS stands for base transceiver station). Mobiles within range of a base station can communicate with each other, and connect globally over the internet using VoIP (or Voice over Internet Protocol). All the network needed was an electrical power socket and an internet connection — utilities Talea had enjoyed for years.
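
The set-up is simple enough to sketch. Here is a minimal illustrative sketch in Python (my own, with hypothetical extension numbers; it is not OpenBTS’s actual interface): the base station switches village-internal calls itself, and hands everything else to a VoIP gateway riding the town’s internet uplink.

```python
# A minimal sketch (not OpenBTS's real API) of the routing idea behind a
# community cell network: calls between local handsets are switched by the
# base station itself; everything else travels as VoIP over the internet.

LOCAL_SUBSCRIBERS = {"7001", "7002", "7003"}  # hypothetical local extensions

def route_call(caller: str, callee: str) -> str:
    """Decide how a call is carried, in the spirit of an OpenBTS-style setup."""
    if callee in LOCAL_SUBSCRIBERS:
        # Village-internal traffic never leaves the base station.
        return f"{caller} -> {callee}: switched locally"
    # Anything non-local rides the village's internet uplink as VoIP.
    return f"{caller} -> {callee}: handed to the VoIP gateway"

print(route_call("7001", "7002"))         # a call within the village
print(route_call("7001", "+525512345"))   # a long-distance call, via VoIP
```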

The network never worked very well. Whenever the internet went down, which it did occasionally, the whole town lost its mobile coverage. Recently the phone company Movistar has moved in with an aggressive plan to provide the region with regular (if costly) commercial coverage. Talea’s autonomous network idea lives on, however, in a cooperative organisation of community cell phone networks that today represents nearly seventy pueblos across several regions of Oaxaca.

Connected is an unsentimental account of how a rural community takes control (even if only for a little while) over the very forces that threaten its cultural existence. Talea’s people are dispersing ever more quickly across continents and platforms in search of a better life. The “virtual Taleas” they create on Facebook and other sites to remember their origins are touching, but the fact remains: 50 years of development have done more to unravel a local culture than 500 years of conquest.

Nuanced and terrifying at the same time

Reading The Drone Age by Michael J. Boyle for New Scientist, 30 September 2020

Machines are only as good as the people who use them. Machines are neutral — just a faster, more efficient way of doing something that we always intended to do. That, anyway, is the argument wielded often by defenders of technology.

Michael Boyle, a professor of political science at La Salle University in Philadelphia, isn’t buying: “the technology itself structures choices and induces changes in decision-making over time,” he explains, as he concludes his concise, comprehensive overview of the world the drone made. In everything from commerce to warfare, spycraft to disaster relief, our menu of choices “has been altered or constrained by drone technology itself”.

Boyle manages to be nuanced and terrifying at the same time. At one moment he’s pointing out the formidable practical obstacles in the way of anyone launching a major terrorist drone attack. In the next, he’s explaining why political assassinations by drone are just around the corner. Turn a page setting out the moral, operational and legal constraints keenly felt by upstanding US military drone pilots, and you’re confronted by their shadowy handlers in government, who operate with virtually no oversight.

Though grounded in just the right level of technical detail, The Drone Age describes not so much the machines themselves as the kind of thinking they’ve ushered in: an approach to problems that no longer distinguishes between peace and war.

In some ways this is a good thing. Assuming that war is inevitable, what’s not to welcome about a style of warfare that involves working through a kill list, rather than exterminating a significant proportion of the enemy’s population?
Well, two things. For US readers, there’s the way a few careful drone strikes proliferated, under Obama and especially under Trump, into a global counter-insurgency air platform. And for all of us, there’s the way peacetime living is affected, too. “It is hard to feel like a human… when reduced to a pixelated dot under the gaze of a drone,” Boyle writes. If the pool of information gathered about us expands, but not the level of understanding or sympathy for us, where then is the positive for human society?

Boyle brings proper philosophical thinking to our relationship with technology. He’s particularly indebted to the French philosopher Jacques Ellul, whose The Technological Society (1964) transformed the way we think about machines. Ellul argued that when we apply technology to a problem, we adopt a mode of thinking that emphasizes efficiency and instrumental rationality, but also dehumanizes the problem.
Applying this lesson to drone technology, Boyle writes: “Instead of asking why we are using aircraft for a task in the first place, we tend to debate instead whether the drone is better than the manned alternative.”

This blinkered thinking, on the part of their operators, explains why drone activities almost invariably alienate the very people they are meant to benefit: non-combatants, people caught up in natural disasters, the relatively affluent denizens of major cities. Indeed, the drone’s ability to intimidate seems on balance to outweigh every other capability.

The UN has been known to fly unarmed Falco surveillance drones low to the ground to deter rebel groups from gathering. If you adopt the kind of thinking Ellul described, then this must be a good thing — a means of scattering hostiles, achieved efficiently and safely. In reality, there’s no earthly reason to suppose violence has been avoided: only redistributed (and let’s not forget how Al Qaeda, decimated by constant drone strikes, has reinvented itself as a global internet brand).

Boyle warns us at the start that different models of drone vary so substantially “that they hardly look like the same technology”. And yet The Drone Age keeps this heterogeneous flock of disruptive technologies together long enough to give it real historical and intellectual coherence. If you read one book about drones, this is the one. But it is just as valuable on surveillance, on the rise of information warfare, and on the way the best intentions can turn the world we knew on its head.

Thoughts from a Camden bench

Reading The Lonely Century by Noreena Hertz for the Telegraph, 26 September 2020

Economist Noreena Hertz’s new book, about our increasing loneliness in a society geared to mutual exploitation, is explosive stuff. I would guess it was commissioned a while ago, then retrofitted for the Covid-19 pandemic — though I defy you to find any unsightly welding marks. Hertz is too good a writer for that, and her idea too timely, too urgent, and too closely argued to be upstaged by a mere crisis.

Loneliness is bad for our physical and mental health. Lack of social interaction makes us both labile and aggressive, suckers for any Dionysian movement that offers us even a shred of belonging. These lines of argument precede Covid-19 by years, and have been used to explain everything from changing dating patterns among the young to Donald Trump’s win in 2016. But, goodness, what a highly salted dish they make today, now that we’re consigned to our homes and free association is curbed by law.

Under lockdown, we’re now even more reliant on the internet to maintain our working and personal identities. Here again, our immediate experiences sharpen Hertz’s carefully thought-out arguments. Social media is making us unhappy. It’s eroding our civility. It’s driving up rates of suicide. And so on — the arguments here are well-rehearsed, though Hertz’s synthesis is certainly compelling. “Connecting the world may be their goal,” she writes, thinking of the likes of Instagram and TikTok, “but it seems that if in the process connections become shallower, crueller or increasingly distorted, so be it.”

Now that we spend so much time indoors, we’re becoming ever more aware of how our outdoor civic space has been dismantled. And here is Hertz once again, waiting for us outside the shuttered wreck of an abandoned library. Actually libraries are the least of it: these days we don’t even get to exchange a friendly word with the supermarket staff who used to check out our shopping. And heaven help us if, on our way back to house arrest, we try to take the weight off our feet on one of those notorious municipal “Camden benches” — tiered and slanted blocks of concrete designed, not to allow public rest of any sort, but to keep “loiterers” moving.

Having led us, Virgil-like, through the circles of virtual and IRL Hell, Hertz ushers us into the purgatory she calls The Loneliness Economy (the capital letters are hers), and describes some of the ways the private sector has tried to address our cultural atomisation. Consider, for example, the way WeWork’s designers built the corridors and stairways in their co-working spaces deliberately narrow, so that users would have to make eye contact with each other. Such strategies are a bit “nudgy” for my taste, but sooner them than that bloody Camden bench.

The trouble with commercialising solutions for loneliness should be obvious: the lonelier we are, the more money this sector will make. So the underlying social drivers of loneliness will be ignored, and only the symptoms will be addressed. “They want to sell the benefit of working in close proximity with others, but with none of the social buy-in, the *hard work* that community requires,” Hertz writes, and wonders whether the people joining many of these new, shiny, commercialised communities “have the time or lifestyles that community-building demands.”

Bringing such material to life necessarily means cherry-picking your examples. An impatient or hostile reader might take exception to the graduate student who spent so long curating her “Jen Goes Ziplining” Instagram post that she never actually went ziplining; also to the well-paid executive who lives out of his car and spends all his money on platonic cuddling services. The world is never short of foolish and broken people, and two swallows do not make a summer.

Still, I can’t see how Hertz’s account is harmed by such vividly rendered first-person research. And readers keen to see Hertz’s working have 130 pages of notes to work from.

More seriously, The Lonely Century suffers from the same limitation as Rutger Bregman’s recent (and excellent) Humankind, about how people are basically good. Let’s give each other a break, says Bregman. Let’s give each other the time of day, says Hertz. But neither author (and it’s not for want of trying) can muster much evidence to suggest the world is turning in the direction they would favour. Bregman bangs on about a school that’s had its funding cut; Hertz about a community cafe that’s closed after running out of money. There are other stories, lots of them, but they are (as Hertz herself concedes at one point) a bit “happy-clappy”.

One of Hertz’s many pre-publication champions calls her book “inspiring”. All it inspired in me, I’m afraid, was terror. The world is full of socially deprogrammed zombies. The Technological Singularity is here, the robots are us, and our open plan office-bound Slack messages constitute, at a global level, some great vegetative Overmind.

Hertz doesn’t go this far. But I do. Subtitled “coming together in a world that’s pulling apart”, The Lonely Century confirmed me in my internal exile, and had me bolting the door to the world even more firmly. I don’t even care that I am the problem Hertz is working so desperately hard to fix. I’m not coming out until it’s all over.

Flame brightly and flame out

Reading Kindred by Rebecca Wragg Sykes for New Scientist, 19 August 2020

How we began to unpick our species’ ancient past in the late 19th century is an astounding story, but not always a pretty one. As well as achieving tremendous insights into the age of the Earth and how life evolved, scholars also entertained astonishingly bad ideas about superiority.

Some of these continue today. Why do we assume that Neanderthals, who flourished for 400,000 years, were somehow inferior to Homo sapiens or less fit to survive?

In Kindred, a history of our understanding of Neanderthals, Rebecca Wragg Sykes separates perfectly valid and reasonable questions – for example, “why aren’t Neanderthals around any more?” – from the thinking that casts our ancient relatives as “dullard losers on a withered branch of the family tree”.

An expert in palaeolithic archaeology, with a special interest in the cognitive aspects of stone tool technologies, Sykes provides a fascinating and detailed picture of a field transformed almost out of recognition over the past thirty years. New technologies involve everything from gene sequencing, isotopes and lasers to powerful volumetric algorithms. Well-preserved sites are now not merely dug and brushed: they are scanned and sniffed. High-powered optical microscopes pick out slice and chop marks, electron beams trace the cross-sections of scratches at the nano-scale, and rapid collagen identification techniques determine the type of animal from even tiny bone fragments.

The risk with any new forensic tool is that, in our excitement, we over-interpret the results it throws up. As Sykes wisely says, “A balance must be struck between caution and not ignoring singular artefacts simply because they’re rare or wondrous.”

Many gushing media stories about our ancient relatives don’t last beyond the first news cycle: though Neanderthals may have performed some funerary activity, they didn’t throw flowers on their loved ones’ graves, or decorate their remains in any way. Other stories, though, continue to accumulate a weight of circumstantial evidence. We’ve known for some years that some Neanderthals actually tanned leather; now it seems they may also have spun thread.

An exciting aspect of this book is the way it refreshes our ideas about our own place in hominin evolution. Rather than congratulating other species when they behave “like us”, Sykes shows that it is much more fruitful to see how human talents may have evolved from behaviours exhibited by other species. Take the example of art. We may ask whether the circular stone assemblies, discovered in a cave near Bruniquel in southern France in 2016, were meant by their Neanderthal creators as monuments. We may wonder at the significance of the Neanderthal handprints and ladder designs painted on the walls of three caves in Spain. In both cases, we’d be asking the wrong questions, Sykes says: while undoubtedly striking, Neanderthal art “might not be a massive cognitive leap for hominins who probably already understood the idea of representation.” Animal footprints are effectively symbols already, and even simple tracking “requires an ‘idealised’ form to be kept in mind.”
Captive chimpanzees, given painting materials, enjoy colouring and marking surfaces, though they’re not in the least bit invested in the end result of their labours. So the significance and symbolism of Neanderthal art may simply be that Neanderthals had fun making it.

The Neanderthals of Kindred are not cadet versions of ourselves. They don’t perform “primitive burials”, and they don’t make “proto-art”. They had their own needs, urges, enjoyments, and strategies for survival.

They were not alone and, best of all, they have not quite vanished. Neanderthal nuclear DNA contains glimmers of very ancient encounters between them and other hominin species. Recent research suggests that interbreeding between Neanderthals and Denisovans, and between Neanderthals and Homo sapiens, was effectively the norm. “Modern zoology’s concept of allotaxa may be more appropriate for what Neanderthals were to us,” Sykes explains. Like modern cattle and yaks, we were closely related species that varied in bodies and behaviours, yet could also reproduce.

Neanderthals were never very many. “At any point in time,” Sykes says, “there may have been fewer Neanderthals walking about than commuters passing each day through Clapham Common, London’s busiest train station.” With dietary demands that took a monstrous toll on their environment, they were destined to flame brightly and flame out. That doesn’t make them less than us. It simply makes them earlier.

They were part of our family, and though we carry some part of them inside us, we will never see their like again. This, for my money, is Sykes’s finest achievement. Seeing Neanderthals through her eyes, we cannot but mourn their passing.

Know when you’re being played

Calling Bullshit by Jevin D West and Carl T Bergstrom, and Science Fictions by Stuart Ritchie, reviewed for The Telegraph, 8 August 2020

Last week I received a press release headlined “1 in 4 Brits say ‘No’ to Covid vaccine”. This was such staggeringly bad news, I decided it couldn’t possibly be true. And sure enough, it wasn’t.

Armed with the techniques taught me by biologist Carl Bergstrom and data scientist Jevin West, I “called bullshit” on this unwelcome news, which after all bore all the hallmarks of clickbait.

For a start, the question on which the poll was based was badly phrased. On closer reading it turns out that 25 per cent would decline if the government “made a Covid-19 vaccine available tomorrow”. Frankly, if it was offered *tomorrow* I’d be a refusenik myself. All things being equal, I prefer my medicines tested first.

But what of the real meat of the claim — that daunting figure of “25 per cent”? It turns out that a sample of 2000 was selected from a pool of 17,000 drawn from the self-selecting community of subscribers to a lottery website. But hush my cynicism: I am assured that the sample of 2000 was “within +/-2% of ONS quotas for Age, Gender, Region, SEG, and 2019 vote, using machine learning”. In other words, some effort has been made to make the sample of 2000 representative of the UK population (but only on five criteria, which is not very impressive. And that whole “+/-2%” business means that up to 40 of the sample weren’t representative of anything).

For this, “machine learning” had to be employed (and, later, “a proprietary machine learning system”)? Well, of course not.  Mention of the miracle that is artificial intelligence is almost always a bit of prestidigitation to veil the poor quality of the original data. And anyway, no amount of “machine learning” can massage away the fact that the sample was too thin to serve the sweeping conclusions drawn from it (“Only 1 in 5 Conservative voters (19.77%) would say No” — it says, to two decimal places, yet!) and is anyway drawn from a non-random population.
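
For the record, here is the back-of-envelope arithmetic (a sketch that assumes, generously, a simple random sample, which a self-selected lottery-site panel emphatically is not):

```python
import math

n = 2000   # the poll's sample size
p = 0.25   # the reported "1 in 4" refusal rate
z = 1.96   # multiplier for a 95 per cent confidence interval

# Margin of error for a proportion, under simple random sampling.
moe = z * math.sqrt(p * (1 - p) / n)
print(f"margin of error: +/-{moe:.1%}")    # about +/-1.9%

# And the "+/-2% of ONS quotas" caveat, in absolute terms:
print(f"2% of {n} respondents = {int(0.02 * n)} people")   # the 'up to 40'
```

Even on that charitable reading, the uncertainty runs to nearly two percentage points either way, which makes quoting 19.77 per cent to two decimal places pure theatre.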

Exhausted yet? Then you may well find Calling Bullshit essential reading. Even if you feel you can trudge through verbal bullshit easily enough, this book will give you the tools to swim through numerical snake-oil. And this is important, because numbers easily slip past the defences we put up against mere words. Bergstrom and West teach a course at the University of Washington from which this book is largely drawn, and hammer this point home in their first lecture: “Words are human constructs,” they say; “Numbers seem to come directly from nature.”

Shake off your naive belief in the truth or naturalness of the numbers quoted in news stories and advertisements, and you’re halfway towards knowing when you’re being played.

Say you diligently applied the lessons in Calling Bullshit, and really came to grips with percentages, causality, selection bias and all the rest. You may well discover that you’re now ignoring everything — every bit of health advice, every overwrought NASA announcement about life on Mars, every economic forecast, every exit poll. Internet pioneer Jaron Lanier reached this point last year when he came up with Ten Arguments for Deleting Your Social Media Accounts Right Now. More recently the best-selling Swiss pundit Rolf Dobelli has ordered us to Stop Reading the News. Both deplore our current economy of attention, which values online engagement over the provision of actual information (as when, for instance, a review like this one gets headlined “These Two Books About Bad Data Will Break Your Heart”: instead of being told what the piece is about, you’re being sold on the promise of an emotional experience).

Bergstrom and West believe that public education can save us from this torrent of micro-manipulative blither. Their book is a handsome contribution to that effort. We’ve lost Lanier and Dobelli, but maybe the leak can be stopped up. This, essentially, is what the authors are about; they’re shoring up the Enlightenment ideal of a civic society governed by reason.

Underpinning this ideal is science, and the conviction that the world is assembled on a bedrock of fundamentally unassailable truths.

Philosophical nit-picking apart, science undeniably works. But in Science Fictions Stuart Ritchie, a psychologist based at King’s College, shows just how contingent and gimcrack and even shoddy the whole business can get. He has come to praise science, not to bury it; nevertheless, his analyses of science’s current ethical ills — fraud, hype, negligence and so on — are devastating.

The sheer number of problems besetting the scientific endeavour becomes somewhat more manageable once we work out which ills are institutional, which have to do with how scientists communicate, and which are existential problems that are never going away whatever we do.

Our evolved need to express meaning through stories is an existential problem. Without stories, we can do no thinking worth the name, and this means that we are always going to prioritise positive findings over negative ones, and find novelties more charming than rehearsed truths.

Such quirks of the human intellect can be and have been corrected by healthy institutions at least some of the time over the last 400-odd years. But our unruly mental habits run wildly out of control once they are harnessed to a media machine driven by attention. And the blame for this is not always easily apportioned: “The scenario where an innocent researcher is minding their own business when the media suddenly seizes on one of their findings and blows it out of proportion is not at all the norm,” writes Ritchie.

It’s easy enough to mount a defence of science against the tin-foil-hat brigade, but Ritchie is attempting something much more discomforting: he’s defending science against scientists. Fraudulent and negligent individuals fall under the spotlight occasionally, but institutional flaws are Ritchie’s chief target.

Reading Science Fictions, we see field after field fail to replicate results, correct mistakes, identify the best lines of research, or even begin to recognise talent. In Ritchie’s proffered bag of solutions are desperately needed reforms to the way scientific work is published and cited, and some more controversial ideas about how international mega-collaborations may enable science to catch up on itself and check its own findings effectively (or indeed at all, in the dismal case of economic science).

At best, these books together offer a path back to a civic life based on truth and reason. At worst, they point towards one that’s at least a little bit defended against its own bullshit. Time will tell whether such efforts can genuinely turn the ship around, or are simply here to entertain us with a spot of deckchair juggling. But there’s honest toil here, and a lot of smart thinking with it. Reading both, I was given a fleeting, dizzying reminder of what it once felt like to be a free agent in a factual world.

An intellectual variant of whack-a-mole

Reading Joseph Mazur’s The Clock Mirage for The Spectator, 27 June 2020 

Some books elucidate their subject, mapping and sharpening its boundaries. The Clock Mirage, by the mathematician Joseph Mazur, is not one of them. Mazur is out to muddy time’s waters, dismantling the easy opposition between clock time and mental time, between physics and philosophy, between science and feeling.

That split made little sense even in 1922, when the philosopher Henri Bergson and the young physicist Albert Einstein (much against his better judgment) went head-to-head at the Société française de philosophie in Paris to discuss the meaning of relativity. (Or that was the idea. Actually they talked at complete cross-purposes.)

Einstein won. At the time, there was more novel insight to be got from physics than from psychological introspection. But time passes, knowledge accrues and fashions change. The inference (not Einstein’s, though people associate it with him) that time is a fourth dimension, commensurable with the three dimensions of space, is looking decidedly frayed. Meanwhile Bergson’s psychology of time has been pruned by neurologists and has put out new shoots.

Our lives and perceptions are governed, to some extent, by circadian rhythms, but there is no internal clock by which we measure time in the abstract. Instead we construct events, and organise their relations, in space. Drivers, thinking they can make up time with speed, acquire tickets faster than they save seconds. Such errors are mathematically obvious, but spring from the irresistible association we make (poor vulnerable animals that we are) between speed and survival.
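
The arithmetic behind the speeding example is worth a moment (my illustrative numbers, not Mazur’s):

```python
# Time saved over a 10 km stretch by driving 10 km/h faster, at city and
# motorway speeds. Illustrative figures, not Mazur's own example.

def minutes(distance_km: float, speed_kmh: float) -> float:
    return 60 * distance_km / speed_kmh

d = 10  # kilometres
print(f"50 -> 60 km/h saves {minutes(d, 50) - minutes(d, 60):.1f} minutes")      # ~2.0
print(f"110 -> 120 km/h saves {minutes(d, 110) - minutes(d, 120):.1f} minutes")  # ~0.5
```

The same 10 km/h increment buys less and less time the faster you already go.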

The more we understand about non-human minds, the more eccentric and sui generis our own time sense seems to be. Mazur ignores the welter of recent work on other animals’ sense of time — indeed, he winds the clock back several decades in his careless talk of animal ‘instincts’ (no one in animal behaviour uses the ‘I’ word any more). For this, though, I think he can be forgiven. He has put enough on his plate.

Mazur begins by rehearsing how the Earth turns, how clocks were developed, and how the idea of universal clock time came hot on the heels of the railway (mistimed passenger trains kept running into each other). His mind is engaged well enough throughout this long introduction, but around page 47 his heart beats noticeably faster. Mazur’s first love is theory, and he handles it well, using Zeno’s paradoxes to unpack the close relationship between psychology and mathematics.

In Zeno’s famous foot race, by the time fleet-footed Achilles catches up to the place where the plodding tortoise was, the tortoise has moved a little bit ahead. That keeps happening ad infinitum, or at least until Newton (or Leibniz, depending on who you think got to it first) pulls calculus out of his hat. Calculus is an algebraic way of handling (well, fudging) the continuity of the number line. It handles vectors and curves and smooth changes — the sorts of phenomena you can measure only if you’re prepared to stop counting.
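
Spelled out with illustrative numbers (a standard reconstruction, not Mazur’s own notation): give the tortoise a 100-metre head start and let Achilles run ten times as fast, and the infinitely many catching-up stages sum to a finite distance:

```latex
100 + 10 + 1 + \tfrac{1}{10} + \cdots
  = \sum_{k=0}^{\infty} \frac{100}{10^{k}}
  = \frac{100}{1 - \frac{1}{10}}
  = 111.\overline{1} \text{ metres}
```

Achilles draws level after a little over 111 metres; the paradox dissolves once an infinite series is allowed a finite sum, which is exactly the move calculus formalises.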

But what if reality is granular after all, and time is quantised, arriving in discrete packets like the frames of a celluloid film stuttering through the gate of a projector? In this model of time, calculus is redundant and continuity is merely an illusion. Does it solve Zeno’s paradox? Perhaps it makes it 100 times more intractable. Just as motion needs time, time needs motion, and ‘we might wonder what happens to the existence of the world between those falling bits of time sand’.

This is all beautifully done, and Mazur, having hit his stride, maintains form throughout the rest of the book, though I suspect he has bitten off more than any reader could reasonably want to swallow. Rather than containing and spotlighting his subject, Mazur’s questions about time turn out (time and again, I’m tempted to say) to be about something completely different, as though we were playing an intellectual variant of whack-a-mole.

But this, I suppose, is the point. Mazur quotes Henri Poincaré:

Not only have we not direct intuition of the equality of two periods, but we have not even direct intuition of the simultaneity of two events occurring in two different places.

Our perception of time is so fractured, so much an ad hoc amalgam of the chatter of numerous, separately evolved systems (for the perception of motion; for the perception of daylight; for the perception of risk, and on and on — it’s a very long list), it may in the end be easier to abandon talk of time altogether, and for the same reason that psychologists, talking shop among themselves, eschew vague terms such as ‘love’.

So much of what we mean by time, as we perceive it day to day, is really rhythm. So much of what physicists mean by time is really space. Time exists, as love exists, as a myth: real because contingent, real because constructed, a catch-all term for phenomena bigger, more numerous and far stranger than we can yet comprehend.

In praise of tiny backpacks

Reading The Bird Way by Jennifer Ackerman for New Scientist, 17 June 2020

Visit the Australian National Botanic Gardens in Canberra, and you may stumble upon an odd sight: a human figure, festooned with futuristic-looking monitoring gear. Is it a statue? No: when children poke it (this happens a lot) the statue blinks.

Meet Jessica McLachlan, a researcher at the Australian National University, at work studying the fine, never-before-detected details of bird behaviour. The gear she wears is what it takes to observe the world as birds themselves see it. Not to put too fine a point on it, birds are a lot faster than we are.

Bird brains are miracles of miniaturisation. Their neurons are smaller, more numerous, and more densely packed. They differ architecturally too, cramming more and faster processing power into a smaller space.

This means that in the bird world, things happen fast — sometimes too fast for us to see. (Unless you film blue-capped cordon-bleus at 300 frames per second, how could you possibly know that they tap-dance in time with their own singing?)

Modern recording equipment enables us to study an otherwise invisible world. For example, by strapping tiny backpacks to seabirds, and correlating their flight with known ocean activities, we’ve discovered that birds have a quite extraordinary sense of smell, following krill across apparently featureless horizons that are, for them, “elaborate landscapes of eddying odor plumes.”

Science and nature writer Jennifer Ackerman’s fresh, reinvigorated account of the world of birds is arranged on traditional lines (sections cover her subjects’ singing, work, play, love and parenting), but it is informed, and transformed, by accounts of cybernetically enhanced field studies, forensic analyses and ambitious global collaboration. She has travelled far and talked to many, she is generous to a fault with her academic sources, and her descriptions of her visits, field trips and adventures are engaging, but never obtrusive.

Her account centres on the bird life of Australia. This is reasonable: bird song began here. Songbirds, parrots and pigeons evolved here. In Australia, birds fill more ecological niches, are smarter, and live longer.

Like every ornithological writer before her, Ackerman is besotted by the sheer variety of her subjects. One of the most endearing passages in her book compares parrots and corvids. Both are intelligent, highly social species, but, after 92 million years of evolutionary separation, the similarities stop there. Ravens are frightened of novelty. Keas lap it up. Keas, confronted by a new researcher, will abandon a task to go play with the stranger. Ravens will simply wig out. Ravens have a strict feeding hierarchy. Curious male keas will good-naturedly stuff food down an unfamiliar youngster’s gullet to the point where it’s fending them off.

Ackerman’s account is often jaw-dropping, and never more shocking than when she assembles the evidence for the cultural sophistication of bird song. Birds decode far more from sounds than we do, and until recently we’ve been deaf to their acoustic complexity. Japanese tits use 11 different notes in their songs, and it’s the combination of notes that encodes information. Swap two notes around, and you elicit different responses. If this isn’t quite syntax, it’s something very like it. The drongo has absolute conscious control over its song, using up to 45 mimicked alarm calls to frighten other species, such as meerkats, into dropping their lunch — and it will target specific warning calls at individuals so they don’t twig what’s going on.

Meanwhile, many different species of bird worldwide, from Australia to Africa to the Himalayas, appear to have developed a universal, and universally comprehensible, signal to warn of the approach of brood parasites (cuckoos and the like).

If the 20th century was the golden age of laboratory study, the 21st is shaping up to become a renaissance for the sorts of field studies Charles Darwin would recognise. Now that cybernetically enhanced researchers like Jessica McLachlan can follow individuals, we have a chance to gauge and understand the intelligence exhibited by the animals around us. Intelligence en masse is, after all, really hard to spot. It doesn’t suit tabulation, or statistical analysis. It doesn’t make itself known from a distance. Intelligent behaviour is unusual. It’s novel. It takes a long time to spot.

If birds are as intelligent as so many of the stories in Ackerman’s eye-opening book suggest, then this, of course, may only be the start of our problems. In Sweden, corvid researchers Mathias and Helena Osvath have befriended a raven who turns up for their experiments, aces them, then flaps off again. “In the ethics section of grant applications,” says Mathias, “it’s difficult to explain…”

Humanity unleashed

Reading Rutger Bregman’s Humankind: A hopeful history for New Scientist, 10 June 2020

In 1651 English philosopher Thomas Hobbes startled the world with Leviathan, an account of the good and evil lurking in human nature. Hobbes argued that people, left to their own devices, were naturally vicious. (Writing in the aftermath of Europe’s cataclysmic Thirty Years War, he had the evidence all around him.) Ultimately, though, Hobbes’s vision was positive. Humans are also naturally gregarious. Gathering in groups, eventually we become more together than we were apart. We become villages, societies, whole civilisations.

Hobbes’s argument can be taken in two ways. We can glory in what we have built over the course of generations. Or we can live in terror of that future moment when the thin veneer of our civilisation cracks and lets all the devils out.

Even as I was writing this review, I came across the following story. In April, at the height of the surge in coronavirus cases, Americans purchased more guns than at any other point since the FBI began collecting data over 20 years ago. I would contend that these are not stupid people. They are, I suspect, people who have embraced a negatively Hobbesian view of the world, and are expecting the apocalypse.

Belief in innate human badness is self-fulfilling. (Bregman borrows from medical jargon and calls it a nocebo: a negative expectation that makes one’s circumstances progressively worse.) And we do seem to try everything in our power to think the worst of ourselves. For instance, we give our schoolchildren William Golding’s 1954 novel Lord of the Flies to read (and fair enough — it’s a very good book) but nobody thinks to mention that the one time boys really were trapped on a desert island for over a year — on a rocky Polynesian atoll without fresh water, in 1965 — they survived, stayed fit and healthy, successfully set a lad’s broken leg, and formed friendships that have lasted a lifetime. Before Bregman came along you couldn’t even find this story on the internet.

From this anecdotal foundation, Bregman assembles his ferocious argument, demolishing one Hobbesian shibboleth after another. Once the settlers of Easter Island had chopped down all the trees on their island (the subject of historian Jared Diamond’s bestselling 2005 book Collapse), their civilisation did not fall apart. It thrived — until European voyagers arrived, bringing diseases and the slave trade. When Catherine Susan Genovese was murdered in New York City on 13 March 1964 (the notorious incident which added the expression “bystander effect” to the psychological lexicon), her neighbours did not watch from their windows and do nothing. They called the police. One neighbour rushed out into the street and held her as she lay dying.

Historians and reporters can’t be trusted; neither, alas, can scientists. Philip Zimbardo’s Stanford prison experiment, conducted in August 1971, was supposed to have spun out of control in less than two days, as students playing prison guards set about abusing and torturing fellow students cast in the role of prisoners. Almost everything that has been written about the experiment is not merely exaggerated, it’s wrong. When in 2001 the BBC restaged the experiment, the guards and the prisoners spent the whole time sitting around drinking tea together.

Bregman also discusses the classic “memory” experiment by Stanley Milgram, in which a volunteer is persuaded to electrocute a person nearly to death (an actor, in fact, and in on the wheeze). The problem here is less the experimental design and more the way the experiment was interpreted.

Early accounts took the experiment to mean that people are robots, obeying orders unthinkingly. Subsequent close study of the transcripts shows something rather different: that people are desperate to do the right thing, and their anxiety makes them frighteningly easy to manipulate.

If we’re all desperate to be good people, then we need a new realism when it comes to human nature. We can’t any longer assume that because we are good, those who oppose us must be bad. We must learn to give people a chance. We must learn to stop manipulating people the whole time. From schools to prisons, from police forces to political systems, Bregman visits projects around the world that, by behaving in ways that can seem surreally naive, have resolved conflicts, reformed felons, encouraged excellence and righted whole economies.

This isn’t an argument between left and right, between socialist and conservative; it’s about what we know of human nature and how we can accommodate a better model of it into our lives. With Humankind Bregman moves from politics, his usual playground, into psychological, even spiritual territory. I am fascinated to know where his journey will lead.


Joy in the detail

Reading Charles Darwin’s Barnacle and David Bowie’s Spider by Stephen Heard for the Spectator, 16 May 2020

Heteropoda davidbowie is a species of huntsman spider. Though rare, it has been found in parts of Malaysia, Singapore, Indonesia and possibly Thailand. (The uncertainty arises because it’s often mistaken for the similar-looking species Heteropoda javana.) In 2008 a German collector sent photos of his unusual-looking “pet” to Peter Jäger, an arachnologist at the Senckenberg Research Institute in Frankfurt. Consequently, and in common with most other living finds, David Bowie’s spider was discovered twice: once in the field, and once in the collection.

Bowie’s spider is famous, but not exceptional. Jäger has discovered more than 200 species of spider in the last decade, and names them after politicians, comedians and rock stars to highlight our ecological plight. Other researchers find more pointed ways to further the same cause. In the first month of Donald Trump’s administration, Iranian-Canadian entomologist Vazrick Nazari discovered a moth with a head crowned with large, blond comb-over scales. There’s more to Neopalpa donaldtrumpi than a striking physical resemblance: it lives in a federally protected area around where the border wall with Mexico is supposed to go. Cue headlines.

Species are becoming extinct 100 times faster than they did before modern humans arrived. This makes reading a book about the naming of species a curiously queasy affair. Nor is there much comfort to be had in evolutionary ecologist Stephen Heard’s observation that, having described 1.5 million species, we’ve (at very best) only recorded half of what’s out there. There is, you may recall, that devastating passage in Cormac McCarthy’s western novel Blood Meridian in which Judge Holden meticulously records a Native American artifact in his sketchbook — then destroys it. Given that to discover a species you must, by definition, invade its environment, Holden’s sketch-and-burn habit appears to be a painfully accurate metonym for what the human species is doing to the planet. Since the 1970s (when there used to be twice as many wild animals as there are now) we’ve been discovering and endangering new species in almost the same breath.

Richard Spruce, one of the Victorian era’s great botanical explorers, who spent 15 years exploring the Amazon from the Andes to its mouth, is a star of this short, charming book about how we have named and ordered the living world. No detail of his bravery, resilience and grace under pressure comes close to the eloquence of this passing quotation, however: “Whenever rains, swollen streams, and grumbling Indians combined to overwhelm me with chagrin,” he wrote in his account of his travels, “I found reason to thank heaven which had enabled me to forget for the moment all my troubles in the contemplation of a simple moss.”

Heard, who is based in Canada, explains how extraordinary amounts of curiosity have been codified to create a map of the living world. The legalistic-sounding codes by which species are named are, it turns out, admirably egalitarian, ensuring that the names amateurs give species are just as valid as those of professional scientists.

Formal names are necessary because of the difficulty we have in distinguishing between similar species. Common names run into this difficulty all the time. There are too many of them, so the same species gets different names in different languages. At the same time, there aren’t enough of them, so that, as Heard points out, “Darwin’s finches aren’t finches, African violets aren’t violets, and electric eels aren’t eels”; robins, blackbirds and badgers are entirely different animals in Europe and North America; and virtually every flower has at one time or another been called a daisy.

Names also tend, reasonably enough, to be descriptive. This is fine when you’re distinguishing between, say, five different types of fish. When there are 500 different fish to sort through, however, absurdity beckons. Heard lovingly transcribes the pre-Linnaean species name of the English whiting, formulated around 1738: “Gadus, dorso tripterygio, ore cirrato, longitudine ad latitudinem tripla, pinna ani prima officulorum trigiata”. So there.

It takes nothing away from the genius of Swedish physician Carl Linnaeus, who formulated the naming system we still use today, to say that he came along at the right time. By Linnaeus’s day, it was possible to look things up. Advances in printing and distribution had made reference works possible. Linnaeus’s innovation was to decouple names from descriptions. And this, as Heard reveals in anecdote after anecdote, is where the fun now slips in: the mythopoeic cool of the baboon Papio anubis, the mischievous smarts of the beetle Agra vation, the nerd celebrity of the lemur Avahi cleesei.

Heard’s taxonomy of taxonomies makes for somewhat thin reading; this is less a book, more a dozen interesting magazine articles flying in close formation. But its close focus, bringing to life minutiae of both the living world and the practice of science, is welcome.

I once met Michael Land, the neurobiologist who figured out how the lobster’s eye works. He told me that the trouble with big ideas is that they get in the way of the small ones. Heard’s lesson, delivered with such a light touch, is the same. The joy, and much of the accompanying wisdom, lies in the detail.