Modernity in Mexico

Reading Connected: How a Mexican village built its own cell phone network by Roberto J. González for New Scientist, 14 October 2020

In 2013 the world’s news media fell in love with Talea, a Mexican pueblo (population 2400) in the Rincón, a remote corner of northern Oaxaca. América Móvil, the telecommunications giant that ostensibly served their area, had refused to provide them with a mobile phone service, so the plucky Taleans had built a network of their own.

Imagine it: a bunch of indigenous maize growers, subsistence farmers with little formal education, besting and embarrassing Carlos Slim, América Móvil’s owner and, according to Forbes magazine at the time, the richest person in the world!

The full story of that short-lived, homegrown network is more complicated, says Roberto González in his fascinating, if somewhat self-conscious, account of rural innovation.

Talea was never a backwater. A community that survives Spanish conquest and resists 500 years of interference by centralised government may become many things, but “backward” is not one of them.

On the other hand, González harbours no illusions about how communities, however sophisticated, might resist the pall of globalising capital — or why they would even want to. That homogenising whirlwind of technology, finance and bureaucracy also brings with it roads, hospitals, schools, entertainment, jobs, and medicine that actually works.

For every outside opportunity seized, however, an indigenous skill must be forgotten. Talea’s farmers can now export coffee and other cash crops, but many fields lie abandoned, as the town’s youth migrate to the United States. The village still tries to run its own affairs — indeed, the entire Oaxaca region staged an uprising against centralised Mexican authority in 2006. But the movement’s brutal repression by the state augurs ill for the region’s autonomy. And if you’ve no head for history, well, just look around. Pueblos are traditionally made of mud. It’s a much easier, cheaper, more repairable and more ecologically sensitive material than the imported alternatives. Still, almost every new building here is made of concrete.

In 2012, Talea gave its backing to another piece of imported modernity — a do-it-yourself phone network, assembled by Peter Bloom, a US-born rural development specialist, and Erick Huerta, a Mexican telecommunications lawyer. Both considered access to mobile phone networks and the internet to be a human right.

Also helping — and giving the lie to the idea that the network was somehow homegrown — were “Kino”, a hacker who helped indigenous communities evade state controls, and Minerva Cuevas, a Mexican artist best known for hacking supermarket bar codes.

By 2012 Talea’s telephone network was running on an open-source mobile phone network program called OpenBTS (BTS stands for base transceiver station). Mobiles within range of a base station can communicate with each other, and connect globally over the internet using VoIP (Voice over Internet Protocol). All the network needed was an electrical power socket and an internet connection — utilities Talea had enjoyed for years.
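The principle is simple enough to sketch. The toy below is my illustration, not anything from González’s book or from OpenBTS itself: it mimics the VoIP idea at its crudest, with speech digitised, chopped into 20-millisecond frames and sent over an ordinary internet socket. Every address, port and parameter here is a hypothetical stand-in.

```python
# A minimal, illustrative sketch of the VoIP idea: digitised voice,
# chopped into short frames and carried over ordinary UDP/IP.
# This is a toy, not OpenBTS; the address, port and frame sizes
# below are hypothetical stand-ins.
import socket
import time

DEST = ("127.0.0.1", 5004)       # hypothetical receiver address
FRAME_MS = 20                    # typical telephony frame length
SAMPLE_RATE = 8000               # classic 8 kHz narrowband telephony
BYTES_PER_FRAME = SAMPLE_RATE * FRAME_MS // 1000  # 160 one-byte samples

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
frame = bytes(BYTES_PER_FRAME)   # silence, standing in for microphone audio

for seq in range(50):            # one second of "speech"
    # Real VoIP wraps each frame in an RTP header (sequence number,
    # timestamp and so on); a bare 2-byte counter is faked here.
    packet = seq.to_bytes(2, "big") + frame
    sock.sendto(packet, DEST)
    time.sleep(FRAME_MS / 1000)  # pace the frames in (rough) real time
```

Scale that idea up with real codecs and call signalling and a base station becomes little more than a radio front-end for the internet, which is why a village with mains power and a broadband line could run its own network.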

The network never worked very well. Whenever the internet went down, which it did occasionally, the whole town lost its mobile coverage. Recently the phone company Movistar has moved in with an aggressive plan to provide the region with regular (if costly) commercial coverage. Talea’s autonomous network idea lives on, however, in a cooperative organization of community cell phone networks that today represents nearly seventy pueblos across several regions of Oaxaca.

Connected is an unsentimental account of how a rural community takes control (even if only for a little while) over the very forces that threaten its cultural existence. Talea’s people are dispersing ever more quickly across continents and platforms in search of a better life. The “virtual Taleas” they create on Facebook and other sites to remember their origins are touching, but the fact remains: 50 years of development have done more to unravel a local culture than 500 years of conquest.

Nuanced and terrifying at the same time

Reading The Drone Age by Michael J. Boyle for New Scientist, 30 September 2020

Machines are only as good as the people who use them. Machines are neutral — just a faster, more efficient way of doing something that we always intended to do. That, anyway, is the argument often wielded by defenders of technology.

Michael Boyle, a professor of political science at La Salle University in Philadelphia, isn’t buying: “the technology itself structures choices and induces changes in decision-making over time,” he explains, as he concludes his concise, comprehensive overview of the world the drone made. In everything from commerce to warfare, spycraft to disaster relief, our menu of choices “has been altered or constrained by drone technology itself”.

Boyle manages to be nuanced and terrifying at the same time. At one moment he’s pointing out the formidable practical obstacles in the way of anyone launching a major terrorist drone attack. In the next, he’s explaining why political assassinations by drone are just around the corner. Turn a page setting out the moral, operational and legal constraints keenly felt by upstanding US military drone pilots, and you’re confronted by their shadowy handlers in government, who operate with virtually no oversight.

Though grounded in just the right level of technical detail, The Drone Age describes not so much the machines themselves as the kind of thinking they’ve ushered in: an approach to problems that no longer distinguishes between peace and war.

In some ways this is a good thing. Assuming that war is inevitable, what’s not to welcome about a style of warfare that involves working through a kill list, rather than exterminating a significant proportion of the enemy’s population?
Well, two things. For US readers, there’s the way a few careful drone strikes proliferated, under Obama and especially under Trump, into a global counter-insurgency air platform. And for all of us, there’s the way peacetime living is affected, too. “It is hard to feel like a human… when reduced to a pixelated dot under the gaze of a drone,” Boyle writes. If the pool of information gathered about us expands, but not the level of understanding or sympathy for us, where then is the positive for human society?

Boyle brings proper philosophical thinking to our relationship with technology. He’s particularly indebted to the French philosopher Jacques Ellul, whose The Technological Society (1964) transformed the way we think about machines. Ellul argued that when we apply technology to a problem, we adopt a mode of thinking that emphasizes efficiency and instrumental rationality, but also dehumanizes the problem.
Applying this lesson to drone technology, Boyle writes: “Instead of asking why we are using aircraft for a task in the first place, we tend to debate instead whether the drone is better than the manned alternative.”

This blinkered thinking, on the part of their operators, explains why drone activities almost invariably alienate the very people they are meant to benefit: non-combatants, people caught up in natural disasters, the relatively affluent denizens of major cities. Indeed, the drone’s ability to intimidate seems on balance to outweigh every other capability.

The UN has been known to fly unarmed Falco surveillance drones low to the ground to deter rebel groups from gathering. If you adopt the kind of thinking Ellul described, then this must be a good thing — a means of scattering hostiles, achieved efficiently and safely. In reality, there’s no earthly reason to suppose violence has been avoided: only redistributed (and let’s not forget how al-Qaeda, decimated by constant drone strikes, has reinvented itself as a global internet brand).

Boyle warns us at the start that different models of drone vary so substantially “that they hardly look like the same technology”. And yet The Drone Age keeps this heterogeneous flock of disruptive technologies together long enough to give it real historical and intellectual coherence. If you read one book about drones, this is the one. But it is just as valuable on surveillance, the rise of information warfare, and the way the best intentions can turn the world we knew on its head.

A private search for extraterrestrial intelligence

Watching John Was Trying to Contact Aliens for New Scientist, 27 August 2020

You have to admire Netflix’s ambition. As well as producing Oscar-winning short documentaries of its own (The White Helmets won in 2017; Period. End of Sentence. won in 2019), the streaming giant makes a regular effort to bring festival-winning factual films to a global audience.

The latest is John Was Trying to Contact Aliens by New York-based UK director Matthew Killip, which won the Jury Award for a non-fiction short film at this year’s Sundance festival in Utah. In little over 15 minutes, it manages to turn the story of John Shepherd, an eccentric inventor who spent 30 years trying to contact extraterrestrials by broadcasting music millions of kilometres into space, into a tear-jerker of epic (indeed, cosmological) proportions.

Never much cared for by his parents, Shepherd was brought up by adoptive grandparents in rural Michigan. A fan of classic science-fiction shows like The Outer Limits and The Twilight Zone, Shepherd never could shake off the impression a UFO sighting had made on him as a child, and in 1972 the 21-year-old set about designing and constructing electronic equipment to launch a private search for extraterrestrial intelligence. His first set-up, built around an ultra-low frequency radio transmitter, soon expanded to fill over 100 square metres of his long-suffering grandparents’ home. It also acquired an acronym: Project STRAT – Special Telemetry Research And Tracking.

A two-storey high, 1000-watt, 60,000-volt, deep-space radio transmitter required a house extension – and all so Shepherd could beam jazz, reggae, Afro-pop and German electronica into the sky for hours every day, in the hope any passing aliens would be intrigued enough to come calling.

It would have been the easiest thing in the world for Killip to play up Shepherd’s eccentricity. Until now, Shepherd has been a folk hero in UFO-hunting circles. His photo portrait, surrounded by bizarre broadcasting kit of his own design, appears in Douglas Curran’s In Advance of the Landing: Folk concepts of outer space – the book TV producer Chris Carter says he raided for the first six episodes of his series The X-Files.

Instead, Killip listens closely to Shepherd, discovers the romance, courage and loneliness of his life, and shapes it into a paean to our ability to out-imagine our circumstances and overreach our abilities. There is something heartbreakingly sad, as well as inspiring, about the way Killip pairs Shepherd’s lonely travails in snow-bound Michigan with footage, assembled by teams of who knows how many hundreds, from the archives of NASA.

Shepherd ran out of money for his project in 1998, and having failed to make a connection with ET, quickly found a life-changing connection much closer to home.

I won’t spoil the moment, but I can’t help but notice that, as a film-maker, Killip likes these sorts of structures. In one of his earlier works, The Lichenologist, about Kerry Knudsen, curator of lichens at the University of California, Riverside, Knudsen spends most of the movie staring at very small things before we are treated to the money shot: Knudsen perched on top of a mountain, whipped by the wind and explaining how his youthful psychedelic experiences inspired a lifetime of intense visual study. It is a shot that changes the meaning of the whole film.

Flame brightly and flame out

Reading Kindred by Rebecca Wragg Sykes for New Scientist, 19 August 2020

How we began to unpick our species’ ancient past in the late 19th century is an astounding story, but not always a pretty one. As well as gaining tremendous insights into the age of Earth and how life evolved, scholars also entertained astonishingly bad ideas about superiority.

Some of these continue today. Why do we assume that Neanderthals, who flourished for 400,000 years, were somehow inferior to Homo sapiens or less fit to survive?

In Kindred, a history of our understanding of Neanderthals, Rebecca Wragg Sykes separates perfectly valid and reasonable questions – for example, “why aren’t Neanderthals around any more?” – from the thinking that casts our ancient relatives as “dullard losers on a withered branch of the family tree”.

An expert in palaeolithic archaeology, with a special interest in the cognitive aspects of stone tool technologies, Sykes provides a fascinating and detailed picture of a field transformed almost out of recognition over the past thirty years. New technologies involve everything from gene sequencing, isotopes and lasers to powerful volumetric algorithms. Well-preserved sites are now not merely dug and brushed: they are scanned and sniffed. High-powered optical microscopes pick out slice and chop marks, electron beams trace the cross-sections of scratches at the nano-scale, and rapid collagen identification techniques determine the type of animal from even tiny bone fragments.

The risk with any new forensic tool is that, in our excitement, we over-interpret the results it throws up. As Sykes wisely says, “A balance must be struck between caution and not ignoring singular artefacts simply because they’re rare or wondrous.”

Many gushing media stories about our ancient relatives don’t last beyond the first news cycle: though Neanderthals may have performed some funerary activity, they didn’t throw flowers on their loved ones’ graves, or decorate their remains in any way. Other stories, though, continue to accumulate a weight of circumstantial evidence. We’ve known for some years that some Neanderthals actually tanned leather; now it seems they may also have spun thread.

An exciting aspect of this book is the way it refreshes our ideas about our own place in hominin evolution. Rather than congratulating other species when they behave “like us”, Sykes shows that it is much more fruitful to see how human talents may have evolved from behaviours exhibited by other species. Take the example of art. We may ask whether the circular stone assemblies, discovered in a cave near Bruniquel in southern France in 2016, were meant by their Neanderthal creators as monuments. We may wonder about the significance of the Neanderthal handprints and ladder designs painted on the walls of three caves in Spain. In both cases, we’d be asking the wrong questions, Sykes says: while undoubtedly striking, Neanderthal art “might not be a massive cognitive leap for hominins who probably already understood the idea of representation.” Animal footprints are effectively symbols already, and even simple tracking “requires an ‘idealised’ form to be kept in mind.”
Captive chimpanzees, given painting materials, enjoy colouring and marking surfaces, though they’re not in the least bit invested in the end result of their labours. So the significance and symbolism of Neanderthal art may simply be that Neanderthals had fun making it.

The Neanderthals of Kindred are not cadet versions of ourselves. They don’t perform “primitive burials”, and they don’t make “proto-art”. They had their own needs, urges, enjoyments, and strategies for survival.

They were not alone and, best of all, they have not quite vanished. Neanderthal nuclear DNA contains glimmers of very ancient encounters between them and other hominin species. Recent research suggests that interbreeding between Neanderthals and Denisovans, and between Neanderthals and Homo sapiens, was effectively the norm. “Modern zoology’s concept of allotaxa may be more appropriate for what Neanderthals were to us,” Sykes explains. Like modern cattle and yaks, we were closely related species that varied in bodies and behaviours, yet could also reproduce.

Neanderthals were never very many. “At any point in time,” Sykes says, “there may have been fewer Neanderthals walking about than commuters passing each day through Clapham Junction, London’s busiest train station.” With dietary demands that took a monstrous toll on their environment, they were destined to flame brightly and flame out. That doesn’t make them less than us. It simply makes them earlier.

They were part of our family, and though we carry some part of them inside us, we will never see their like again. This, for my money, is Sykes’s finest achievement. Seeing Neanderthals through her eyes, we cannot but mourn their passing.

“Cut the cord!”

Watching Alice Winocour’s film Proxima for New Scientist, 31 July 2020

THE year before Apollo 11’s successful mission to the moon, Robert Altman directed James Caan and Robert Duvall in Countdown. The 1968 film stuck to the technology of its day, pumping up the drama with a somewhat outlandish mission plan: astronaut Lee Stegler and his shelter pod are sent to the moon’s surface on separate flights and Stegler must find the shelter once he lands if he is to survive.

The film played host to characters you might conceivably bump into at the supermarket: the astronauts, engineers and bureaucrats have families and everyday troubles not so very different from your own.

Proxima is Countdown for the 21st century. Sarah Loreau, an astronaut played brilliantly by Eva Green, is given a last-minute opportunity to join a Mars precursor mission to the International Space Station. Loreau’s training and preparation are impressively captured on location at European Space Agency facilities in Cologne, Germany – with a cameo from French astronaut Thomas Pesquet – and in Star City, the complex outside Moscow that is home to the Yuri Gagarin Cosmonaut Training Center. She is ultimately headed to launch from Baikonur in Kazakhstan.

Comparing Proxima with Countdown shows how much both cinema and the space community have changed in the past half-century. There are archaeological traces of action-hero melodramatics in Proxima, but they are the least satisfying parts of the movie. Green’s Loreau is a credible astronaut and a good mother, pushed to extremes on both fronts and painfully aware that she chose this course for herself. She can’t be all things to all people all of the time and, as she learns, there is no such thing as perfect.

Because Proxima is arriving late – its launch was delayed by the covid-19 lockdown – advances in space technology have already somewhat gazumped Georges Lechaptois’s meticulous location cinematography. I came to the film still reeling from watching the Crew Dragon capsule Endeavour lift off from Kennedy Space Center on 20 May.

That crewed launch was the first of its kind from US soil since NASA’s space shuttle was retired in 2011 and looked, from the comfort of my sofa, about as eventful as a ride in an airport shuttle bus. So it was hard to take seriously those moments in Proxima when taking off from our planet’s surface is made the occasion for an existential crisis. “You’re leaving Earth!” exclaims family psychologist Wendy (Sandra Hüller) at one point, thoroughly earning the look of contempt that Loreau shoots at her.

Proxima’s end credits include endearing shots of real-life female astronauts with their very young children – which does raise a bit of a problem. The plot largely focuses on the impact of bringing your child to work when you spend half your day in a spacesuit at the bottom of a swimming pool. “Cut the cord!” cries the absurdly chauvinistic NASA astronaut Mike Shannon (Matt Dillon) when Loreau has to go chasing after her young daughter.

Yet here is photographic evidence that suggests Loreau’s real-life counterparts – Yelena Kondakova, Ellen Ochoa, Cady Coleman and Naoko Yamazaki – managed perfectly well on multiple missions without all of Proxima’s turmoil. Wouldn’t we have been better off seeing the realities they faced rather than watching Loreau, in the film’s final moments, break Baikonur’s safety protocols in order to steal a feel-good, audience-pandering mother-daughter moment?

For half a century, movies have struggled to keep up with the rapidly changing realities of the space sector. Proxima, though interesting and boasting a tremendous central performance from Green, proves to be no more relevant than its forebears.

 

A shockingly dirty idea

Watching UCP/Amblin’s production of Brave New World for New Scientist, 15 July 2020

THE 20th century produced two great British dystopias. The more famous one is 1984, George Orwell’s tale of a world unified into a handful of warring blocs run by dictators.

The other, Brave New World, was written in the space between world wars by the young satirist Aldous Huxley. It had started out as a send-up of H. G. Wells’s utopian works – novels such as Men Like Gods (1923). Then Huxley visited the US, and what he made of society there – brash, colourful, shallow and self-obsessed – set the engines of his imagination speeding.

The book is Huxley’s idea of what would happen if the 1930s were to run on forever. Embracing peace and order after the bloody chaos of the first world war, people have used technology to radically simplify their society. Humans are born in factories, designed to fit one of five predestined roles. Epsilons, plied with chemical treatments and deprived of oxygen before birth, perform menial functions. Alphas, meanwhile, run the world.

In 1984, everyone is expected to obey the system; in Brave New World, everyone has too much at stake in the system to want to break it. Consumption is pleasurable, addictive and a duty. Want is a thing of the past and abstinence isn’t an option. The family – that eternal thorn in the side of totalitarian states – has been discarded, and with it all intimacy and affection. In fact, no distinct human emotion has escaped this world’s smiley-faced onslaught of “soma” (a recreational drug), consumerism and pornography. There is no jealousy here, no rage, no sadness.

The cracks only show if you aspire to better things. Yearn to be more than you already are, and you won’t get very far. In creating a society without want, the Alphas have made a world without hope.

Huxley’s dystopia has now made it to the small screen. Or the broad strokes have, at least. In the series, Alden Ehrenreich – best known for taking up the mantle of Han Solo in Solo: A Star Wars Story – plays John. Labelled a “savage” for living outside the walls of the World State, he encounters the Alpha Bernard Marx (Harry Lloyd) and Lenina Crowne (Jessica Brown Findlay), his Beta pal.

Bernard and Lenina are vacationing in Savage Lands, a theme park modelled a little too closely on Westworld in which people act out the supposedly sinful values of the old order for the entertainment of tourists. It is while they settle into their hotel room at the park that Lenina and Bernard suddenly realise they want to be alone together – a shockingly dirty idea in a world that has outlawed monogamy and marriage – and that “it could be our wedding night”.

“In Huxley’s book, characters were given a hard choice between freedom and happiness”

“We’re savages,” gasps Lenina, as it dawns on the two what they actually want. It is a scene so highly charged and sympathetically played that you only wish the rest of the show had lived up to it. The problem with Brave New World is that it is trying to be Huxley’s future in some scenes and trying to be our future in others. The two do not mix well.

Some of Huxley’s ideas about the future loom over us still. The potential eugenic applications of CRISPR gene editing keep many a medical ethicist awake at night. In other respects, however, Huxley’s dystopia has been superseded by new threats. Artificial intelligence is changing our relationship with expertise, so who needs human Alphas? At the other end of the social scale, Epsilons would struggle to find anything to do in today’s automated factories.

Squeezed by our technology into middle-ranking roles (in Huxley’s book, we would be Betas and Gammas), we aren’t nearly as homogeneous and pliable as Huxley imagined we would be. Information technology has facilitated, rather than dampened, our innate tribalism. The difference between the haves and have-nots in our society is infocentric rather than genetic.

In Huxley’s book, the lands left for those deemed savages featured an unreconstructed humanity full of violence and sorrow. Characters were given a hard choice between freedom and happiness. None of that toughness makes it to the screen. At least, not yet.

The TV series is a weirdly weightless offering: a dystopia without lessons for the present day. It is as consumable and addictive as a capsule of soma, but no more nutritious.

In praise of tiny backpacks

Reading The Bird Way by Jennifer Ackerman for New Scientist, 17 June 2020

Visit the Australian National Botanic Gardens in Canberra, and you may stumble upon an odd sight: a human figure, festooned with futuristic-looking monitoring gear. Is it a statue? No: when children poke it (this happens a lot) the statue blinks.

Meet Jessica McLachlan, a researcher at the Australian National University, at work studying the fine, never-before-detected details of bird behaviour. The gear she wears is what it takes to observe the world as birds themselves see it. Not to put too fine a point on it, birds are a lot faster than we are.

Bird brains are miracles of miniaturisation. Their neurons are smaller, more numerous, and more densely packed. They differ architecturally too, cramming more and faster processing power into a smaller space.

This means that in the bird world, things happen fast — sometimes too fast for us to see. (Unless you film blue-capped cordon-bleus at 300 frames per second, how could you possibly know that they tap-dance in time with their own singing?)

Modern recording equipment enables us to study an otherwise invisible world. For example, by strapping tiny backpacks to seabirds, and correlating their flight with known ocean activities, we’ve discovered that birds have a quite extraordinary sense of smell, following krill across apparently featureless horizons that are, for them, “elaborate landscapes of eddying odor plumes.”

Science and nature writer Jennifer Ackerman’s fresh, reinvigorated account of the world of birds is arranged on traditional lines (sections cover her subjects’ singing, work, play, love and parenting), but it is informed, and transformed, by accounts of cybernetically enhanced field studies, forensic analyses and ambitious global collaboration. She has travelled far and talked to many, she is generous to a fault with her academic sources, and her descriptions of her visits, field trips and adventures are engaging, but never obtrusive.

Her account centres on the bird life of Australia. This is reasonable: bird song began here. Songbirds, parrots and pigeons evolved here. In Australia, birds fill more ecological niches, and are smarter and longer-lived.

Like every ornithological writer before her, Ackerman is besotted by the sheer variety of her subjects. One of the most endearing passages in her book compares parrots and corvids. Both are intelligent, highly social species, but, after 92 million years of evolutionary separation, the similarities stop there. Ravens are frightened of novelty. Keas lap it up. Keas, confronted by a new researcher, will abandon a task to go play with the stranger. Ravens will simply wig out. Ravens have a strict feeding hierarchy. Curious male keas will good-naturedly stuff food down an unfamiliar youngster’s gullet to the point where it has to fend them off.

Ackerman’s account is often jaw-dropping, and never more shocking than when she assembles the evidence for the cultural sophistication of bird song. Birds decode far more from sounds than we do, and until recently we’ve been deaf to their acoustic complexity. Japanese tits use 11 different notes in their songs, and it’s the combination of notes that encodes information. Swap two notes around, and you elicit different responses. If this isn’t quite syntax, it’s something very like it. The drongo has absolute conscious control over its song, using up to 45 mimicked alarm calls to frighten other species, such as meerkats, into dropping their lunch — and it will target specific warning calls at individuals so they don’t twig what’s going on.

Meanwhile, many different species of bird worldwide, from Australia to Africa to the Himalayas, appear to have developed a universal, and universally comprehensible, signal to warn of the approach of brood parasites (cuckoos and the like).

If the 20th century was the golden age of laboratory study, the 21st is shaping up to become a renaissance for the sorts of field studies Charles Darwin would recognise. Now that cybernetically enhanced researchers like Jessica McLachlan can follow individuals, we have a chance to gauge and understand the intelligence exhibited by the animals around us. Intelligence en masse is, after all, really hard to spot. It doesn’t suit tabulation, or statistical analysis. It doesn’t make itself known from a distance. Intelligent behaviour is unusual. It’s novel. It takes a long time to spot.

If birds are as intelligent as so many of the stories in Ackerman’s eye-opening book suggest, then this, of course, may only be the start of our problems. In Sweden, corvid researchers Mathias and Helena Osvath have befriended a raven who turns up for their experiments, aces them, then flaps off again. “In the ethics section of grant applications,” says Mathias, “it’s difficult to explain…”

Humanity unleashed

Reading Rutger Bregman’s Humankind: A hopeful history for New Scientist, 10 June 2020

In 1651 English philosopher Thomas Hobbes startled the world with Leviathan, an account of the good and evil lurking in human nature. Hobbes argued that people, left to their own devices, were naturally vicious. (Writing in the aftermath of Europe’s cataclysmic Thirty Years’ War, he had the evidence all around him.) Ultimately, though, Hobbes’s vision was positive. Humans are also naturally gregarious. Gathering in groups, eventually we become more together than we were apart. We become villages, societies, whole civilisations.

Hobbes’s argument can be taken in two ways. We can glory in what we have built over the course of generations. Or we can live in terror of that future moment when the thin veneer of our civilisation cracks and lets all the devils out.

Even as I was writing this review, I came across the following story. In April, at the height of the surge in coronavirus cases, Americans purchased more guns than at any other point since the FBI began collecting data over 20 years ago. I would contend that these are not stupid people. They are, I suspect, people who have embraced a negatively Hobbesian view of the world, and are expecting the apocalypse.

Belief in innate human badness is self-fulfilling. (Bregman borrows from medical jargon and calls it a nocebo: a negative expectation that makes one’s circumstances progressively worse.) And we do seem to try everything in our power to think the worst of ourselves. For instance, we give our schoolchildren William Golding’s 1954 novel Lord of the Flies to read (and fair enough — it’s a very good book) but nobody thinks to mention that the one time boys really were trapped on a desert island for over a year — on a rocky Polynesian atoll without fresh water, in 1965 — they survived, stayed fit and healthy, successfully set a lad’s broken leg, and formed friendships that have lasted a lifetime. Before Bregman came along you couldn’t even find this story on the internet.

From this anecdotal foundation, Bregman assembles his ferocious argument, demolishing one Hobbesian shibboleth after another. Once the settlers of Easter Island had chopped down all the trees on their island (the subject of historian Jared Diamond’s bestselling 2005 book Collapse), their civilisation did not fall apart. It thrived — until European voyagers arrived, bringing diseases and the slave trade. When Catherine Susan Genovese was murdered in New York City on 13 March 1964 (the notorious incident which added the expression “bystander effect” to the psychological lexicon), her neighbours did not watch from out of their windows and do nothing. They called the police. One neighbour rushed out into the street and held her while she was dying.

Historians and reporters can’t be trusted; neither, alas, can scientists. Philip Zimbardo’s Stanford prison experiment, conducted in August 1971, was supposed to have spun out of control in less than two days, as students playing prison guards set about abusing and torturing fellow students cast in the role of prisoners. Almost everything that has been written about the experiment is not merely exaggerated, it’s wrong. When in 2001 the BBC restaged the experiment, the guards and the prisoners spent the whole time sitting around drinking tea together.

Bregman also discusses the classic “memory” experiment by Stanley Milgram, in which a volunteer is persuaded to electrocute a person nearly to death (an actor, in fact, and in on the wheeze). The problem here is less the experimental design and more the way the experiment was interpreted.

Early accounts took the experiment to mean that people are robots, obeying orders unthinkingly. Subsequent close study of the transcripts shows something rather different: that people are desperate to do the right thing, and their anxiety makes them frighteningly easy to manipulate.

If we’re all desperate to be good people, then we need a new realism when it comes to human nature. We can’t any longer assume that because we are good, those who oppose us must be bad. We must learn to give people a chance. We must learn to stop manipulating people the whole time. From schools to prisons, from police forces to political systems, Bregman visits projects around the world that, by behaving in ways that can seem surreally naive, have resolved conflicts, reformed felons, encouraged excellence and righted whole economies.

This isn’t an argument between left and right, between socialist and conservative; it’s about what we know about human nature and how we can accommodate a better model of it into our lives. With Humankind Bregman moves from politics, his usual playground, into psychological, even spiritual territory. I am fascinated to know where his journey will lead.

 

Cog ergo sum

Reading Matthew Cobb’s The Idea of the Brain for New Scientist, 15 April 2020

Ask a passer-by in 2nd-century Rome where consciousness resided — in the heart or in the head — and he would surely say: in the heart. The surgeon-philosopher Galen of Pergamon had other ideas. During one public demonstration he had someone press upon the exposed brain of a pig, which promptly (and mercifully) passed out. Letting go brought the pig back to consciousness.

Is the brain one organ, or many? Are our mental faculties localised in the brain? Some 1600 years after Galen, a Parisian gentleman tried to blow his brains out with a pistol. Instead he shot away his frontal bone, leaving the anterior lobes of his brain bare but undamaged. He was rushed to the Hôpital St. Louis, where Ernest Aubertin spent a few vain hours trying to save his life. Aubertin discovered that if he pressed a spatula on the patient’s brain while he was speaking, his speech “was suddenly suspended; a word begun was cut in two. Speech returned as soon as pressure was removed,” Aubertin reported.

Does the brain contain all we are? Eighty years after Aubertin, Montreal neurosurgeon Wilder Penfield was carrying out hundreds of brain operations to relieve chronic temporal-lobe epilepsy. Using delicate electrodes, he would map the safest cuts to make — ones that would not excise vital brain functions. For the patient, stimulation of the tiniest regions summoned the strangest experiences. A piano being played. A telephone conversation between two family members. A man and a dog walking along a road. They weren’t memories, so much as dreamlike glimpses of another world.

Cobb’s history of brain science will fascinate readers quite as much as it occasionally horrifies. Cobb, a zoologist by training, has focused for much of his career on the sense of smell and the neurology of the humble fruit fly maggot. The Idea of the Brain sees him coming up for air, taking in the big picture before diving once again into the minutiae of his profession.

He makes a hell of a splash, too, explaining how the analogies we use to describe the brain both enrich our understanding of that mysterious organ, and hamstring our further progress. He shows how mechanical metaphors for brain function lasted well into the era of electricity. And he explains why computational metaphors, though unimaginably more fertile, are now throttling his science.

Study the brain as though it were a machine and in the end (and after much good work) you will run into three kinds of trouble.

First you will find that reverse engineering very complex systems is impossible. In 2017 two neuroscientists, Eric Jonas and Konrad Paul Kording, employed the techniques they normally used to analyse the brain to study the MOS 6507 processor — a chip found in computers from the late 1970s and early 1980s that enabled machines to run video games such as Donkey Kong, Space Invaders or Pitfall. Despite their powerful analytical armoury, and despite the fact that there is a clear explanation for how the chip works, they admitted that their study fell short of producing “a meaningful understanding”.

Another problem is the way the meanings of technical terms expand over time, warping the way we think about a subject. The French neuroscientist Romain Brette has a particular hatred for that staple of neuroscience, “coding”, a term first invoked by the physiologist Edgar Adrian in the 1920s in a strictly technical sense, in which there is a link between a stimulus and the activity of the neuron. Today almost everybody thinks of neural codes as representations of that stimulus, which is a real problem, because it implies that there must be an ideal observer or reader within the brain, watching and interpreting those representations. It may be better to think of the brain as constructing information, rather than simply representing it — only we have no idea (yet) how such an organ would function. For sure, it wouldn’t be a computer.

Which brings us neatly to our third and final obstacle to understanding the brain: we take far too much comfort and encouragement from our own metaphors. Do recent advances in AI bring us closer to understanding how our brains work? Cobb’s hollow laughter is all but audible. “My view is that it will probably take fifty years before we understand the maggot brain,” he writes.

One last history lesson. In the 1970s, twenty years after Penfield’s electrostimulation studies, Michael Gazzaniga, a cognitive neuroscientist at the University of California, Santa Barbara, studied the experiences of people whose brains had been split down the middle in a desperate effort to control their epilepsy. He discovered that each half of the brain was, on its own, sufficient to produce a mind, albeit with slightly different abilities and outlooks in each half. “From one mind, you had two,” Cobb remarks. “Try that with a computer.”

Hearing the news brought veteran psychologist William Estes to despair: “Great,” he snapped, “now we have two things we don’t understand.”

Pollen count

THEY are red, they have stalks that look like eels, and no leaves. But Karl, the boss of the laboratory – played by the unsettling David Wilmot – has his eye on them for the forthcoming flower fair. He tells visiting investors that these genetically engineered creations are “the first mood-lifting, antidepressant, happy plant”.

Ben Whishaw’s character, Chris, smirks: “You’ll love this plant like your own child.”

Chris is in love with Alice, played by Emily Beecham, who is in love with her creations, her “Little Joes”, even to the point of neglecting her own son, Joe.

Owning and caring for a flower that, treated properly, will emit pollen that can induce happiness would surely be a good thing for these characters. But the plant has been bred to be sterile, and it is determined to propagate itself by any means necessary.

Little Joe is an exercise in brooding paranoia, and it feeds off some of the more colourful fears around the genetic modification of plants.

Kerry Fox plays Bella, whose disappointments and lack of kids seem to put her in the frame of mind to realise what these innocent-looking blooms are up to. “The ability to reproduce is what gives every living thing meaning!” she exclaims. Her colleagues might just be sceptical about this because she is an unhappy presence in the lab, or they may already have fallen under the sway of Little Joe’s psychoactive pollen.

Popular fears around GM – the sort that dominated newspapers and scuppered the industry’s experimental programmes in the mid-1990s – are nearly as old as the science of genetics itself.

At about the turn of the 20th century, agricultural scientists in the US combined inbred lines of maize and found that crop yields were radically increased. Farmers who bought the specially bred seed found that their yields tailed off in subsequent years, so it made sense to buy fresh seed yearly because the profits from bigger crops more than covered the cost of new seeds.

In the late 1990s, Monsanto, a multinational agribusiness, added “terminator” genes to the seed it was developing to prevent farmers resowing the product of the previous year’s crop. This didn’t matter to most farmers, but the world’s poorest, who still rely on replanting last year’s seed, were vociferous in their complaints, and a global scandal loomed.

Monsanto chose not, in the end, to commercialise its terminator technologies, but found it had already created a monster: an urban myth of thwarted plant fecundity that provides Jessica Hausner’s Little Joe with its science-fictional plot.

What does Little Joe’s pollen do to people? Is it a vegetal telepath, controlling the behaviour of its subjects? Or does it simply make the people who enjoy its scent happier, more sure of themselves, more capable of making healthy life choices? Would that be so terrible? As Karl says, “Who can prove the genuineness of feelings? Moreover, who cares?”

Well, we do, or we should. If, like Karl, we come to believe that the “soul” is nothing more than behaviour, then people could become zombies tomorrow and no one would notice.

Little Joe’s GM paranoia may set some New Scientist readers’ teeth on edge, but this isn’t, ultimately, what the movie is about. It is after bigger game: the nature of human freedom.