Cutequake

Reading Irresistible by Joshua Paul Dale for New Scientist, 15 November 2023

The manhole covers outside Joshua Dale’s front door sport colourful portraits of manga characters. Hello Kitty, “now one of the most powerful licensed characters in the world”, appears on road-construction barriers at the end of his road, alongside various cute cartoon frogs, monkeys, ducks, rabbits and dolphins. Dale lives in Tokyo, epicentre of a “cutequake” that has conquered mass media (the Pokémon craze, begun in 1996, has become arguably the highest-grossing media franchise of all time) and now encroaches, at pace, upon the wider civic realm. The evidence? Well, for a start there are those four-foot-high cutified police-officer mannequins standing outside his local police station…

Do our ideas of and responses to cute have a behavioural or other biological basis? How culturally determined are our definitions of what is and is not cute? Why is the depiction of cute on the rise globally, and why, of all places, did cute originate (as Dale ably demonstrates) in Japan?

Dale makes no bones about his ambition: he wants to found a brand-new discipline, a field of “cute studies”. His efforts are charmingly recorded in this first-person account that tells us a lot (and plenty that is positive) about the workings of modern academia. Dale’s interdisciplinary field will combine studies of domestication and neoteny (the retention of juvenile features in adult animals), embryology, the history of art, the anthropology of advertising and any number of other disparate fields in an effort to explain why we cannot help grinning foolishly at hyper-simplified line drawings of kittens.

Cute appearances are merely heralds of cute behaviour, and it’s this behaviour — friendly, clumsy, open, plastic, inventive, and mischievous — that repays study the most. A species that plays together, adapts together. Play bestows a huge evolutionary advantage on animals that can afford never to grow up.

But there’s the sting: for as long as life is hard and dangerous, animals can’t afford to remain children. Adult bonobos are playful and friendly, but then, bonobos have no natural predators. Their evolutionary cousins the chimpanzees have much tougher lives. You might get a decent game of checkers out of a juvenile chimp, but with the adults it’s an altogether different story.

The first list of cute things (in The Pillow Book), and the first artistic depictions of gambolling puppies and kittens (in the “Scroll of Frolicking Animals”) come from Japan’s Heian period, running from 794 to 1185 – a four-century-long period of peace. So what’s true at an evolutionary scale seems to have a strong analogue in human history, too. In times of peace, cute encourages affiliation.

If I asked you to give me an example of something cute, you’d most likely mention a cub or kitten or other baby animal, but Dale shows that infant care is only the most emotive and powerful social engagement that cute can release. Cute is a social glue of much wider utility. “Cuteness offers another way of relating to the entities around us,” Dale writes; “its power is egalitarian, based on emotion rather than logic and on being friendly rather than authoritarian.”

Is this welcome? I’m not sure. There’s a clear implication here that cute can be readily weaponised — a big-eyed soft-play Trojan Horse, there to emotionally nudge us into heaven knows what groupthink folly.

Nor, upon finishing the book, did I feel entirely comfortable with an aesthetic that, rather than getting us to take young people seriously, would rather reject the whole notion of maturity.

Dale, a cheerful and able raconteur, has written a cracking story here, straddling history, art, and some complex developmental science, and though he doesn’t say so, he’s more than adequately established that this is, after all, the way the world ends: not with a bang but a “D’awww!”

“Crude to the point of vulgarity, judgmental in the extreme, and bitterly punitive”

Reading The Age of Guilt by Mark Edmundson for New Scientist, 5 July 2023

In his Freudian analysis of what we might loosely term “cancel culture”, Mark Edmundson wisely chooses not to get into facile debates about which of the pioneering psychoanalyst’s ideas have or have not been “proved right”. What would that even mean? Psychology is not so much science as it is engineering, applying ideas and evidence to a purpose. Edmundson, an author and literary scholar, simply wants to suggest that Freud’s ideas might help us better understand our current cultural moment.

At the centre of Freud’s model of the personality sits the ego, the conscious bit of ourselves, the bit that thinks, and therefore is. Bracketing the ego are two components of the personality that are inaccessible to conscious awareness: the id and the super-ego. The id is the name Freud gives to all those drives that promote immediate individual well-being. Fancy a sandwich? A roll in the hay? A chance to clout your rival? That’s your id talking.

Much later, in an attempt to understand why so many of his clients gave themselves such a hard time (beating themselves up over trivia, calling themselves names, self-harming) Freud conceived the super-ego. This is the bit of us that warns us against misbehaviour, and promotes conformity to social norms. Anyone who’s spent time watching chimpanzees will understand why such machinery might evolve in an animal as ultra-social as Homo sapiens.

Casual descriptions of Freud’s personality model often characterise the super-ego as a sort of wise uncle, paternalistically ushering the cadet ego out of trouble.

But this, Edmundson says, is a big mistake. A power that, in each of us, watches, discovers and criticizes all our intentions is not a power to be taken lightly.

Edmundson argues that key cultural institutions evolved not just to regulate our appetites but also to give direction and structure to the super-ego. A priest might raise an eyebrow at your gluttony; but that same priest will relieve you of your self-hatred by offering you a simple atonement: performing it wipes your slate clean. Edmundson wonders what, in the absence of faith, can corral and direct the fulminations of our super-ego — which in this account is not so much a fount of idealism as a petulant, unrelenting and potentially life-threatening martinet, “crude to the point of vulgarity, judgmental in the extreme, and bitterly punitive.”

The result of unmet super-ego demands is sickness. “The super-ego punishes the ego and turns it into an anxious, frightened creature, a debilitatingly depressed creature, or both by turns,” Edmundson explains, and quotes a Pew Research study showing that, from 2007 to 2017, the percentage of 12-to-17-year-olds who had experienced a major depressive episode in the past year rose from 8 per cent to 13 per cent. Are these severely depressed teenagers “in some measure victims of the wholesale cultural repudiation of Freud”?

Arguments from intuition need a fairly hefty health warning slapped on them, but I defy you not to find yourself nodding along to more than a few of Edmundson’s philippics: for instance, how the internet became our culture’s chief manifestation of the super-ego, its loudest users bearing all the signs of possession, “immune to irony, void of humour, unforgiving, prone to demand harsh punishments.”

Half a century ago, the anthropologist Ernest Becker wrote a book, The Denial of Death, that hypothesised all manner of connections between society, behaviour and consciousness. Its informed and closely argued speculations inspired a handful of young researchers to test his ideas, and thereby revolutionise the field of experimental psychology. (An excellent book from 2015, The Worm at the Core, tells their story.)

In a culture that’s growing so pathologically judgmental, condemnatory, and punitive, I wonder whether The Age of Guilt can perform the same very valuable trick. I do hope so.

On not being a horrible person

Reading The Human Mind by Paul Bloom for New Scientist, 11 May 2023

Inspired, he tells us, by The Origin of the Universe, John Barrow’s 1994 survey of what was then known about cosmology, the Canadian American psychologist Paul Bloom set about writing an introductory tome of his own: a brief yet comprehensive guide to the human mind.

Emulating Barrow’s superb survey has been hard because, as Bloom cheekily points out, “the mysteries of space and time turn out to be easier for our minds to grasp than those of consciousness and choice.”

The first thing to say — though hardly the most interesting — is that Bloom nevertheless succeeds, covering everything from perception and behaviour to language and development; there’s even a small but very worthwhile foray into abnormal psychology. It’s an account that is positive, but never self-serving. Failures to reproduce some key studies, the field’s sometimes scandalous manipulation of statistics, and the once-prevailing assumption that undergrad volunteers could accurately represent the diversity of the entire human species are serious problems, dealt with seriously.

Of course Bloom does more than simply set out the contents of the stall (with the odd rotten apple here and there); he also explores psychology’s evolving values. He recalls his early behaviourist training, in a climate hostile to (then rather woolly) questions about consciousness. “If we were asked to defend our dismissal of consciousness,” he recalls, “we would point out that intelligence does not require sentience.”

Intelligence is no longer the field’s only grail, and consciousness is now front and centre in the science of the mind. This is not only a technical advance; it’s an ethical one. In 1789 Jeremy Bentham asked whether the law could ever refuse its protection to “any sensitive being”, and pointed out that “The question is not, Can [certain beings] reason?, nor Can they talk? but, Can they suffer?”

Suffering requires consciousness, says Bloom; understanding one enables us to tackle the other; so the shift in interest to consciousness itself is a welcome and humanising move.

This strong belief in the humanitarian potential of psychology allows Bloom to defend aspects of his discipline that often discomfort outside observers. He handles issues of environmental and genetic influences on the mind very well, and offers a welcome and robust defence of Alfred Binet’s 1905 invention, the measure of general intelligence or “intelligence quotient”. Bloom shows that the IQ test is as robust a metric as anything in social science. We know that a full half of us score less than 100 on that test; should this knowledge not fill us with humility and compassion? (Actually our responses tend to be more ambiguous. Bloom points out that Nazi commentators hated the idea of IQ because they thought Jews would score better than they themselves would.)

Bloom is concerned to demonstrate that minds do more than think. The privileging of thinking over feeling and intuiting and suffering is a mistake. “A lot depends on what is meant by ‘rational’,” Bloom writes. If you’re stepping outside and it’s raining and you don’t want to get wet, it’s rational to bring an umbrella. But rationality defined in this manner is separate from goodness. “Kidnapping a rich person’s child might be a rational way to achieve the goal of getting a lot of money quickly,” Bloom observes, “so long as you don’t have other goals, such as obeying the law and not being a horrible person.”

Bloom’s ultimate purpose is to explain how a robustly materialistic view of the mind is fully compatible with the existence of choice and morality and responsibility. This middle-of-the-road approach may disappoint intellectual storm-chasers, but the rest of us can be assured of an up-to-the-minute snapshot of the field, full of unknowns and uncertainties, yes, and speculations, and controversies — but guided by an ever-more rounded idea of what it is to be human.

A balloon bursts

Watching The Directors: five short films by Marcus Coates, for New Scientist, 31 August 2022

In a flat on the fifth floor of Chaucer House, a post-war social housing block in London’s Pimlico, artist Marcus Coates is being variously nudged, bullied and shocked out of his sense of what is real.

Controlling the process is Lucy, a teenager in recovery from psychosis. Through Coates’s earpiece, she prompts Coates on how to behave: when to sit and when to stand, what to touch and what to avoid, what to look at, what to think about, what to feel. Sometimes Coates asks for guidance, but more often than not Lucy’s reply is drowned out by a second voice, chilling, over-loud, warning the artist not to ask so many questions.

A cardboard cut-out figure appears at the foot of Coates’s bed — a clown girl with bleeding feet. It’s a life-size blow-up of a sketch Coates himself was instructed to draw a moment before. Through his earpiece a balloon bursts, shockingly loud, nearly knocking him to the ground.

Commissioned and produced by the arts development company Artangel, The Directors is a series of five short films, each directed by someone in recovery from psychosis. In each film, the director guides Coates as he recreates, as best he can, specific aspects and recollections of their experience. These are not rehearsed performances; Coates receives instructions in real time through an earpiece. (That this evokes, with some precision, the auditory hallucinations of psychosis is a coincidence lost on no one.)

So: some questions. In the course of each tricky, disorientating and sometimes very frightening film, does Marcus Coates at any point experience psychosis? And does it matter?

Attempts to imagine our way into the experiences of other beings, human or non-human, have for a long while fallen under the shadow of an essay written in 1974 by American philosopher Thomas Nagel. “What Is It Like to Be a Bat?” wasn’t about bats so much as about the continuity of consciousness. I can imagine what it would be like for me to be a bat. But, says Nagel, that’s not the same as knowing what it’s like for a bat to be a bat.

Nagel’s lesson in gloomy solipsism is all very well in philosophy. Applied to natural history, though — where even a vague notion of what a bat feels like might help a naturalist towards a moment of insight — it merely sticks the perfect in the way of the good.

Coates’s work consistently champions the vexed, imperfect, utterly necessary business of imagining our way into other heads, human and non-human. 2013’s Dawn Chorus revealed common ground between human and bird vocalisation. He slowed recordings of bird song down twenty-fold, had people learn these slowed-down songs, filmed them in performance, then sped these films up twenty times. The result is a charming but very startling glimpse of what humans might look and sound like brought up to “bird speed”.

Three years earlier, in The Trip (2010), a collaboration with St. John’s Hospice in London, Coates enacted the unfulfilled dream of an anthropologist, Alex H. Journeying to the Amazon, he followed very precise instructions so that the dying man could conduct, by a sort of remote control, his unrealised last field trip.

The Directors is a work in that spirit. Inspired by a 2017 residency at the Maudsley psychiatric hospital in London, Coates’s effort to embody and express the breadth and complexity of psychotic experience is in part a learning experience. The project’s extensive advisory group includes Isabel Valli, a neuroscientist at King’s College London with a particular expertise in psychosis.

In the end, though, Coates is thrown back on his own resources, having to imagine his way into a condition which, in Lucy’s experience, robbed her of any certainty in the perceived world, leaving her emotions free to spiral into mistrust, fear and horror.

Lucy’s film is being screened in the tiny bedroom where it was shot. The other films are screened in different nearby locations, including one in the Churchill Gardens Estate’s thirty-seater cinema. This film, arguably the most claustrophobic and frightening of the lot, finds Coates drenched in ice-water and toasted by electric bar heaters in an attempt to simulate the overwhelming tactile hallucinations that psychosis can trigger.

Asked by the producers at Artangel whether he had found the exercise in any way exploitative, the director of this film, Marcus Gordon, replied: “Well, there’s no doubt I’ve exploited the artist.”

Dreams of a fresh crab supper

Reading David Peña-Guzmán’s When Animals Dream for New Scientist, 17 August 2022

Heidi the octopus is dreaming. As she sleeps, her skin changes from smooth and white to flashing yellow and orange, to deepest purple, to a series of light greys and yellows, criss-crossed by ridges and spiky horns. Heidi’s human carer David Scheel has seen this pattern before in waking octopuses: Heidi, he says, is dreaming of catching and eating a crab.

The story of Heidi’s dream, screened in 2019 in the documentary “Octopus: Making Contact”, provides the starting point for When Animals Dream, an exploration of non-human imaginations by David Peña-Guzmán, a philosopher at San Francisco State University.

The Roman philosopher-poet Lucretius thought animals dreamt. So did Charles Darwin. The idea only lost its respectability for about a century, roughly between 1880 and 1980, when the reflex was king and behaviourism ruled the psychology laboratory.

In the classical conditioning developed by Ivan Pavlov, it is possible to argue that your trained salivation to the sound of a bell is “just a reflex”. But later studies in this mould never really banished the interior, imaginative lives of animals. These later studies relied on a different kind of conditioning, called “operant conditioning”, in which you must behave in a certain way in order to receive a reward or avoid a punishment. The experimenter can claim all they want that the trained rat is “conditioned”; still, that rat running through its maze is acting for all the world as though it expects something.

In fact, there’s no “as though” about it. Peña-Guzmán, in a book rich in laboratory and experimental detail, describes how rats, during their exploration of a maze, will dream up imaginary mazes, and imaginary rewards — all as revealed by distinctive activity in their hippocampuses.

Clinical proofs that animals have imaginations are intriguing enough, but what really dragged the study of animal dreaming back into the light was our better understanding of how humans dream.

From the 1950s to the 1970s we were constantly being assured that our dreams were mere random activity in the pons (the part of the brainstem that connects the medulla to the midbrain). But we’ve since learned that dreaming involves many more brain areas, including the parietal lobes (involved in the representation of physical spaces) and frontal lobes (responsible among other things for emotional regulation).

At this point, the sight of a dog dreaming of chasing a ball became altogether too provocative to discount. The dog’s movements while dreaming mirror its waking behaviours too closely for us to say that they lack any significance.

Which animals dream? Peña-Guzmán’s list is too long to quote in its entirety. There are mice, dogs and platypuses, beluga whales and ostriches, penguins, chameleons and iguanas, cuttlefish and octopuses — “the jury is still out on crocodiles and turtles.”

The brain structures of these animals may be nothing like our own; nonetheless, studies of sleeping brains throw up startling commonalities, suggesting, perhaps, that dreaming is a talent on which many different branches of the evolutionary tree have converged.

Peña-Guzmán poses big questions. When did dreaming first emerge and why? By what paths did it find its way into so many branches of the evolutionary tree? And — surely the biggest question of all — what are we to do with this information?

Peña-Guzmán says dreams are morally significant “because they reveal animals to be both carriers and sources of moral value, which is to say, beings who matter and for whom things matter.”

In short, dreams imply the existence of a self. And whether or not that self can think rationally, act voluntarily, or produce linguistic reports, just like a human, is neither here nor there. The fact is, animals that dream “have a phenomenally charged experience of the world… they sense, feel and perceive.”

Starting from the unlikely-sounding assertion that Heidi the octopus dreams of fresh crab suppers, Peña-Guzmán assembles a short, powerful, closely argued and hugely well-evidenced case for animal personhood. This book will change minds.

Some rude remarks about Aberdeen

Reading Sarah Chaney’s Am I Normal? for New Scientist, 10 August 2022

In the collections of University College London there is a pair of gloves belonging to the nineteenth-century polymath Francis Galton. Galton’s motto was “Whenever you can, count”. The left glove has a pin in the thumb and a pad of felt across the fingers. Placing a strip of paper over the felt, Galton could then, by touching different fingers with the pin, keep track of what he saw without anyone noticing. A beautiful female, passing him by, was registered on one finger: her plain companion was registered on another. With these tallies, Galton thought he might in time be able to assemble a beauty map of Great Britain. The project foundered, though not before Galton had committed to paper some rude remarks about Aberdeen.

Galton’s beauty map is easy to throw rocks at. Had he completed it, it would have been not so much a map of British physiognomic variation, as a record of his own tastes, prejudices and shifting predilections during a long journey.

But as Sarah Chaney’s book makes clear, when it comes to the human body, the human mind, and human society, there can be no such thing as an altogether objective study. There is no moral or existential “outside” from which to begin such a study. The effort to gain such a perspective is worthwhile, but the best studies will always need reinterpreting for new audiences and next generations.

Am I Normal? gives often very uncomfortable social and political context to the historical effort to identify norms of human physiology, behaviour and social interaction. Study after study is shown to be hopelessly tied to its historical moment. (The less said about “drapetomania”, the putative mental illness attributed to runaway slaves, the better.)

And it would be the easiest job in the world, and the cheapest, to wield these horrors as blunt weapons to tear down both medicine and the social sciences. It is true that in some areas, measurement has elicited surprisingly little insight — witness the relative lack of progress made in the last century in the field of mental health. But conditions like schizophrenia are real, and ruinous: do we really want to give up on the effort to understand them?

It is certainly true that we have not paid nearly enough attention, at least until recently, to where our data was coming from. Research has to begin somewhere, of course, but should we really still be basing so much of our medicine, our social policy and even our design decisions on data drawn (and sometimes a very long time ago) from people in Western, educated, industrialised, rich and democratic (WEIRD) societies?

Chaney shows how studies that sought human norms can just as easily detect diversity. All it needs is a little humility, a little imagination, and an underlying awareness that in these fields, the truth does not stay still.

How to live an extra life

Reading Sidarta Ribeiro’s The Oracle of Night: The History and Science of Dreams for the Times, 2 January 2022

Early in January 1995 Sidarta Ribeiro, a Brazilian student of neuroscience, arrived in New York City to study for his doctorate at Rockefeller University. He rushed enthusiastically into his first meeting — only to discover he could not understand a word people were saying. He had, in that minute, completely forgotten the English language.

It did not return. He would turn up for work, struggle to make sense of what was going on, and wake up, hours later, on his supervisor’s couch. The colder and snowier the season became, the more impossible life got until, “when February came around, in the deep silence of the snow, I gave in completely and was swallowed up into the world of Morpheus.”

Ribeiro struggled into lectures so he didn’t get kicked out; otherwise he spent the entire winter in bed, sleeping; dozing; above all, dreaming.

April brought a sudden and extraordinary recovery. Ribeiro woke up understanding English again, and found he could speak it more fluently than ever before. He befriended colleagues easily, drove research, and, in time, announced the first molecular evidence for Freud’s “day residue” hypothesis, according to which dreams exist to process memories of the previous day.

Ribeiro’s rich dream life that winter convinced him that it was the dreams themselves — and not just the napping — that had wrought a cognitive transformation in him. Yet dreams, it turned out, had fallen almost entirely off the scientific radar.

The last dream researcher to enter public consciousness was probably Sigmund Freud. Freud at least seemed to draw coherent meaning from dreams — dreams that had been focused to a fine point by fin de siècle Vienna’s intense milieu of sexual repression.

But Freud’s “royal road to the unconscious” has been eroded since by a revolution in our style of living. Our great-grandparents could remember a world without artificial light. Now we play on our phones until bedtime, then get up early, already focused on a day that is, when push comes to shove, more or less identical to yesterday. We neither plan our days before we sleep, nor do we interrogate our dreams when we wake. Is it any wonder, then, that our dreams are no longer able to inspire us? When US philosopher Owen Flanagan says that “dreams are the spandrels of sleep”, he speaks for almost all of us.

Ribeiro’s distillation of his life’s work offers a fascinating corrective to this reductionist view. His experiments have made Freudian dream analysis and other elements of psychoanalytic theory definitively testable for the first time — and the results are astonishing. There is material evidence, now, for the connection Freud made between dreaming and desire: both involve the selective release of the brain chemical dopamine.

The middle chapters of The Oracle of Night focus on the neuroscience, capturing, with rare candour, all the frustrations, controversies, alliances, ambiguities and accidents that make up a working scientist’s life.

To study dreams, Ribeiro explains, is to study memories: how they are received in the hippocampus, then migrate out through surrounding cortical tissue, “burrowing further and further in as life goes on, ever more extensive and resistant to disturbances”. This is why some memories can survive, even for more than a hundred years, in a brain radically altered by age.

Ribeiro is an excellent communicator of detail, and this is important, given the size and significance of his claims. “At their best,” he writes, “dreams are the actual source of our future. The unconscious is the sum of all our memories and of all their possible combinations. It comprises, therefore, much more than what we have been — it comprises all that we can be.”

To make such a large statement stick, Ribeiro is going to need more than laboratory evidence, and so his scientific account is generously bookended with well-evidenced anthropological and archaeological speculation. Dinosaurs enjoyed REM sleep, apparently — a delightfully fiendish piece of deduction. And was the Bronze Age Collapse, around 1200 BC, triggered by a qualitative shift in how we interpreted dreams?

These are sizeable bread slices around an already generous Christmas-lunch sandwich. On page 114, when Ribeiro declares that “determining a point of departure for sleep requires that we go back 4.5 billion years and imagine the conditions in which the first self-replicating molecules appeared,” the poor reader’s heart may quail and their courage falter.

A more serious obstacle — and one quite out of Ribeiro’s control — is that friend (we all have one) who, feet up on the couch and both hands wrapped around the tea, baffs on about what their dreams are telling them. How do you talk about a phenomenon that’s become the preserve of people one would happily emigrate to avoid?

And yet, by taking dreams seriously, Ribeiro must also talk seriously about shamanism, oracles, prediction and mysticism. This is only reasonable, if you think about it: dreams were the source of shamanism (one of humanity’s first social specialisations), and shamanism in its turn gave us medicine, philosophy and religion.

When lives were socially simple and threats immediate, the relevance of dreams was not just apparent; it was impelling. Even a stopped watch is correct twice a day. With a limited palette of dream materials to draw from, was it really so surprising that Rome’s first emperor Augustus found his rise to power predicted by dreams — at least according to his biographer Suetonius? “By simulating objects of desire and aversion,” Ribeiro argues, “the dream occasionally came to represent what would in fact happen”.

Growing social complexity enriches dream life, but it also fragments it (which may explain all those complaints that the gods have fallen silent, which we find in texts dated between 1200 and 800 BC). The dreams typical of our time, says Ribeiro, are “a blend of meanings, a kaleidoscope of wants, fragmented by the multiplicity of desires of our age”.

The trouble with a book of this size and scale is that the reader, feeling somewhat punch-drunk, can’t help but wish that two or three better books had been spun from the same material. Why naps are good for us, why sleep improves our creativity, how we handle grief — these are instrumentalist concerns that might, under separate covers, have greatly entertained us. In the end, though, I reckon Ribeiro made the right choice. Such books give us narrow, discrete glimpses into the power of dreams, but leave us ignorant of their real nature. Ribeiro’s brick of a book shatters our complacency entirely, and for good.

Dreaming is a kind of thinking. Treating dreams as spandrels — as so much psychic “junk code” — is not only culturally illiterate; it runs against everything current science is telling us. You are a dreaming animal, says Ribeiro, for whom “dreams are like stars: they are always there, but we can only see them at night”.

Keep a dream diary, Ribeiro insists. So I did. And as I write this, a fortnight on, I am living an extra life.

“A perfect storm of cognitive degradation”

Reading Johann Hari’s Stolen Focus: Why you can’t pay attention for the Telegraph, 2 January 2022

Drop a frog into boiling water, and it will leap from the pot. Drop it into tepid water, brought slowly to the boil, and the frog will happily let itself be cooked to death.

Just because this story is nonsense doesn’t mean it’s not true — true of people, I mean, and their tendency to acquiesce to poorer conditions, just so long as these conditions are introduced slowly enough. (Remind yourself of this next time you check out your own groceries at the supermarket.)

Stolen Focus is about how our environment is set up to fracture our attention. It starts with our inability to set the notifications correctly on our mobile phones, and ends with climate change. Johann Hari thinks a huge number of pressing problems are fundamentally related, and that the human mind is on the receiving end of what amounts to a denial-of-service attack. One of Hari’s many interviewees is Earl Miller from MIT, who talks about “a perfect storm of cognitive degradation, as a result of distraction”; to which Hari adds the following, devastating gloss: “We are becoming less rational, less intelligent, less focused.”

To make such a large argument stick, though, Hari must ape the wicked problem he’s addressing: he must bring the reader to a slow boil.

Stolen Focus begins with an extended grumble about how we don’t read as many books as we used to, or buy as many newspapers, and how we are becoming increasingly enslaved to our digital devices. Why we should listen to Hari in particular, admittedly a latecomer to the “smartphones bad, books good” campaign, is not immediately apparent. His account of his own months-long digital detox — idly beachcombing the shores of Provincetown at the northern tip of Cape Cod, War and Peace tucked snugly into his satchel — is positively maddening.

What keeps the reader engaged are the hints (very well justified, it turns out) that Hari is deliberately winding us up.

He knows perfectly well that most of us have more or less lost the right to silence and privacy — that there will be no Cape Cod for you and me, in our financial precarity.

He also knows, from bitter experience, that digital detoxes don’t work. He presents himself as hardly less of a workaholic news-freak than he was before taking off to Massachusetts.

The first half of Stolen Focus got me to sort out my phone’s notification centre, and that’s not nothing; but it is, in the greater scheme of Hari’s project, hardly more than a parody of the by now very familiar “digital diet book” — the sort of book that, as Hari eventually points out, can no more address the problems filling this book than a diet book can address epidemic obesity.

Many of the things we need to do to recover our attention and focus “are so obvious they are banal,” Hari writes: “slow down, do one thing at a time, sleep more… Why can’t we do the obvious things that would improve our attention? What forces are stopping us?”

So, having had his fun with us, Hari begins to sketch in the high sides of the pot in which he finds us being coddled.

The whole of the digital economy is powered by breaks in our attention. The finest minds in the digital business are being paid to create ever more addictive experiences. According to former Google engineer Tristan Harris, “we shape more than eleven billion interruptions to people’s lives every day.” Aza Raskin, co-founder of the Center for Humane Technology, calls the big tech companies “the biggest perpetrators of non-mindfulness in the world.”

Social media is particularly insidious, promoting outrage among its users because outrage is wildly more addictive than real news. Social media also promotes loneliness. Why? Because lonely people will self-medicate with still more social media. (That’s why Facebook never tells you which of your friends are nearby and up for a coffee: Facebook can’t make money from that.)

We respond to the anger and fear a digital diet instils with hypervigilance, which wrecks our attention even further and damages our memory to boot. If we have children, we’ll keep them trapped at home “for their own safety”, though our outdoor spaces are safer than they have ever been. And when that carceral upbringing shatters our children’s attention (as it surely will), we stuff them with drugs, treating what is essentially an environmental problem. And on and on.

And on. The problem is not that Stolen Focus is unfocused, but that it is relentless: an unfeasibly well-supported undergraduate rant that swells — as the hands of the clock above the bar turn round and the beers slide down — to encompass virtually every ill on the planet, from rubbish parenting to climate change.

“If the ozone layer was threatened today,” writes Hari, “the scientists warning about it would find themselves being shouted down by bigoted viral stories claiming the threat was all invented by the billionaire George Soros, or that there’s no such thing as the ozone layer anyway, or that the holes were really being made by Jewish space lasers.”

The public campaign Hari wants Stolen Focus to kick-start (there’s an appendix; there’s a weblink; there’s a newsletter) involves, among other things, a citizen’s wage, outdoor play, limits on light pollution, public ownership of social media, changes in the food supply, and a four-day week. I find it hard to disagree with any of it, but at the same time I can’t rid myself of the image of how, spiritually refreshed by War and Peace, consumed in just a few sittings in a Provincetown coffee shop, Hari must (to quote Stephen Leacock) have “flung himself from the room, flung himself upon his horse and rode madly off in all directions”.

If you read just one book about how the modern world is driving us crazy, read this one. But why would you read just one?

Know when you’re being played

Calling Bullshit by Jevin D West and Carl T Bergstrom, and Science Fictions by Stuart Ritchie, reviewed for The Telegraph, 8 August 2020

Last week I received a press release headlined “1 in 4 Brits say ‘No’ to Covid vaccine”. This was such staggeringly bad news, I decided it couldn’t possibly be true. And sure enough, it wasn’t.

Armed with the techniques taught me by biologist Carl Bergstrom and data scientist Jevin West, I “called bullshit” on this unwelcome news, which after all bore all the hallmarks of clickbait.

For a start, the question on which the poll was based was badly phrased. On closer reading it turns out that 25 per cent would decline if the government “made a Covid-19 vaccine available tomorrow”. Frankly, if it was offered *tomorrow* I’d be a refusenik myself. All things being equal, I prefer my medicines tested first.

But what of the real meat of the claim — that daunting figure of “25 per cent”? It turns out that a sample of 2,000 was selected from a sample of 17,000 drawn from the self-selecting community of subscribers to a lottery website. But hush my cynicism: I am assured that the sample of 2,000 was “within +/-2% of ONS quotas for Age, Gender, Region, SEG, and 2019 vote, using machine learning”. In other words, some effort has been made to make the sample of 2,000 representative of the UK population (but only on five criteria, which is not very impressive. And that whole “+/-2%” business means that up to 40 of the sample weren’t representative of anything).

For this, “machine learning” had to be employed (and, later, “a proprietary machine learning system”)? Well, of course not.  Mention of the miracle that is artificial intelligence is almost always a bit of prestidigitation to veil the poor quality of the original data. And anyway, no amount of “machine learning” can massage away the fact that the sample was too thin to serve the sweeping conclusions drawn from it (“Only 1 in 5 Conservative voters (19.77%) would say No” — it says, to two decimal places, yet!) and is anyway drawn from a non-random population.

Exhausted yet? Then you may well find Calling Bullshit essential reading. Even if you feel you can trudge through verbal bullshit easily enough, this book will give you the tools to swim through numerical snake-oil. And this is important, because numbers easily slip  past the defences we put up against mere words. Bergstrom and West teach a course at the University of Washington from which this book is largely drawn, and hammer this point home in their first lecture: “Words are human constructs,” they say; “Numbers seem to come directly from nature.”

Shake off your naive belief in the truth or naturalness of the numbers quoted in news stories and advertisements, and you’re halfway towards knowing when you’re being played.

Say you diligently applied the lessons in Calling Bullshit, and really came to grips with percentages, causality, selection bias and all the rest. You may well discover that you’re now ignoring everything — every bit of health advice, every over-wrought NASA announcement about life on Mars, every economic forecast, every exit poll. Internet pioneer Jaron Lanier reached this point last year when he came up with Ten Arguments for Deleting Your Social Media Accounts Right Now. More recently the best-selling Swiss pundit Rolf Dobelli has ordered us to Stop Reading the News. Both deplore our current economy of attention, which values online engagement over the provision of actual information (as when, for instance, a  review like this one gets headlined “These Two Books About Bad Data Will Break Your Heart”; instead of being told what the piece is about, you’re being sold on the promise of an emotional experience).

Bergstrom and West believe that public education can save us from this torrent of micro-manipulative blither. Their book is a handsome contribution to that effort. We’ve lost Lanier and Dobelli, but maybe the leak can be stopped up. This, essentially, is what the authors are about; they’re shoring up the Enlightenment ideal of a civic society governed by reason.

Underpinning this ideal is science, and the conviction that the world is assembled on a bedrock of fundamentally unassailable truths.

Philosophical nit-picking apart, science undeniably works. But in Science Fictions Stuart Ritchie, a psychologist based at King’s College, shows just how contingent and gimcrack and even shoddy the whole business can get. He has come to praise science, not to bury it; nevertheless, his analyses of science’s current ethical ills — fraud, hype, negligence and so on — are devastating.

The sheer number of problems besetting the scientific endeavour becomes somewhat more manageable once we work out which ills are institutional, which have to do with how scientists communicate, and which are existential problems that are never going away whatever we do.

Our evolved need to express meaning through stories is an existential problem. Without stories, we can do no thinking worth the name, and this means that we are always going to prioritise positive findings over negative ones, and find novelties more charming than rehearsed truths.

Such quirks of the human intellect can be and have been corrected by healthy institutions at least some of the time over the last 400-odd years. But our unruly mental habits run wildly out of control once they are harnessed to a media machine driven by attention.  And the blame for this is not always easily apportioned: “The scenario where an innocent researcher is minding their own business when the media suddenly seizes on one of their findings and blows it out of proportion is not at all the norm,” writes Ritchie.

It’s easy enough to mount a defence of science against the tin-foil-hat brigade, but Ritchie is attempting something much more discomforting: he’s defending science against scientists. Fraudulent and negligent individuals fall under the spotlight occasionally, but institutional flaws are Ritchie’s chief target.

Reading Science Fictions, we see field after field fail to replicate results, correct mistakes, identify the best lines of research, or even begin to recognise talent. In Ritchie’s proffered bag of solutions are desperately needed reforms to the way scientific work is published and cited, and some more controversial ideas about how international mega-collaborations may enable science to catch up on itself and check its own findings effectively (or indeed at all, in the dismal case of economic science).

At best, these books together offer a path back to a civic life based on truth and reason. At worst, they point towards one that’s at least a little bit defended against its own bullshit. Time will tell whether such efforts can genuinely turn the ship around, or are simply here to entertain us with a spot of deckchair juggling. But there’s honest toil here, and a lot of smart thinking with it. Reading both, I was given a fleeting, dizzying reminder of what it once felt like to be a free agent in a factual world.

Pig-philosophy

Reading Science and the Good: The Tragic Quest for the Foundations of Morality by James Davison Hunter and Paul Nedelisky (Yale University Press) for the Telegraph, 28 October 2019

Objective truth is elusive and often surprisingly useless. For ages, civilisation managed well without it. Then came the sixteenth and seventeenth centuries, the Wars of Religion and the Thirty Years’ War: atrocious conflicts that robbed Europe of up to a third of its population.

Something had to change. So began a half-a-millennium-long search for a common moral compass: something to keep us from wringing each other’s necks. The 18th-century French philosopher Condorcet, writing in 1794, expressed the evergreen hope that empiricists, applying themselves to the study of morality, would be able “to make almost as sure progress in these sciences as they had in the natural sciences.”

Today, are we any nearer to understanding objectively how to tell right from wrong?

No. So say James Davison Hunter, a sociologist who in 1991 slipped the term “culture wars” into American political debate, and Paul Nedelisky, a recent philosophy PhD, both from the University of Virginia. For sure, “a modest descriptive science” has grown up to explore our foibles, strengths and flaws, as individuals and in groups. There is, however, no way science can tell us what ought to be done.

Science and the Good is a closely argued, always accessible riposte to those who think scientific study can explain, improve, or even supersede morality. It tells a rollicking good story, too, as it explains what led us to our current state of embarrassed moral nihilism.

“What,” the essayist Michel de Montaigne asked, writing in the late 16th century, “am I to make of a virtue that I saw in credit yesterday, that will be discredited tomorrow, and becomes a crime on the other side of the river?”

Montaigne’s times desperately needed a moral framework that could withstand the almost daily schisms and revisions of European religious life following the Protestant Reformation. Nor was Europe any longer a land to itself. Trade with other continents was bringing Europeans into contact with people who, while eminently businesslike, held to quite unfamiliar beliefs. The question was (and is), how do we live together at peace with our deepest moral differences?

The authors have no simple answer. The reason scientists keep trying to formulate one is the same reason the farmer tried teaching his sheep to fly in the Monty Python sketch: “Because of the enormous commercial possibilities should he succeed.” Imagine conjuring up a moral system that was common, singular and testable: world peace would follow in an instant!

But for every Jeremy Bentham, measuring moral utility against an index of human happiness to inform a “felicific calculus”, there’s a Thomas Carlyle, pointing out the crashing stupidity of the enterprise. (Carlyle called Bentham’s 18th-century utilitarianism “pig-philosophy”, since happiness is the sort of vague, unspecific measure you could just as well apply to animals as to people.)

Hunter and Nedelisky play Carlyle to the current generation of scientific moralists. They range widely in their criticism, and are sympathetic to a fault, but to show what they’re up to, let’s have some fun and pick a scapegoat.

In Moral Tribes (2014), Harvard psychologist Joshua Greene sings Bentham’s praises: “utilitarianism becomes uniquely attractive,” he asserts, “once our moral thinking has been objectively improved by a scientific understanding of morality…”

At worst, this is a statement that eats its own tail. At best, it’s Greene reducing the definition of morality to fit his own specialism, replacing moral goodness with the merely useful. This isn’t nothing, and is at least something which science can discover. But it is not moral.

And if Greene decided tomorrow that we’d all be better off without, say, legs, practical reason, far from faulting him, could only show us how to achieve his goal in the most efficient manner possible. The entire history of the 20th century should serve as a reminder that this kind of thinking — applying rational machinery to a predetermined good — is a joke that palls extremely quickly. Nor are vague liberal gestures towards “social consensus” comforting, or even welcome. As the authors point out, “social consensus gave us apartheid in South Africa, ethnic cleansing in the Balkans, and genocide in Armenia, Darfur, Burma, Rwanda, Cambodia, Somalia, and the Congo.”

Scientists are on safer ground when they attempt to explain how our moral sense may have evolved, arguing that morals aren’t imposed from above or derived from well-reasoned principles, but are values derived from reactions and judgements that improve the odds of group survival. There’s evidence to back this up and much of it is charming. Rats play together endlessly; if the bigger rat wrestles the smaller rat into submission more than three times out of five, the smaller rat trots off in a huff. Hunter and Nedelisky remind us that capuchin monkeys will “down tools” if experimenters offer them a reward smaller than the one they’ve already offered to other capuchin monkeys.

What does this really tell us, though, beyond the fact that somewhere, out there, is a lawful corner of necessary reality which we may as well call universal justice, and which complex creatures evolve to navigate?

Perhaps the best scientific contribution to moral understanding comes from studies of the brain itself. Mapping the mechanisms by which we reach moral conclusions is useful for clinicians. But it doesn’t bring us any closer to learning what it is we ought to do.

Sociologists since Edward Westermarck in 1906 have shown how a common (evolved?) human morality might be expressed in diverse practices. But over this is the shadow cast by moral skepticism: the uneasy suspicion that morality may be no more than an emotive vocabulary without content, a series of justificatory fabrications. “Four legs good,” as Snowball had it, “two legs bad.”

But even if it were shown that no-one in the history of the world ever committed a truly selfless act, the fact remains that our mythic life is built, again and again, precisely around an act of self-sacrifice. Pharaonic Egypt had Osiris. Europe and its holdings, Christ. Even Hollywood has Harry Potter. Moral goodness is something we recognise in stories, and something we strive for in life (and if we don’t, we feel bad about ourselves). Philosophers and anthropologists and social scientists have lots of interesting things to say about why this should be so. The life sciences crew would like to say something, also.

But as this generous and thoughtful critique demonstrates, and to quite devastating effect, they just don’t have the words.