On not being a horrible person

Reading The Human Mind by Paul Bloom for New Scientist, 11 May 2023

Inspired, he tells us, by The Origin of the Universe, John Barrow’s 1994 survey of what was then known about cosmology, the Canadian American psychologist Paul Bloom set about writing an introductory tome of his own: a brief yet comprehensive guide to the human mind.

Emulating Barrow’s superb survey has been hard because, as Bloom cheekily points out, “the mysteries of space and time turn out to be easier for our minds to grasp than those of consciousness and choice.”

The first thing to say — though hardly the most interesting — is that Bloom nevertheless succeeds, covering everything from perception and behaviour to language and development; there’s even a small but very worthwhile foray into abnormal psychology. It’s an account that is positive, but never self-serving. Failures to reproduce some key studies, the field’s sometimes scandalous manipulation of statistics, and the once-prevailing assumption that undergraduate volunteers could represent the diversity of the entire human species are serious problems, and they are dealt with seriously.

Of course Bloom does more than simply set out the contents of the stall (with the odd rotten apple here and there); he also explores psychology’s evolving values. He recalls his early behaviourist training, in a climate hostile to (then rather woolly) questions about consciousness. “If we were asked to defend our dismissal of consciousness,” he recalls, “we would point out that intelligence does not require sentience.”

Intelligence is no longer the field’s only grail, and consciousness is now front and centre in the science of the mind. This is not only a technical advance; it’s an ethical one. In 1789 Jeremy Bentham asked whether the law could ever refuse its protection to “any sensitive being”, and pointed out that “The question is not, Can [certain beings] reason?, nor Can they talk? but, Can they suffer?”

Suffering requires consciousness, says Bloom; understanding one enables us to tackle the other; so the shift in interest to consciousness itself is a welcome and humanising move.

This strong belief in the humanitarian potential of psychology allows Bloom to defend aspects of his discipline that often discomfort outside observers. He handles issues of environmental and genetic influences on the mind very well, and offers a welcome and robust defence of the intelligence test Alfred Binet devised in 1905, forerunner of today’s measure of general intelligence, the “intelligence quotient”. Bloom shows that the IQ test is as robust a metric as anything in social science. We know that a full half of us score less than 100 on that test; should this knowledge not fill us with humility and compassion? (Actually our responses tend to be more ambiguous. Bloom points out that Nazi commentators hated the idea of IQ because they thought Jews would score better than they themselves would.)

Bloom is concerned to demonstrate that minds do more than think. The privileging of thinking over feeling and intuiting and suffering is a mistake. “A lot depends on what is meant by ‘rational’,” Bloom writes. If you’re stepping outside and it’s raining and you don’t want to get wet, it’s rational to bring an umbrella. But rationality defined in this manner is separate from goodness. “Kidnapping a rich person’s child might be a rational way to achieve the goal of getting a lot of money quickly,” Bloom observes, “so long as you don’t have other goals, such as obeying the law and not being a horrible person.”

Bloom’s ultimate purpose is to explain how a robustly materialistic view of the mind is fully compatible with the existence of choice and morality and responsibility. This middle-of-the-road approach may disappoint intellectual storm-chasers, but the rest of us can be assured of an up-to-the-minute snapshot of the field, full of unknowns and uncertainties, yes, and speculations, and controversies — but guided by an ever-more rounded idea of what it is to be human.

A balloon bursts

Watching The Directors: five short films by Marcus Coates, for New Scientist, 31 August 2022

In a flat on the fifth floor of Chaucer House, a post-war social housing block in London’s Pimlico, artist Marcus Coates is being variously nudged, bullied and shocked out of his sense of what is real.

Controlling the process is Lucy, a teenager in recovery from psychosis. Through Coates’s earpiece, she prompts him in how to behave: when to sit and when to stand, what to touch and what to avoid, what to look at, what to think about, what to feel. Sometimes Coates asks for guidance, but more often than not Lucy’s reply is drowned out by a second voice, chilling and over-loud, warning the artist not to ask so many questions.

A cardboard cut-out figure appears at the foot of Coates’s bed — a clown girl with bleeding feet. It’s a life-size blow-up of a sketch Coates himself was instructed to draw a moment before. Through his earpiece a balloon bursts, shockingly loud, nearly knocking him to the ground.

Commissioned and produced by the arts development company Artangel, The Directors is a series of five short films, each directed by someone in recovery from psychosis. In each film, the director guides Coates as he recreates, as best he can, specific aspects and recollections of their experience. These are not rehearsed performances; Coates receives his instructions in real time through an earpiece. (That this evokes, with some precision, the auditory hallucinations of psychosis is a coincidence lost on no one.)

So: some questions. In the course of each tricky, disorientating and sometimes very frightening film, does Marcus Coates at any point experience psychosis? And does it matter?

Attempts to imagine our way into the experiences of other beings, human or non-human, have for a long while fallen under the shadow of an essay written in 1974 by American philosopher Thomas Nagel. “What Is It Like to Be a Bat?” wasn’t about bats so much as about the continuity of consciousness. I can imagine what it would be like for me to be a bat. But, says Nagel, that’s not the same as knowing what it’s like for a bat to be a bat.

Nagel’s lesson in gloomy solipsism is all very well in philosophy. Applied to natural history, though — where even a vague notion of what a bat feels like might help a naturalist towards a moment of insight — it merely sticks the perfect in the way of the good.

Coates’s work consistently champions the vexed, imperfect, utterly necessary business of imagining our way into other heads, human and non-human. 2013’s Dawn Chorus revealed common ground between human and bird vocalisation. He slowed recordings of bird song down twenty-fold, had people learn these slowed-down songs, filmed them in performance, then sped these films up twenty times. The result is a charming but very startling glimpse of what humans might look and sound like brought up to “bird speed”.

Three years earlier, in 2010’s The Trip, a collaboration with St. John’s Hospice in London, Coates enacted the unfulfilled dream of an anthropologist, Alex H. Journeying to the Amazon, he followed very precise instructions so that the dying man could conduct, by a sort of remote control, his unrealised last field trip.

The Directors is a work in that spirit. Inspired by a 2017 residency at the Maudsley psychiatric hospital in London, Coates’s effort to embody and express the breadth and complexity of psychotic experience is in part a learning experience. The project’s extensive advisory group includes Isabel Valli, a neuroscientist at King’s College London with a particular expertise in psychosis.

In the end, though, Coates is thrown back on his own resources, having to imagine his way into a condition which, in Lucy’s experience, robbed her of any certainty in the perceived world, leaving her emotions free to spiral into mistrust, fear and horror.

Lucy’s film is being screened in the tiny bedroom where it was shot. The other films are screened in different locations nearby, including one in the Churchill Gardens Estate’s thirty-seater cinema. This film, arguably the most claustrophobic and frightening of the lot, finds Coates drenched in ice-water and toasted by electric bar heaters in an attempt to simulate the overwhelming tactile hallucinations that psychosis can trigger.

Asked by the producers at Artangel whether he had found the exercise in any way exploitative, the director of this film, Marcus Gordon, replied: “Well, there’s no doubt I’ve exploited the artist.”

Dreams of a fresh crab supper

Reading David Peña-Guzmán’s When Animals Dream for New Scientist, 17 August 2022

Heidi the octopus is dreaming. As she sleeps, her skin changes from smooth and white to flashing yellow and orange, to deepest purple, to a series of light greys and yellows, criss-crossed by ridges and spiky horns. Heidi’s human carer David Scheel has seen this pattern before in waking octopuses: Heidi, he says, is dreaming of catching and eating a crab.

The story of Heidi’s dream, screened in 2019 in the documentary “Octopus: Making Contact”, provides the starting point for When Animals Dream, an exploration of non-human imaginations by David Peña-Guzmán, a philosopher at San Francisco State University.

The Roman philosopher-poet Lucretius thought animals dreamt. So did Charles Darwin. The idea only lost its respectability for about a century, roughly from 1880 to 1980, when the reflex was king and behaviourism ruled the psychology laboratory.

In the classical conditioning developed by Ivan Pavlov, it is possible to argue that your trained salivation at the sound of a bell is “just a reflex”. But later studies in this mould, which relied on a different kind of conditioning called “operant conditioning” (you behave in a certain way in order to earn a reward or avoid a punishment), never really banished the interior, imaginative lives of animals. The experimenter can claim all they want that the trained rat is “conditioned”; still, that rat running through its maze is acting for all the world as though it expects something.

In fact, there’s no “as though” about it. Peña-Guzmán, in a book rich in laboratory and experimental detail, describes how rats, during their exploration of a maze, will dream up imaginary mazes and imaginary rewards — all as revealed by distinctive activity in their hippocampuses.

Clinical proofs that animals have imaginations are intriguing enough, but what really dragged the study of animal dreaming back into the light was our better understanding of how humans dream.

From the 1950s to the 1970s we were constantly being assured that our dreams were mere random activity in the pons (the part of the brainstem that connects the medulla to the midbrain). But we’ve since learned that dreaming involves many more brain areas, including the parietal lobes (involved in the representation of physical spaces) and frontal lobes (responsible among other things for emotional regulation).

At this point, the sight of a dog dreaming of chasing a ball became altogether too provocative to discount. The dog’s movements while dreaming mirror its waking behaviours too closely for us to say that they lack any significance.

Which animals dream? Peña-Guzmán’s list is too long to quote in its entirety. There are mice, dogs and platypuses, beluga whales and ostriches, penguins, chameleons and iguanas, cuttlefish and octopuses — “the jury is still out on crocodiles and turtles.”

The brain structures of these animals may be nothing like our own; nonetheless, studies of sleeping brains throw up startling commonalities, suggesting, perhaps, that dreaming is a talent on which many different branches of the evolutionary tree have converged.

Peña-Guzmán poses big questions. When did dreaming first emerge, and why? By what paths did it find its way into so many branches of the evolutionary tree? And — surely the biggest question of all — what are we to do with this information?

Peña-Guzmán says dreams are morally significant “because they reveal animals to be both carriers and sources of moral value, which is to say, beings who matter and for whom things matter.”

In short, dreams imply the existence of a self. And whether or not that self can think rationally, act voluntarily, or produce linguistic reports, just like a human, is neither here nor there. The fact is, animals that dream “have a phenomenally charged experience of the world… they sense, feel and perceive.”

Starting from the unlikely-sounding assertion that Heidi the octopus dreams of fresh crab suppers, Peña-Guzmán assembles a short, powerful, closely argued and richly evidenced case for animal personhood. This book will change minds.

Some rude remarks about Aberdeen

Reading Sarah Chaney’s Am I Normal? for New Scientist, 10 August 2022

In the collections of University College London there is a pair of gloves belonging to the nineteenth-century polymath Francis Galton. Galton’s motto was “Whenever you can, count”. The left glove has a pin in the thumb and a pad of felt across the fingers. Placing a strip of paper over the felt, Galton could then, by touching different fingers with the pin, keep track of what he saw without anyone noticing. A beautiful female, passing him by, was registered on one finger: her plain companion was registered on another. With these tallies, Galton thought he might in time be able to assemble a beauty map of Great Britain. The project foundered, though not before Galton had committed to paper some rude remarks about Aberdeen.

Galton’s beauty map is easy to throw rocks at. Had he completed it, it would have been not so much a map of British physiognomic variation, as a record of his own tastes, prejudices and shifting predilections during a long journey.

But as Sarah Chaney’s book makes clear, when it comes to the human body, the human mind, and human society, there can be no such thing as an altogether objective study. There is no moral or existential “outside” from which to begin such a study. The effort to gain such a perspective is worthwhile, but the best studies will always need reinterpreting for new audiences and next generations.

Am I Normal? gives often very uncomfortable social and political context to the historical effort to identify norms of human physiology, behaviour and social interaction. Study after study is shown to be hopelessly tied to its historical moment. (The less said about “drapetomania”, the putative mental illness ascribed to runaway slaves, the better.)

And it would be the easiest job in the world, and the cheapest, to wield these horrors as blunt weapons to tear down both medicine and the social sciences. It is true that in some areas measurement has yielded surprisingly little insight — witness the relative lack of progress made in the last century in the field of mental health. But conditions like schizophrenia are real, and ruinous: do we really want to give up our effort at understanding them?

It is certainly true that, at least until recently, we have paid nowhere near enough attention to where our data was coming from. Research has to begin somewhere, of course, but should we really still be basing so much of our medicine, our social policy and even our design decisions on data drawn (sometimes a very long time ago) from people in Western, educated, industrialised, rich and democratic (WEIRD) societies?

Chaney shows how studies that sought human norms can just as easily detect diversity. All it needs is a little humility, a little imagination, and an underlying awareness that in these fields, the truth does not stay still.

How to live an extra life

Reading Sidarta Ribeiro’s The Oracle of Night: The History and Science of Dreams for the Times, 2 January 2022

Early in January 1995 Sidarta Ribeiro, a Brazilian student of neuroscience, arrived in New York City to study for his doctorate at Rockefeller University. He rushed enthusiastically into his first meeting — only to discover he could not understand a word people were saying. He had, in that minute, completely forgotten the English language.

It did not return. He would turn up for work, struggle to make sense of what was going on, and wake up, hours later, on his supervisor’s couch. The colder and snowier the season became, the more impossible life got until, “when February came around, in the deep silence of the snow, I gave in completely and was swallowed up into the world of Morpheus.”

Ribeiro struggled into lectures so he didn’t get kicked out; otherwise he spent the entire winter in bed, sleeping; dozing; above all, dreaming.

April brought a sudden and extraordinary recovery. Ribeiro woke up understanding English again, and found he could speak it more fluently than ever before. He befriended colleagues easily, drove research forward, and, in time, announced the first molecular evidence for Freud’s “day residue” hypothesis, according to which dreams process memories of the previous day.

Ribeiro’s rich dream life that winter convinced him that it was the dreams themselves — and not just the napping — that had wrought a cognitive transformation in him. Yet dreams, it turned out, had fallen almost entirely off the scientific radar.

The last dream researcher to enter public consciousness was probably Sigmund Freud. Freud at least seemed to draw coherent meaning from dreams — dreams that had been focused to a fine point by fin-de-siècle Vienna’s intense milieu of sexual repression.

But Freud’s “royal road to the unconscious” has since been eroded by a revolution in our style of living. Our great-grandparents could remember a world without artificial light. Now we play on our phones until bedtime, then get up early, already focused on a day that is, when push comes to shove, more or less identical to yesterday. We neither plan our days before we sleep, nor interrogate our dreams when we wake. Is it any wonder, then, that our dreams are no longer able to inspire us? When the US philosopher Owen Flanagan says that “dreams are the spandrels of sleep”, he speaks for almost all of us.

Ribeiro’s distillation of his life’s work offers a fascinating corrective to this reductionist view. His experiments have made Freudian dream analysis and other elements of psychoanalytic theory definitively testable for the first time — and the results are astonishing. There is material evidence, now, for the connection Freud made between dreaming and desire: both involve the selective release of the brain chemical dopamine.

The middle chapters of The Oracle of Night focus on the neuroscience, capturing, with rare candour, all the frustrations, controversies, alliances, ambiguities and accidents that make up a working scientist’s life.

To study dreams, Ribeiro explains, is to study memories: how they are received in the hippocampus, then migrate out through surrounding cortical tissue, “burrowing further and further in as life goes on, ever more extensive and resistant to disturbances”. This is why some memories can survive, even for more than a hundred years, in a brain radically altered by the years.

Ribeiro is an excellent communicator of detail, and this is important, given the size and significance of his claims. “At their best,” he writes, “dreams are the actual source of our future. The unconscious is the sum of all our memories and of all their possible combinations. It comprises, therefore, much more than what we have been — it comprises all that we can be.”

To make such a large statement stick, Ribeiro is going to need more than laboratory evidence, and so his scientific account is generously bookended with well-evidenced anthropological and archaeological speculation. Dinosaurs enjoyed REM sleep, apparently — a delightfully fiendish piece of deduction. And was the Bronze Age Collapse, around 1200 BC, triggered by a qualitative shift in how we interpreted dreams?

These are sizeable bread slices around an already generous Christmas-lunch sandwich. On page 114, when Ribeiro declares that “determining a point of departure for sleep requires that we go back 4.5 billion years and imagine the conditions in which the first self-replicating molecules appeared,” the poor reader’s heart may quail and their courage falter.

A more serious obstacle — and one quite out of Ribeiro’s control — is that friend (we all have one) who, feet up on the couch and both hands wrapped around the tea, bangs on about what their dreams are telling them. How do you talk about a phenomenon that has become the preserve of people one would happily emigrate to avoid?

And yet, by taking dreams seriously, Ribeiro must also talk seriously about shamanism, oracles, prediction and mysticism. This is only reasonable, if you think about it: dreams were the source of shamanism (one of humanity’s first social specialisations), and shamanism in its turn gave us medicine, philosophy and religion.

When lives were socially simple and threats immediate, the relevance of dreams was not just apparent; it was impelling. Even a stopped watch is correct twice a day. With a limited palette of dream materials to draw from, was it really so surprising that Rome’s first emperor Augustus found his rise to power predicted by dreams — at least according to his biographer Suetonius? “By simulating objects of desire and aversion,” Ribeiro argues, “the dream occasionally came to represent what would in fact happen”.

Growing social complexity enriches dream life, but it also fragments it (which may explain the complaints, found in texts dated between 1200 and 800 BC, that the gods have fallen silent). The dreams typical of our time, says Ribeiro, are “a blend of meanings, a kaleidoscope of wants, fragmented by the multiplicity of desires of our age”.

The trouble with a book of this size and scale is that the reader, feeling somewhat punch-drunk, can’t help but wish that two or three better books had been spun from the same material. Why naps are good for us, why sleep improves our creativity, how we handle grief — these are instrumentalist concerns that might, under separate covers, have greatly entertained us. In the end, though, I reckon Ribeiro made the right choice. Such books give us narrow, discrete glimpses into the power of dreams, but leave us ignorant of their real nature. Ribeiro’s brick of a book shatters our complacency entirely, and for good.

Dreaming is a kind of thinking. Treating dreams as spandrels — as so much psychic “junk code” — is not only culturally illiterate; it runs against everything current science is telling us. You are a dreaming animal, says Ribeiro, for whom “dreams are like stars: they are always there, but we can only see them at night”.

Keep a dream diary, Ribeiro insists. So I did. And as I write this, a fortnight on, I am living an extra life.

“A perfect storm of cognitive degradation”

Reading Johann Hari’s Stolen Focus: Why you can’t pay attention for the Telegraph, 2 January 2022

Drop a frog into boiling water, and it will leap from the pot. Drop it into tepid water, brought slowly to the boil, and the frog will happily let itself be cooked to death.

Just because this story is nonsense, doesn’t mean it’s not true — true of people, I mean, and their tendency to acquiesce to poorer conditions, just so long as these conditions are introduced slowly enough. (Remind yourself of this next time you check out your own groceries at the supermarket.)

Stolen Focus is about how our environment is set up to fracture our attention. It starts with our inability to set the notifications correctly on our mobile phones, and ends with climate change. Johann Hari thinks a huge number of pressing problems are fundamentally related, and that the human mind is on the receiving end of what amounts to a denial-of-service attack. One of Hari’s many interviewees is Earl Miller from MIT, who talks about “a perfect storm of cognitive degradation, as a result of distraction”; to which Hari adds the following, devastating gloss: “We are becoming less rational, less intelligent, less focused.”

To make such a large argument stick, though, Hari must ape the wicked problem he’s addressing: he must bring the reader to a slow boil.

Stolen Focus begins with an extended grumble about how we don’t read as many books as we used to, or buy as many newspapers, and how we are becoming increasingly enslaved to our digital devices. Why we should listen to Hari in particular, admittedly a latecomer to the “smartphones bad, books good” campaign, is not immediately apparent. His account of his own months-long digital detox — idly beachcombing the shores of Provincetown at the northern tip of Cape Cod, War and Peace tucked snugly into his satchel — is positively maddening.

What keeps the reader engaged are the hints (very well justified, it turns out) that Hari is deliberately winding us up.

He knows perfectly well that most of us have more or less lost the right to silence and privacy — that there will be no Cape Cod for you and me, in our financial precarity.

He also knows, from bitter experience, that digital detoxes don’t work. He presents himself as hardly less of a workaholic news-freak than he was before taking off to Massachusetts.

The first half of Stolen Focus got me to sort out my phone’s notification centre, and that’s not nothing; but it is, in the greater scheme of Hari’s project, hardly more than a parody of the by now very familiar “digital diet book” — the sort of book that, as Hari eventually points out, can no more address the problems filling this book than a diet book can address epidemic obesity.

Many of the things we need to do to recover our attention and focus “are so obvious they are banal,” Hari writes: “slow down, do one thing at a time, sleep more… Why can’t we do the obvious things that would improve our attention? What forces are stopping us?”

So, having had his fun with us, Hari begins to sketch in the high sides of the pot in which he finds us being coddled.

The whole of the digital economy is powered by breaks in our attention. The finest minds in the digital business are being paid to create ever more addictive experiences. According to former Google engineer Tristan Harris, “we shape more than eleven billion interruptions to people’s lives every day.” Aza Raskin, co-founder of the Center for Humane Technology, calls the big tech companies “the biggest perpetrators of non-mindfulness in the world.”

Social media is particularly insidious, promoting outrage among its users because outrage is wildly more addictive than real news. Social media also promotes loneliness. Why? Because lonely people will self-medicate with still more social media. (That’s why Facebook never tells you which of your friends are nearby and up for a coffee: Facebook can’t make money from that.)

We respond to the anger and fear a digital diet instils with hypervigilance, which wrecks our attention even further and damages our memory to boot. If we have children, we’ll keep them trapped at home “for their own safety”, though our outdoor spaces are safer than they have ever been. And when that carceral upbringing shatters our children’s attention (as it surely will), we stuff them with drugs, treating what is essentially an environmental problem. And on and on.

And on. The problem is not that Stolen Focus is unfocused, but that it is relentless: an unfeasibly well-supported undergraduate rant that swells — as the hands of the clock above the bar turn round and the beers slide down — to encompass virtually every ill on the planet, from rubbish parenting to climate change.

“If the ozone layer was threatened today,” writes Hari, “the scientists warning about it would find themselves being shouted down by bigoted viral stories claiming the threat was all invented by the billionaire George Soros, or that there’s no such thing as the ozone layer anyway, or that the holes were really being made by Jewish space lasers.”

The public campaign Hari wants Stolen Focus to kick-start (there’s an appendix; there’s a weblink; there’s a newsletter) involves, among other things, a citizen’s wage, outdoor play, limits on light pollution, public ownership of social media, changes in the food supply, and a four-day week. I find it hard to disagree with any of it, but at the same time I can’t rid myself of the image of how, spiritually refreshed by War and Peace, consumed in just a few sittings in a Provincetown coffee shop, Hari must (to quote Stephen Leacock) have “flung himself from the room, flung himself upon his horse and rode madly off in all directions”.

If you read just one book about how the modern world is driving us crazy, read this one. But why would you read just one?

Know when you’re being played

Calling Bullshit by Jevin D West and Carl T Bergstrom, and Science Fictions by Stuart Ritchie, reviewed for The Telegraph, 8 August 2020

Last week I received a press release headlined “1 in 4 Brits say ‘No’ to Covid vaccine”. This was such staggeringly bad news, I decided it couldn’t possibly be true. And sure enough, it wasn’t.

Armed with the techniques taught me by biologist Carl Bergstrom and data scientist Jevin West, I “called bullshit” on this unwelcome news, which after all bore all the hallmarks of clickbait.

For a start, the question on which the poll was based was badly phrased. On closer reading it turns out that 25 per cent would decline if the government “made a Covid-19 vaccine available tomorrow”. Frankly, if it was offered *tomorrow* I’d be a refusenik myself. All things being equal, I prefer my medicines tested first.

But what of the real meat of the claim — that daunting figure of “25 per cent”? It turns out that a sample of 2,000 was selected from a larger sample of 17,000, drawn from the self-selecting community of subscribers to a lottery website. But hush my cynicism: I am assured that the sample of 2,000 was “within +/-2% of ONS quotas for Age, Gender, Region, SEG, and 2019 vote, using machine learning”. In other words, some effort has been made to make the sample of 2,000 representative of the UK population (but only on five criteria, which is not very impressive; and that whole “+/-2%” business means that up to 40 members of the sample weren’t representative of anything).

And did this really require “machine learning” (and, later, “a proprietary machine learning system”)? Well, of course not. Mention of the miracle that is artificial intelligence is almost always a bit of prestidigitation to veil the poor quality of the original data. And anyway, no amount of “machine learning” can massage away the fact that the sample was too thin to support the sweeping conclusions drawn from it (“Only 1 in 5 Conservative voters (19.77%) would say No” — it says, to two decimal places, yet!) and was anyway drawn from a non-random population.

Exhausted yet? Then you may well find Calling Bullshit essential reading. Even if you feel you can trudge through verbal bullshit easily enough, this book will give you the tools to swim through numerical snake-oil. And this is important, because numbers easily slip past the defences we put up against mere words. Bergstrom and West teach a course at the University of Washington, from which this book is largely drawn, and hammer this point home in their first lecture: “Words are human constructs,” they say; “Numbers seem to come directly from nature.”

Shake off your naive belief in the truth or naturalness of the numbers quoted in news stories and advertisements, and you’re halfway towards knowing when you’re being played.

Say you diligently applied the lessons in Calling Bullshit, and really came to grips with percentages, causality, selection bias and all the rest. You may well discover that you’re now ignoring everything — every bit of health advice, every over-wrought NASA announcement about life on Mars, every economic forecast, every exit poll. Internet pioneer Jaron Lanier reached this point when he came up with Ten Arguments for Deleting Your Social Media Accounts Right Now. More recently the best-selling Swiss pundit Rolf Dobelli has ordered us to Stop Reading the News. Both deplore our current economy of attention, which values online engagement over the provision of actual information (as when, for instance, a review like this one gets headlined “These Two Books About Bad Data Will Break Your Heart”; instead of being told what the piece is about, you’re being sold on the promise of an emotional experience).

Bergstrom and West believe that public education can save us from this torrent of micro-manipulative blither. Their book is a handsome contribution to that effort. We’ve lost Lanier and Dobelli, but maybe the leak can be stopped up. This, essentially, is what the authors are about: they’re shoring up the Enlightenment ideal of a civic society governed by reason.

Underpinning this ideal is science, and the conviction that the world is assembled on a bedrock of fundamentally unassailable truths.

Philosophical nit-picking apart, science undeniably works. But in Science Fictions Stuart Ritchie, a psychologist based at King’s College London, shows just how contingent and gimcrack and even shoddy the whole business can get. He has come to praise science, not to bury it; nevertheless, his analyses of science’s current ethical ills — fraud, hype, negligence and so on — are devastating.

The sheer number of problems besetting the scientific endeavour becomes somewhat more manageable once we work out which ills are institutional, which have to do with how scientists communicate, and which are existential problems that are never going away whatever we do.

Our evolved need to express meaning through stories is an existential problem. Without stories, we can do no thinking worth the name, and this means that we are always going to prioritise positive findings over negative ones, and find novelties more charming than rehearsed truths.

Such quirks of the human intellect can be and have been corrected by healthy institutions at least some of the time over the last 400-odd years. But our unruly mental habits run wildly out of control once they are harnessed to a media machine driven by attention.  And the blame for this is not always easily apportioned: “The scenario where an innocent researcher is minding their own business when the media suddenly seizes on one of their findings and blows it out of proportion is not at all the norm,” writes Ritchie.

It’s easy enough to mount a defence of science against the tin-foil-hat brigade, but Ritchie is attempting something much more discomforting: he’s defending science against scientists. Fraudulent and negligent individuals fall under the spotlight occasionally, but institutional flaws are Ritchie’s chief target.

Reading Science Fictions, we see field after field fail to replicate results, correct mistakes, identify the best lines of research, or even begin to recognise talent. In Ritchie’s proffered bag of solutions are desperately needed reforms to the way scientific work is published and cited, and some more controversial ideas about how international mega-collaborations may enable science to catch up on itself and check its own findings effectively (or indeed at all, in the dismal case of economic science).

At best, these books together offer a path back to a civic life based on truth and reason. At worst, they point towards one that’s at least a little bit defended against its own bullshit. Time will tell whether such efforts can genuinely turn the ship around, or are simply here to entertain us with a spot of deckchair juggling. But there’s honest toil here, and a lot of smart thinking with it. Reading both, I was given a fleeting, dizzying reminder of what it once felt like to be a free agent in a factual world.

Pig-philosophy

Reading Science and the Good: The Tragic Quest for the Foundations of Morality by James Davison Hunter and Paul Nedelisky (Yale University Press) for the Telegraph, 28 October 2019

Objective truth is elusive and often surprisingly useless. For ages, civilisation managed well without it. Then came the sixteenth century, and the Wars of Religion, and the Thirty Years War: atrocious conflicts that robbed Europe of up to a third of its population.

Something had to change. So began a half-a-millennium-long search for a common moral compass: something to keep us from wringing each other’s necks. The 18th-century French philosopher Condorcet, writing in 1794, expressed the evergreen hope that empiricists, applying themselves to the study of morality, would be able “to make almost as sure progress in these sciences as they had in the natural sciences.”

Today, are we any nearer to understanding objectively how to tell right from wrong?

No. So say James Davison Hunter, a sociologist who in 1991 slipped the term “culture wars” into American political debate, and Paul Nedelisky, a recent philosophy PhD, both from the University of Virginia. For sure, “a modest descriptive science” has grown up to explore our foibles, strengths and flaws, as individuals and in groups. There is, however, no way science can tell us what ought to be done.

Science and the Good is a closely argued, always accessible riposte to those who think scientific study can explain, improve, or even supersede morality. It tells a rollicking good story, too, as it explains what led us to our current state of embarrassed moral nihilism.

“What,” the essayist Michel de Montaigne asked, writing in the late 16th century, “am I to make of a virtue that I saw in credit yesterday, that will be discredited tomorrow, and becomes a crime on the other side of the river?”

Montaigne’s times desperately needed a moral framework that could withstand the almost daily schisms and revisions of European religious life following the Protestant Reformation. Nor was Europe any longer a land to itself. Trade with other continents was bringing Europeans into contact with people who, while eminently businesslike, held to quite unfamiliar beliefs. The question was (and is), how do we live together at peace with our deepest moral differences?

The authors have no simple answer. The reason scientists keep trying to formulate one is the same reason the farmer in the Monty Python sketch tried teaching his sheep to fly: “Because of the enormous commercial possibilities should he succeed.” Imagine conjuring up a moral system that was common, singular and testable: world peace would follow in an instant!

But for every Jeremy Bentham, measuring moral utility against an index of human happiness to inform a “felicific calculus”, there’s a Thomas Carlyle, pointing out the crashing stupidity of the enterprise. (Carlyle called Bentham’s 18th-century utilitarianism “pig-philosophy”, since happiness is the sort of vague, unspecific measure you could just as well apply to animals as to people.)

Hunter and Nedelisky play Carlyle to the current generation of scientific moralists. They range widely in their criticism, and are sympathetic to a fault, but to show what they’re up to, let’s have some fun and pick a scapegoat.

In Moral Tribes (2014), Harvard psychologist Joshua Greene sings Bentham’s praises: “utilitarianism becomes uniquely attractive,” he asserts, “once our moral thinking has been objectively improved by a scientific understanding of morality…”

At worst, this is a statement that eats its own tail. At best, it’s Greene reducing the definition of morality to fit his own specialism, replacing moral goodness with the merely useful. This isn’t nothing, and is at least something which science can discover. But it is not moral.

And if Greene decided tomorrow that we’d all be better off without, say, legs, practical reason, far from faulting him, could only show us how to achieve his goal in the most efficient manner possible. The entire history of the 20th century should serve as a reminder that this kind of thinking — applying rational machinery to a predetermined good — is a joke that palls extremely quickly. Nor are vague liberal gestures towards “social consensus” comforting, or even welcome. As the authors point out, “social consensus gave us apartheid in South Africa, ethnic cleansing in the Balkans, and genocide in Armenia, Darfur, Burma, Rwanda, Cambodia, Somalia, and the Congo.”

Scientists are on safer ground when they attempt to explain how our moral sense may have evolved, arguing that morals aren’t imposed from above or derived from well-reasoned principles, but are values arising from reactions and judgements that improve the odds of group survival. There’s evidence to back this up, and much of it is charming. Rats play together endlessly; if the bigger rat wrestles the smaller rat into submission more than three times out of five, the smaller rat trots off in a huff. Hunter and Nedelisky remind us that capuchin monkeys will “down tools” if experimenters offer them a smaller reward than one they’ve already offered to other capuchins.

What does this really tell us, though, beyond the fact that somewhere, out there, is a lawful corner of necessary reality which we may as well call universal justice, and which complex creatures evolve to navigate?

Perhaps the best scientific contribution to moral understanding comes from studies of the brain itself. Mapping the mechanisms by which we reach moral conclusions is useful for clinicians. But it doesn’t bring us any closer to learning what it is we ought to do.

Sociologists since Edward Westermarck in 1906 have shown how a common (evolved?) human morality might be expressed in diverse practices. But over all this hangs the shadow of moral scepticism: the uneasy suspicion that morality may be no more than an emotive vocabulary without content, a series of justificatory fabrications. “Four legs good,” as Snowball had it, “two legs bad.”

But even if it were shown that no-one in the history of the world ever committed a truly selfless act, the fact remains that our mythic life is built, again and again, precisely around an act of self-sacrifice. Pharaonic Egypt had Osiris. Europe and its holdings, Christ. Even Hollywood has Harry Potter. Moral goodness is something we recognise in stories, and something we strive for in life (and if we don’t, we feel bad about ourselves). Philosophers and anthropologists and social scientists have lots of interesting things to say about why this should be so. The life sciences crew would like to say something, also.

But as this generous and thoughtful critique demonstrates, and to quite devastating effect, they just don’t have the words.

“Chuck one over here, Candy Man!”

Watching Ad Astra for New Scientist, 18 September 2019

It is 2033. Astronaut Roy McBride (Brad Pitt) is told that his father Clifford, the decorated space explorer, may still be alive, decades after he and the crew of his last mission fell silent in orbit around Neptune.

Clifford’s Lima mission was sent to the outer edges of the heliosphere – the bubble of space dominated by the solar wind – the better to scan the galaxy’s exoplanets for intelligent life. Now the Lima station’s antimatter generator is triggering electrical storms on distant Earth, and all life in the solar system is threatened.

McBride sets off on a secret mission to Mars. Once there, he is handed a microphone. He reads out a message to his dad. When he finishes speaking, he and the sound engineers pause, as if awaiting an instant reply from Clifford, the message’s intended recipient, somewhere in orbit around Neptune. What?

Eventually a reply is received (ten days later, presumably, given that Mars and Neptune are on average more than four billion kilometres apart). No-one wants to tell McBride what his dad said except the woman responsible for the Mars base (the wonderful Ruth Negga, looking troubled here, as well she might). The truths she shares about Roy’s father convince the audience, if not Roy himself, that the authorities are quite right to fear Clifford, quite right to seek a way to neutralise him, and quite right in their efforts to park his unwitting son well out of the way.

But Roy, at great risk to himself, and with actions that will cost several lives, is determined on a course for Neptune, and a meeting with his dad.

Ad Astra is a psychodrama about solipsistic fathers and abandoned sons, conducted in large part through monologues and close-ups of Brad Pitt’s face. And this is as well, since Pitt’s performance is easily the most coherent and thrilling element in a film that is neither.

Not, to be fair, that Ad Astra ever aspired to be exciting in any straightforward way. Pirates and space monkeys aside (yes, you read that right), Ad Astra is a serious, slow-burn piece about our desire to explore the world, and our desire to make meaning and connection, and how these contrary imperatives tear us apart in the vastness of the cosmic vacuum.

It ought to have worked.

The fact that it’s serious should have worked: four out of five of writer-director James Gray’s previous films were nominated for Cannes Film Festival’s Palme d’Or. Ad Astra itself was inspired by a Pulitzer Prize-winning collection of poems by Tracy K. Smith, all about gazing up at the stars and grieving for her father.

The film’s visuals and sound design should have worked. It draws inspiration for its dizzying opening sequence from the well-documented space-parachuting adventures of Felix Baumgartner in 2012, adopts elsewhere the visual style and sound design of Alfonso Cuarón’s 2013 hit film Gravity, and, when we get to Mars, tips its hat to the massy, reinforced concrete interiors of Denis Villeneuve’s 2017 Blade Runner 2049. For all that, it still feels original: a fully realised world.

The incidental details ought to have worked. There’s much going on in this film to suggest that everyone is quietly, desperately attempting to stabilise their mood, so as not to fly off the handle in the cramped, dull, lifeless interiors beyond Earth. The whole off-world population is seen casually narcotising itself: “Chuck one over here, Candy Man!” Psychological evaluations are a near-daily routine for anyone whose routine brings them anywhere near an airlock, and these automated examinations (shades of Blade Runner 2049 again) seem to be welcomed, as one imagines Catholic confession would be welcomed by a hard-pressed believer.

Even the script, though a mess, might have worked. Pitt turns the dullest lines into understated character portraits with a well-judged pause and the tremor of one highly trained facial muscle. Few other cast members get a word in edgewise.

What sends Ad Astra spinning into the void is its voiceover. Gray is a proven writer and director, and he has reduced Ad Astra’s plot to seven-or-so strange, surreal, irreducible scenes, much in the manner of his cinematic hero Stanley Kubrick. Like Kubrick, he’s kept dialogue to the barest minimum. Like Kubrick, he’s not afraid of letting a good lead actor dominate the screen. And then someone – can it really have been Gray himself? – had the bright idea to vitiate all that good work by sticking Roy McBride’s internal monologue over every plot point, like a string of Elastoplasts.

Consequently, the audience are repeatedly kicked out of the state of enchantment they need to inhabit if they’re going to see past the plot holes to the movie’s melancholy heart.

The devil of this film is that it fails so badly, even as everyone is working so conspicuously hard to make a masterpiece. “Why go on?” Roy asks in voiceover, five minutes before the credits roll. “Why keep trying?”

Why indeed?

Human/nature

Was the climate crisis inevitable? For the Financial Times, 13 September 2019

Everything living is dying out. A 2014 analysis of 3,000 species, confirmed by recent studies, reveals that half of all wild animals have been lost since 1970. The Amazon is burning, as is the Arctic.

An excess of carbon dioxide in the atmosphere, meanwhile, has not only played havoc with the climate but also reduced the nutrient value of plants by about 30 per cent since the 1950s.

And we’re running out of soil. In the US, it’s eroding 10 times faster than it’s being replaced. In China and India, the erosion is more than three times as bad. Five years ago, the UN Food and Agriculture Organization claimed we had fewer than 60 years of harvests left if soil degradation continued at its current rate.

Why have we waited until we are one generation away from Armageddon before taking such problems seriously?

A few suggestions: first, the environment is far too complicated to talk about — at least on the tangled information networks we have constructed for ourselves.

Second, we’re lazy and we’re greedy, like every other living thing on the planet — though because most of us co-operate with each other, we are arguably the least greedy and least lazy animals around.

Where we fall down is in our tendency to freeload on our future selves. “Discounting the future” is one of our worst habits, and one that in large part explains why we leave even important, life-and-death actions to the last minute.

Here’s a third reason why we’re dealing so late with climate change. It’s the weirdest, and maybe the most important of the three. It’s that we know we are going to die.

Thinking about environmental threats reminds us of our own mortality, and death is a prospect so appalling we’ll do anything — anything — to stop thinking about it.

“I used to wonder how people could stand the really demonic activity of working behind those hellish ranges in hotel kitchens, the frantic whirl of waiting on a dozen tables at one time,” wrote Ernest Becker in his Pulitzer-winning meditation The Denial of Death in 1973.

“The answer is so simple that it eludes us: the craziness of these activities is exactly that of the human condition. They are ‘right’ for us because the alternative is natural desperation.”

Psychologists inspired by Becker have run experiments to suggest it’s the terror of death that motivates consciousness and all its accomplishments. “It raised the pyramids in Egypt and razed the Twin Towers in Manhattan,” is the memorable judgment of the authors of 2015’s best-selling book The Worm at the Core.

This hardly sounds like good news. But it may offer us, if not a solution to the current crisis, at least a better, healthier and more positive way of approaching it.

No coping mechanism is infallible. We may be profoundly unwilling to contemplate our mortality, and to face up to the slow-burn, long-term threats to our existence, but that anxiety can’t ultimately be denied. Our response is to bundle it into catastrophes — in effect to construe the world in terms of crises to make everyday existence bearable.

Even positive visions of the future assume the necessity for cataclysmic change: why else do we fetishise “disruption”? “The concept of progress is to be grounded in the idea of the catastrophe,” as the German philosopher Walter Benjamin put it.

Yes, we could have addressed climate change much more easily in the 1970s, when the crisis wasn’t so urgent. But the fact is, we’re built for urgent action. A flood. A drought. A famine. We know where we are in a catastrophe. It may be that our best is yet to come.

Will our best be enough? Will we move quickly and coherently enough to save ourselves from the catastrophes attendant on massive climate change? That’s a hard question to answer.

The earliest serious attempts at modelling human futures were horrific. One commentator summed up Thomas Malthus’s famous 1798 Essay on the Principle of Population as “150 pages of excruciatingly detailed travellers’ accounts and histories . . . of bestial life, sickness, weakness, poor food, lack of ability to care for young, scant resources, famine, infanticide, war, massacre, plunder, slavery, cold, hunger, disease, epidemics, plague, and abortion.”

Malthus, an English cleric driven up the wall by positive Enlightenment thinkers such as Godwin and Condorcet, set out to remind everybody that people were animals. Like animals, their populations were bound eventually to exceed the available food supply. It didn’t matter that they dressed nicely or wrote poetry. If they overbred, they would starve.

We’ve been eluding this Malthusian trap for centuries, by bolting together one cultural innovation after another. No bread? Grow soy. No fish? Breed insects. Eventually, on a finite planet, Malthus will have his revenge — but when?

The energy thinker Vaclav Smil’s forthcoming book Growth studies the growth patterns of everything from microorganisms to mammals to entire civilisations. But the Czech-Canadian academic is chary about breaking anything as complicated as humanity down to a single metric.

“In the mid-1980s,” he recalls, “people used to ask me, when would the Chinese environment finally collapse? I was writing about this topic early on, and the point is, it was never going to collapse. Or it’s constantly collapsing, and they’re constantly fixing parts of it.”

Every major city in China has clean water and improving air quality, according to Smil. A few years ago people were choking on the smog.

“It’s the same thing with the planet,” he says. “Thirty years ago in Europe, the number-one problem wasn’t global warming, it was acid rain. Nobody mentions acid rain today because we desulphurised our coal-fired power plants and supplanted coal with natural gas. The world’s getting better and worse at the same time.”

Smil blames the cult of economics for the way we’ve been sitting on our hands while the planet heats up. “The fundamental problem is that economics has become so divorced from fundamental reality,” he says.

“We have to eat, we have to put on a shirt and shoes, our whole lives are governed by the laws that govern the flows of energy and materials. In economics, though, everything is reduced to money, which is only a very imperfect measure of those flows. Until economics returns to the physical rules of human existence, we’ll always be floating in the sky and totally detached from reality.”

Nevertheless, Smil thinks we’d be better off planning for a good life in the here and now, and this entails pulling back from our current levels of consumption.

“But we’re not that stupid,” he says, “and we may have this taken care of by people’s own decision making. As they get richer, people find that children are very expensive, and children have been disappearing everywhere. There is not a single European country now in which fertility will be above replacement level. And even India is now close to the replacement rate of 2.1 children per family.”

So are we out of the tunnel, or at the end of the line? The brutal truth is, we’ll probably never know. We’re not equipped to know. We’re too anxious, too terrified, too greedy for the sort of certainty a complex environment is simply not going to provide.

Now that we’ve spotted this catastrophe looming over our heads, it’s with us for good. No one’s ever going to be able to say that it’s truly gone away. As Benjamin tersely concluded, “That things ‘just go on’ is the catastrophe.”