An explosion in a radioactive cotton candy factory

 

Reading Under Alien Skies by Phil Plait for New Scientist, 7 June 2023 

You may know him better as “the Bad Astronomer”, whose blog demolishes misconceptions and frauds about the cosmos. Now the tireless Phil Plait is taking us on a journey, to the Moon and beyond, past Mars and the Belt, past Saturn and Pluto, to other stars, to binaries and clusters, to nebulae and to the end of all things, as he sends us spiralling past the Schwarzschild radius of a black hole. Throughout (and with a few tiny exceptions), he and we see only what poor, bare forked humanity is equipped by nature to see. This is the cosmos as we would feel, hear and see it. Some measure of security and comfort is provided by spaceships and starships of ever-increasing unlikelihood but, deep down, we’re on our own out here, trembling at the magnificence of it all.

This artful premise gives Plait licence to discuss what our real future in the solar system might look like, while at the same time exploring some startling stellar exotica. (Finally, I understand the Orion Nebula!)

In the final chapters, on star clusters, nebulae and black holes, our suspension of disbelief starts to come unstuck. This is partly to do with the fact that there’s nothing for us to smell, hear, walk on or trip over. (By contrast, Plait’s evocations of our own solar system are superbly sensual.)

Sooner or later we will be overwhelmed by a universe a lot bigger than we are. Here Plait describes our likely response as we witness the birth of stars:

“Your mind tries to comprehend what you’re seeing, churning out analogies rapid-fire — it’s like an explosion in a radioactive cotton candy factory, like being suspended in a frozen fireworks display, like flying through a million auroras — but in the end you fail. Humans never evolved to comprehend magnificence on a scale like this.”

Some of the grandest wonders in his arsenal are simply invisible to the naked eye. Just now and again, then, the valiant captain of our imaginary starship tweaks the viewscreens, showing us things we wouldn’t have seen by just leaning out the window; and on those rare occasions we may reasonably begin to wonder: what on earth are we doing out here? Why did we come all this way, just to watch a video? Couldn’t the same vérité have been achieved sitting in front of a 5K screen in our pyjamas?

You could argue that Plait should have stuck to his guns, and even in the chapter on black holes, described only what human beings would see with their own eyes. But this is a game we abandoned centuries ago. Our machines have better access to the world than we do, and this has been true at least since Dutch lens grinders invented the telescope.

Much more telling, I think: virtually every wonder in this book is to do with scale. Bigger, brighter, heavier things dominate this account. But where are the stranger things? Is there anything in this account as abidingly weird as — oh, I don’t know — a tree? A house cat? A plate of fish and chips?

Earth beats the rest of the known cosmos hands down for complexity and change. And, yes, there may well be other biomes out there — but Plait can’t just invent them out of whole cloth. That would be fantasy, and this is a book rooted, however speculatively, in the known.

Plait is an able, resourceful and, on occasion, downright visionary guide to the far reaches of outer space. If this book leaves a few readers feeling very slightly disappointed, it’s not Plait who fell short; it’s the cosmos.

For 300 exhilarating pages, short-lived, fragile and under-equipped readers have relied upon imaginary technology to get them places they don’t belong. It is no bad thing if a few of them close the book with a renewed feeling of reverence for their own world.

A pile of dough

Reading Is Maths Real? by Eugenia Cheng, 17 May 2023

Let’s start with an obvious trick question: why does 1 plus 1 equal 2? Well, it often doesn’t. Add one pile of dough to one pile of dough and you get, well, one pile of dough.
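Cheng’s trick question can be made concrete in a few lines of Python. This is my toy illustration, not an example from her book; the class and the gram figures are invented, and the point is only that what “+” means depends on the world you define it in:

```python
# A toy world in which one pile plus one pile is one pile:
# combining two piles of dough merges them into a single, bigger pile.
# (Illustrative sketch; the class and numbers are invented, not Cheng's.)

class Pile:
    def __init__(self, grams):
        self.grams = grams

    def __add__(self, other):
        # "Adding" piles merges them into one new pile.
        return Pile(self.grams + other.grams)

left, right = Pile(500), Pile(500)
merged = left + right

# We started with two piles and ended with one --
# though the mass, of course, still adds up in the usual way.
assert isinstance(merged, Pile)
assert merged.grams == 1000
```

Integers model one situation; merging piles models another. Neither is wrong; they are simply different logical worlds.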

This looks like a twisty and trivial point, but it isn’t. Mathematics describes the logical operations of logical worlds, but you can dream up any number of those, and you’re going to need many more than one of them to even come close to modelling the real world.

“Deep down,” writes mathematician Eugenia Cheng, “maths isn’t about clear answers, but about increasingly nuanced worlds in which we can explore different things being true.”

Cheng wants the reader to ask again all those “stupid” questions they asked about mathematics as kids, and so discover what it feels like to be a real mathematician. Sure enough, mathematicians turn out to be human beings, haunted by doubts, saddled with faulty memories, blessed with unsuspected resources of intuition, guided by imagination. Mathematics is a human pursuit, depicted here from the inside.

We begin in the one-dimensional world of real numbers, and learn in what kinds of worlds numbers can be added together in any order (“commutativity”) and operations grouped together any-old-how (“associativity”). Imaginary numbers (multiples of the square root of minus one) add a second dimension to our mathematical world, and sure enough there are now patterns we can see that we couldn’t see before, “when we were all squashed into one dimension”.

Keep adding dimensions. (The more we add to our mathematical universe, however, the less we can rely on our visual imagination, and the more we come to rely on algebra.) Complex numbers (which have a real part and an imaginary part) give us the field of complex analysis, on which modern physics depends.
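The jump to two dimensions can likewise be sketched in Python, whose built-in complex type makes the point directly. Again, this is my sketch rather than anything in the book, and the particular values are invented: multiplying by i rotates a point 90 degrees about the origin, a pattern with no counterpart on the one-dimensional real line, while commutativity and associativity of addition survive intact in the bigger world.

```python
# Python's complex numbers: the "second dimension" made concrete.
# (Illustrative sketch; the specific values are invented.)

z = 3 + 4j            # a point in the complex plane
rotated = z * 1j      # multiplying by i rotates (3, 4) to (-4, 3)
assert rotated == -4 + 3j

# The familiar laws still hold in two dimensions:
a, b, c = 1 + 2j, 3 - 1j, -2 + 0.5j
assert a + b == b + a                # commutativity
assert (a + b) + c == a + (b + c)    # associativity
```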

And we don’t stop there. Cheng’s object is not to teach us maths, but to show us what we don’t know; we eventually arrive at a terrific description of mathematical braids in higher dimensions that at the very least we might find interesting, even if we don’t understand it. This is the generous impulse driving this book, and it’s splendidly realised.

Alas, Is Maths Real?, not content with being a book about what it is like to be a mathematician, also wants to be a book about what it is like to be Eugenia Cheng, and success, in this respect, leads to embarrassment.

We’ll start with the trivia and work up.

There’s Cheng’s inner policeman, reminding her, as she discusses the role of pictures in mathematics, “to acknowledge that this is thus arguably ableist and excludes those who can’t see.”

There are narcissistic exclamations that defy parody, as when Cheng explains that “the only thing I want everyone to care about is reducing human suffering, violence, hunger, prejudice, exclusion and heartbreak.” (Good to know.)

There are the Soviet-style political analogies for everything. Imaginary and complex numbers took a while to be accepted as numbers because, well, you know people: “some people lag behind, perhaps accepting women and black people but not gay people, or maybe accepting gay, lesbian and bisexual people but not transgender people.”

A generous reader may simply write these irritations off, but then Cheng’s desire to smash patriarchal power structures with the righteous hammer of ethnomathematics (which looks for “other types of mathematics” overlooked, undervalued or suppressed by the colonialist mainstream) tips her into some depressingly hackneyed nonsense. “Contemporary culture,” she tells us, “is still baffled by how ancient cultures were able to do things like build Stonehenge or construct the pyramids.”

Really? The last time I looked, the answers were (a) barges and (b) organised labour.

Cheng tells us she is often asked how she comes up with explanations and diagrams that bring clarity “to various sensitive, delicate, nuanced and convoluted social arguments.” Her training in the discipline of abstract mathematics, she explains, “makes those things come to me very smoothly.”

How smoothly? Well, quite early in the book, “intolerance of intolerance” becomes “tolerance” through a simple mathematical operation — a pratfall in ethics that makes you wonder what kind of world Cheng lives in. Cheng’s abstract mathematics may well be able to solve her real-world problems — but I suspect most other people’s worlds feel a good deal less tractable.

A lawyer scenting blood

Reading Unwired by Gaia Bernstein for New Statesman, 15 May 2023

In 2005, the journal Obesity Research published a study that, had we but known it, told us everything we needed to know about our coming addiction to digital devices.

The paper, “Bottomless Bowls: Why Visual Cues of Portion Size May Influence Intake” was about soup. Researchers led by Brian Wansink of Cornell University invited volunteers to lunch. One group ate as much soup as they wanted from regular bowls. The other ate from bowls that were bolted to the table and refilled automatically from below. Deprived of the “stopping signal” of an empty bowl, this latter group ate 73 per cent more than the others — and had no idea that they had over-eaten.

It’s a tale that must haunt the dreams of Aza Raskin, the man who invented, then publicly regretted, “infinite scroll”. That’s the way mobile phone apps (from Facebook to Instagram, Twitter to Snapchat) provide endless lists of fresh content to the user, regardless of how much has already been consumed.

Gaia Bernstein, a law professor at Seton Hall, includes infinite scroll in her book’s catalogue of addicting smart-device features. But her book is as much about what these devices don’t do. For instance, in his 2022 book Stolen Focus, Johann Hari wonders why Facebook never tells you which of your friends are nearby and up for a coffee. Well, the answer’s obvious enough: because lonely people, self-medicating with increasing quantities of social media, are Facebook’s way of making money.

What do we mean when we say that our mobile phones and tablets and other smart devices are addicting?

The idea of behavioural addiction was enshrined in DSM-5, the manual of mental disorders issued by the American Psychiatric Association, in 2013. DSM-5 is a bloated beast, and yet its flaky-sounding “Behavioral Addictions” — which, on the face of it, could make a mental disorder of everything we like to do — have proved remarkably robust, as medicine reveals how addictions, compulsions and enthusiasms share the same neurological pathways. You can addict humans (and not just humans) to pretty much anything. All you need to do is weaponise the environment.

And the environment, according to Bernstein’s spare, functional and frightening account, is most certainly weaponised. Teenagers, says Bernstein, spend barely a third of the time partying that they did in the 1980s, and the number of teens who get together with their friends halved between 2000 and 2015. If ever there was a time to market a service to lonely people by making them more lonely, it’s now.

For those of us who want to sue GAMA (Google, Amazon, Meta, Apple) for our children’s lost childhood, galloping anxiety, poor impulse control, obesity, insomnia and raised suicide risk, the challenge is to demonstrate that it’s screentime that’s done all this damage to how they feel, and how they behave. And that, in an era of helicopter-parenting, is hard to do. danah boyd’s 2014 book It’s Complicated shows how difficult it’s going to be to separate the harms inflicted by little Johnny’s iPhone from all the benefits little Johnny enjoys. To hear boyd tell it, teenagers “obsessed” with social media are simply trying to recreate, for themselves and each other, a social space denied them by anxious parents, hostile authorities, and a mass media bent on exaggerating every conceivable out-of-doors danger.

The Covid pandemic has only exacerbated the stay-at-home, see-no-one trend among young people. Children’s average time online doubled from three to six hours during lockdown. It used to be that four per cent of children spent more than eight hours a day in front of a smart screen. Now over a quarter of them do.

Nor have we merely inherited this dismal state of affairs; we’ve positively encouraged it, stuffing our schools with technological geegaws in the fond and (as it turns out) wildly naive belief that I.T. will improve and equalise classroom performance. (It doesn’t, which is why Silicon Valley higher-ups typically send their children to Waldorf schools, which use chalk, right up until the eighth grade.)

Bernstein, who regularly peppers an otherwise quite dry account with some eye-popping personal testimony, recalls meeting one mum whose son was set to studying history through a Roblox game mode called Assassin’s Creed Odyssey (set in ancient Greece). “Since then, whenever she asks him to get off Roblox, he insists it is homework.”

Bernstein believes there’s more to all this than a series of unfortunate events. She thinks the makers of smart devices knew exactly what they were doing, as surely as the tobacco companies knew that the cigarettes they manufactured caused cancer.

Bernstein reckons we’re at a legal tipping point: this is her playbook for making GAMA pay for addicting us to glass.

Here’s what we already know about how companies respond to being caught out in massive wrong-doing.

First, they ignore the problem. (In 2018 an internal Facebook presentation warned: “Our algorithm exploits the human brain’s attraction to divisiveness… If left unchecked [it would feed users] more and more divisive content to gain user attention & increase time on the platform.” Mark Zuckerberg responded by asking his people “not to bring something like that to him again”.)

Then they deny there’s a problem. Then they go to war with the science, disputing critical studies and producing their own. Then they fend off public criticism — and place responsibility on the consumer — by offering targeted solutions. (At least the filter tips added to cigarettes were easy to use. Most “parental controls” on smart devices are so cumbersome and inaccessible as to be unusable.) Finally, they offer to create a system of self-regulation — by which time, Bernstein reckons, you will have won, so long as you have proven that the people you’re going after intended, all along, to addict their customers.

You might, naively, imagine that this matter rests upon the science. It doesn’t, and Bernstein’s account of the screentime science wars is quite weak — a shallow confection built largely of single studies.

The scientific evidence is stronger than Bernstein makes it sound, but there’s still a problem: it’ll take a generation to consolidate. There are other, better ways to get at the truth in a timely manner; for instance, statistics, which will tell you that we have the largest ever recorded epidemic of teenage mental health problems, whose rising curves correlate with terrifying neatness with the launch of various social media platforms.

Bernstein is optimistic: “Justifying legal interventions,” she says, “is easier when the goal is to correct a loss of autonomy”, and this, after all, is the main charge she’s laying at GAMA’s door: that these companies have created devices that rob us of our will, leaving us ever more civically and psychologically inept, the more we’re glued to their products.

Even better (at least from the point of view of a lawyer scenting blood), we’re talking about children. “Minors are the Achilles heel,” Bernstein announces repeatedly, and with something like glee. Remember how the image of children breathing in their parents’ second-hand smoke broke big tobacco? Well, just extend the analogy: here we have a playground full of kids taking free drags of Capstans and Players No. 6.

Unwired is not, and does not aspire to be, a comprehensive account of the screen-addiction phenomenon. It exists to be used: an agenda for social change through legal action. It is a knife, not a brush. But it’ll be of much more than academic value to those of us whose parenting years were overshadowed by feelings of guilt, frustration and anxiety, as we fought our hopeless battles, and lost our children to TikTok and Fortnite.

On not being a horrible person

Reading The Human Mind by Paul Bloom for New Scientist, 11 May 2023

Inspired, he tells us, by The Origin of the Universe, John Barrow’s 1994 survey of what was then known about cosmology, the Canadian-American psychologist Paul Bloom set about writing an introductory tome of his own: a brief yet comprehensive guide to the human mind.

Emulating Barrow’s superb survey has been hard because, as Bloom cheekily points out, “the mysteries of space and time turn out to be easier for our minds to grasp than those of consciousness and choice.”

The first thing to say — though hardly the most interesting — is that Bloom nevertheless succeeds, covering everything from perception and behaviour to language and development; there’s even a small but very worthwhile foray into abnormal psychology. It’s an account that is positive, but never self-serving. Failures to reproduce some key studies, the field’s sometimes scandalous manipulation of statistics, and the once prevailing assumption that undergrad volunteers could accurately represent the diversity of the entire human species are serious problems, dealt with seriously.

Of course Bloom does more than simply set out the contents of the stall (with the odd rotten apple here and there); he also explores psychology’s evolving values. He recalls his early behaviourist training, in a climate hostile to (then rather woolly) questions about consciousness. “If we were asked to defend our dismissal of consciousness,” he recalls, “we would point out that intelligence does not require sentience.”

Intelligence is no longer the field’s only grail, and consciousness is now front and centre in the science of the mind. This is not only a technical advance; it’s an ethical one. In 1789 Jeremy Bentham asked whether the law could ever refuse its protection to “any sensitive being”, and pointed out that “The question is not, Can [certain beings] reason?, nor Can they talk? but, Can they suffer?”

Suffering requires consciousness, says Bloom; understanding one enables us to tackle the other; so the shift in interest to consciousness itself is a welcome and humanising move.

This strong belief in the humanitarian potential of psychology allows Bloom to defend aspects of his discipline that often discomfort outside observers. He handles issues of environmental and genetic influences on the mind very well, and offers a welcome and robust defence of Alfred Binet’s 1905 invention, the measure of general intelligence or “intelligence quotient”. Bloom shows that the IQ test is as robust a metric as anything in social science. We know that a full half of us score less than 100 on that test; should this knowledge not fill us with humility and compassion? (Actually our responses tend to be more ambiguous. Bloom points out that Nazi commentators hated the idea of IQ because they thought Jews would score better than they would.)

Bloom is concerned to demonstrate that minds do more than think. The privileging of thinking over feeling and intuiting and suffering is a mistake. “A lot depends on what is meant by ‘rational’,” Bloom writes. If you’re stepping outside and it’s raining and you don’t want to get wet, it’s rational to bring an umbrella. But rationality defined in this manner is separate from goodness. “Kidnapping a rich person’s child might be a rational way to achieve the goal of getting a lot of money quickly,” Bloom observes, “so long as you don’t have other goals, such as obeying the law and not being a horrible person.”

Bloom’s ultimate purpose is to explain how a robustly materialistic view of the mind is fully compatible with the existence of choice and morality and responsibility. This middle-of-the-road approach may disappoint intellectual storm-chasers, but the rest of us can be assured of an up-to-the-minute snapshot of the field, full of unknowns and uncertainties, yes, and speculations, and controversies — but guided by an ever-more rounded idea of what it is to be human.

The mind unlocked

Reading The Battle for Your Brain by Nita Farahany for New Scientist, 19 April 2023

Iranian-American ethicist and lawyer Nita Farahany is no stranger to neurological intervention. She has sought relief from her chronic migraines in “triptans, anti-seizure drugs, antidepressants, brain enhancers, and brain diminishers. I’ve had neurotoxins injected into my head, my temples, my neck, and my shoulders; undergone electrical stimulation, transcranial direct current stimulation, MRIs, EEGs, fMRIs, and more.”

Few know better than Farahany what neurotechnology can do for people’s betterment, and this lends weight to her sombre and troubling account of a field whose speed of expansion alone should give us pause.

Companies like Myontec, Athos, Delsys and Noraxon already offer electromyography-generated insights to athletes and sports therapists. Control Bionics sells NeuroNode, a wearable EMG device for patients with degenerative neurological disorders, enabling them to control a computer, tablet, or motorised device. Neurable promises “the mind unlocked” with its “smart headphones for smarter focus.” And that’s before we even turn to the fast-growing interest in implantable devices; Synchron, Blackrock Neurotech and Elon Musk’s Neuralink all have prototypes in advanced stages of development.

Set aside the legitimate medical applications for a moment; Farahany is concerned that neurotech applications that used to let us play video games, meditate, or improve our focus have opened the way to a future of brain transparency “in which scientists, doctors, governments, and companies may peer into our brains and minds at will.”

Think it can’t be done? Think again. In 2017, a research team led by UC Berkeley computer scientist Dawn Song reported an experiment in which volunteers used a neural interface to control a video game. As they played, the researchers inserted subliminal images into the game and watched for unconscious recognition signals. This game of neurological Battleships netted them one player’s credit card PIN — and their home address.

Now Massachusetts-based Brainwave Science is selling a technology called iCognative, which purports to extract information from people’s brains. Or at least, suspects shown pictures related to a crime cannot help but recognise whatever they happen to recognise — a murder weapon, for example. Emirati authorities have already successfully prosecuted two cases using this technology.

This so-called “brain fingerprinting” technique is as popular with governments (Bangladesh, India, Singapore, Australia) as it is derided by many scientists.

More worrying are the efforts of companies, in the post-Covid era, to use neurotech in their continuing campaign to control the home-working environment. So-called “bossware” programmes already take regular screenshots of employees’ work, monitor their keystrokes and web usage, and photograph them at (or not at) their desks. San Francisco bioinformatics company Emotiv now offers to help manage your employees’ attention with its MN8 earbuds. These can indeed be used to listen to music or participate in conference calls — and, Emotiv claims, with just two electrodes, one in each ear, they can also record employees’ emotional and cognitive states in real time.

It’ll come as no surprise if neurotech becomes a requirement in modern workplaces: no earbuds, no job. This sort of thing has happened many times already.

“As soon as [factory] workers get used to the new system their pay is cut to the former level,” complained Vladimir Lenin in 1912. “The capitalist attains an enormous profit for the workers toil four times as hard as before and wear down their nerves and muscles four times as fast as before.”

Six years later, he approved funding for a Taylorist research institute. Say what you like about industrial capitalism, its logic is ungainsayable.

Farahany has no quick fixes to offer for this latest technological assault on the mind — “the one place of solace to which we could safely and privately retreat”. Her book left me wondering what to be more afraid of: the devices themselves, or the glee with which powerful institutions seize upon them.

“And your imaginations would again run out of room…”

Reading The Beetle in the Anthill and The Waves Extinguish the Wind by Arkady and Boris Strugatsky. For The Times, 18 April 2023 

In Arkady and Boris Strugatsky’s The Beetle in the Anthill, zoopsychologist Lev Abalkin and his alien companion, a sentient canine “bighead” called Puppen-Itrich, are sent to the ruined and polluted planet Hope to find out what happened to its humanoid population. Their search leads them through an abandoned city to a patch of tarmac, which Puppen insists is actually an interdimensional portal. Lev wonders what new world this portal might lead to.

‘“Another world, another world…” grumbles Puppen. “As soon as you made it to another world, you’d immediately begin to remake it in the image of your own. And your imaginations would again run out of room, and then you’d look for another world, and you’d begin to remake that one, too.”’

Futility sounds like a funny sort of foundation for an enjoyable book, but the Strugatskys wrote a whole series of them, and they amount to a singular triumph. Fresh translations of the final two “Noon universe” books are being published this month.

Arkady Strugatsky was born in Batumi, Georgia, in 1925. His kid brother Boris, born in Leningrad in 1933, outlived him by nearly twenty years, though without his elder brother to bounce ideas off, he found little to write about. The brothers dominated Soviet science fiction throughout the 1970s. Their earliest works toed the socialist-realist line and featured cardboard heroes who (to the reader’s secret relief) eventually sacrificed themselves for the good of Humanity. But their interest in people became too much for them, and they ended up writing angst-ridden masterpieces like Roadside Picnic (which everyone knows, because Andrei Tarkovsky’s film Stalker is based on it) and Lame Fate/Ugly Swans (which no-one knows, though Maya Vinokour’s cracking English translation came out in 2020). Far too prickly to be published in the Soviet Union, their best work circulated in samizdat and (often unauthorised) translation.

The stories and novels of their “Noon universe” series ask what humans and aliens, meeting among the stars, would get up to with each other. Ordinary conflict is out of the question, since spacefaring civilisations have access to infinite resources. (The Noon universe is a techno-anarchist utopia, as is Iain Banks’s Culture, as is Star Trek’s Federation, and all for the same unassailable reason: there’s nothing to stop such a strange and wonderful society from working.)

The Strugatskys assume that there’s only one really grand point to life as a technically advanced species — and that is to see to the universe’s well-being by nurturing sentience, consciousness, and even happiness.

To which you can almost hear Puppen grumble: Yes, but what sort of consciousness are you talking about? What sort of happiness are you promoting? In The Waves Extinguish the Wind (originally translated into English as The Time Wanderers), as he contemplates the possibility that humans are themselves being “gardened” by a superior race dubbed “Wanderers”, alien-chaser Toivo Glumov complains, “Nobody believes that the Wanderers intend to do us harm. That is indeed extremely unlikely. It’s something else that scares us! We’re afraid that they will come and do good, as they understand it!”

Human beings and the Wanderers (whose existence can only ever be inferred, never proved) are the only sentient species who bother with outer space, and stick their noses into what’s going on among people other than themselves. And maybe Puppen is right; maybe such cosmic philanthropy boils down, in the end, to nothing more than vanity and overreach.

By the time of these last two novels, the Wanderers’ interference in human affairs is glaring, though it’s still impossible to prove.

In The Beetle in the Anthill Maxim Kammerer — a former adventurer, now a prominent official — is set on the trail of Lev Abalkin, a rogue “progressor” who is heading back to Earth.

Progressors travel from planet to planet and go undercover in “backward” societies to promote their technical and social development. But why shouldn’t Abalkin come home for a bit? He’s spent fifteen years doing a job he never wanted to do, in the remotest outposts, and he’s just about had enough. “Damn it all,” Kammerer complains, “would it really be so surprising if he had finally run out of patience, given up on COMCON and Headquarters, abandoned his military discipline, and come back to Earth to sort things out?”

By degrees, Kammerer and the reader discover why Kammerer’s bosses are so afraid of Abalkin’s return: he may, quite unwittingly, be a “Wanderer” agent.

So an individual’s ordinary hopes and frustrations play out against a vast, unsympathetic realpolitik. This is less science fiction than spy fiction — The Spy Who Came in From the Cold against a cosmic backdrop. And it’s tempting, though reductive, to observe the whole “Noon universe” through a Cold War lens. Boris himself says in his afterword to The Beetle…:

“We were writing a tragic tale about the fact that even in a kind, gentle, and just world, the emergence of a secret police force (of any type, form, or style) will inevitably lead to innocent people suffering and dying.”

But the “Noon universe” is no bald political parable, and it’s certainly not satire. Rather, it’s an unflinching working-out of what Soviet politics would look like if it did fulfil its promise. It’s a philosophical solvent, stripping away our intellectual vanities — our ideas of manifest destiny, our “outward urge” and all the rest — to expose our terrible littleness, and tremendous courage, in the face of a meaningless universe.

In their final novel, The Waves Extinguish the Wind — assembled from fictional documents, reports, letters, transcripts and the like — we follow a somewhat older and wiser Maxim Kammerer as he oversees the heartbreaking efforts of his protégé Toivo Glumov to prove the existence of the Wanderers once and for all. It’s an odyssey (involving peculiar disappearances, bug-eyed monsters and a bad-tempered wizard) that would be farcical, were it not tearing Glumov’s life to pieces.

Kammerer reckons Glumov is a fanatic. Does it even matter that humans are being tended and “progressed” by some superior race of gardeners? “After all,” Kammerer says to his boss, Excellentz, “what’s the worst we can say about the Wanderers?” He’s thinking back to the planet called Hope, and that strange square of tarmac: “They saved the population of an entire planet! Several billion people!”

“Except they didn’t save the population of the planet,” Excellentz points out. “They saved the planet from its population! Very successfully, too… And where the population has gone — that’s not for us to know.”

The Value of Psychotic Experience

Reading The Best Minds: A story of friendship, madness and the tragedy of good intentions by Jonathan Rosen. For the Telegraph, 3 April 2023

This is the story of the author’s lifelong friendship with Michael Laudor: his neighbour growing up in 1970s Westchester County; his rival at Yale; and his role-model as he abandoned a lucrative job for a life of literary struggle. Through what followed — debilitating schizophrenia; law school; and national celebrity as he publicised the plight of people living with mental illness — Laudor became something of a hero for Jonathan Rosen. And in June 1998, in the grip of yet another psychotic episode, Laudor stabbed his pregnant girlfriend to death.

Her name was Caroline Costello and, says Rosen, she would not have been the first person to ascribe everything in Laudor’s disintegration, “from surface tremors to elliptical apocalyptic utterances, to the hidden depths of a complex soul.”

It will take Rosen over 500 pages to unpick Costello and Laudor’s tragedy, but he starts (and so, then, will we) with the Beats — that generation of writers who, he says, “released madness like a fox at a hunt, then rushed after — not to help or heal but to see where it led, and to feel more alive while the chase was on.”

As a young man, the poet Allen Ginsberg (whose Beat poem “Howl” gives this book its title) gave permission for his mother’s lobotomy. He spent the rest of his life atoning for this, “spinning the culture around him,” says Rosen, “into alignment with his mother’s psychosis”. By the summer of 1968 the Esalen Institute in Big Sur was sponsoring events under the heading “The Value of Psychotic Experience”.

To the amateur dramatics of the Beat generation (who declared mental illness a myth) add the hypocrisies of critics like Paul de Man (who proved texts mean whatever we want them to mean) and Frantz Fanon (for whom violence was a cleansing force with a healing property) and the half-truths of anti-psychiatrists like Félix Guattari, who hid his clinic’s use of ECT to bolster his theory that schizophrenia was a disease of capitalist culture. Stir in the naiveties of the community health movement (which judged all asylums prisons), and policies ushered in by Jack Kennedy’s Community Mental Health Act of 1963 (“predicated,” says Rosen, “on the promise of cures that did not exist, preventions that remained elusive, and treatments that only work for those who were able to comply”); and you will begin to understand the sheer enormity of the doom awaiting Laudor and those he loved, once the 1980s had “backed up an SUV” over the ruins of America’s mental health provision.

This is a tragedy that enters wearing the motley of farce: on leaving Yale, Laudor became a management consultant, hired by the multinational firm Bain & Co. Rosen reckons Laudor got the job by talking with authority even when he didn’t know what he was talking about; “That, in fact, was why they had hired him.”

By the time Laudor enters Yale Law School, however, his progressive disintegration is clear enough to the reader (if not to the Dean of the school, Guido Calabresi, besotted with the way an understanding of mental health could, in Rosen’s words, “undermine the authority of courts, laws, facts, and judges, by exposing the irrational nature of the human mind”).

Soon Michael is being offered a million dollars for his memoir and another million for the movie rights. He’s the poster child for every American living with mental illness, and at the same time, he’s an all-American success story: proof that the human spirit can win out over psychiatric neglect.

Only it didn’t.

No one knows what to do about schizophrenia. The treatments don’t work, except sometimes they do, and when they do, no one can really say why.

Rosen’s book is a devastating attack on a generation that refused to look this hard truth in the eye, and turned it, instead, into some sort of flattering sociopolitical metaphor, and in so doing deprived desperate people of care.

Rosen’s book is the mea culpa of a man who now understands that “the revolution in consciousness I hoped would free my mind… came at the expense of people whose mental pain I could not begin to fathom.”

It’s the darkest of literary triumphs, and the most gripping of unbearable reads.

The monster comes from outside

Reading To Battersea Park by Philip Hensher for The Spectator, 1 April 2023

We never quite make it to Battersea Park. By the time the narrator and his husband reach its gates, it’s time for them, and us, to return home.

The narrator is a writer, living just that little bit too far away from Battersea Park, inspired by the eeriness of the Covid lockdown regime but also horribly blocked. All kinds of approaches to fiction beckon to him in his plight, and we are treated to not a few of them here.

Each section of this short novel embodies a literary device. We begin, maddeningly, in “The Iterative Mood” (“I would have”, “She would normally have”, “They used to…”) and we end in “Entrelacement”, with its overlapping stories offering strange resolutions to this polyphonous, increasingly surreal account of the Lockdown uncanny. Every technique the narrator employs is an attempt to witness strange times using ordinary words.

Hensher didn’t just pluck this idea out of the void. Fiction has a nasty habit of pratfalling again and again at the feet of a contemporary crisis. Elizabeth Bowen’s The Heat of the Day (the Blitz) dribbles away into an underpowered spy thriller; Don DeLillo’s Falling Man (the September 11 attacks) only gets going in the last few dozen pages, when the protagonist quits New York for the poker-tournament circuit. Mind you, indirection may prove to be a winning strategy of itself. The most sheerly enjoyable section of To Battersea Park is a “hero’s journey” set in post-apocalyptic Whitstable. Hensher nails perfectly the way we distance ourselves from a crisis by romanticising it.

Milan Kundera wrote about this — about how “the monster comes from outside and is called History” — impersonal, uncontrollable, incalculable, incomprehensible and above all inescapable.

In To Battersea Park, Hensher speaks to the same idea, and ends up writing the kind of book Kundera wrote: one that appeals, first of all — almost, I would say, exclusively — to other writers.

In the middle of the book there’s a short scene in which a journalist interviews a novelist called Henry Ricks Bailey, and Bailey says:

“When people talk about novels, if they talk at all, they talk about the subject of those novels, or they talk about the life of the person who wrote it. This is a wonderful book, they say. It’s about a couple who fall in love during the Rwandan Genocide, they say… It’s as if all one had to do to write a novel is pick up a big box of stuff in one room and move it into the next.”

This (of course, and by design) borders on the infantile: the writer boo-hooing because the reader has had the temerity to beg a moral.

Hensher is more circumspect: he understands that the more you do right by events — the endless “and-then”-ness of everything — the less you’re going to be able to interest a reader, who has after all paid good money to bathe in causes and consequences, in “becauses” and “buts”.

To Battersea Park reveals all the ways we try to comprehend a world that isn’t good or fair, or causal, or even comprehensible. It’s about how we reduce the otherwise ungraspable world using conventions, often of our own devising. An elderly man fills half his house with a model railway. A dangerously brittle paterfamilias pumps the air out of his marriage. A blocked writer experiments with a set of literary devices. A horrified child sets sail in an imaginary boat. It’s a revelation: a comedy of suburban manners slowed to the point of nightmare.

That said, I get nervous around art that’s so directly addressed to the practitioners of that art. It’s a novel that teaches, more than it inspires, and a small triumph, in a world that I can’t help but feel is gasping for big ones.

 

A finite body in space

Reading Carlo Rovelli’s Anaximander and the Nature of Science for New Scientist, 8 March 2023

Astronomy was conducted at Chinese government institutions for more than 20 centuries, before Jesuit missionaries turned up and, somewhat bemused, pointed out that the Earth is round.

Why, after so much close observation and meticulous record-keeping, did seventeenth-century Chinese astronomers still think the Earth was flat?

The theoretical physicist Carlo Rovelli, writing in 2007 (this is an able and lively translation of his first book) can be certain of one thing: “that the observation of celestial phenomena over many centuries, with the full support of political authorities, is not sufficient to lead to clear advances in understanding the structure of the world.”

So what gave Europe its preternaturally clear-eyed idea of how physical reality works? Rovelli ties his several answers — covering history, philosophy, politics and religion — to the life and thought and work of Anaximander, who was born 26 centuries ago in the cosmopolitan city (population 100,000) of Miletus, on the coast of present-day Turkey.

We learn about Anaximander, born 610 BCE, mostly through Aristotle. The only treatise of his we know about is now lost, aside from a tantalising fragment that reveals Anaximander’s notion that there exist natural laws that organise phenomena through time. He also figured out where wind and rain came from, and deduced, from observation, that all animals originally came from the sea, and must have arisen from fish or fish-like creatures.

Rovelli is not interested in startling examples of apparent prescience. Even a stopped watch is correct twice a day. He is positively enchanted, though, by the quality of Anaximander’s thought.

Consider the philosopher’s most famous observation — that the Earth is a finite body of rock floating freely in space.

Anaximander grasps that there is a void beneath the Earth through which heavenly bodies (the sun, to take an obvious example) must travel when they roll out of sight. This is really saying not much more than that, when a man walks behind a house, he’ll eventually reappear on the other side.

What makes this “obvious” observation so radical is that, applied to heavenly bodies, it contradicts our everyday experience.

In everyday life, objects fall in one direction. The idea that space does not have a privileged direction in which objects fall runs against common sense.

So Anaximander arrives at a concept of gravity: he calls it “domination”. Earth hangs in space without falling because it does not have any particular direction in which to fall, and that is because there’s nothing around big enough to dominate it. You and I are much smaller than the Earth, and so we fall towards it. “Up” and “down” are no longer absolutes. They are relative.

The second half of Rovelli’s book (less thrilling, and more trenchant, perhaps to compensate for the fact that it covers more familiar territory) explains how science, evolving out of Anaximander’s constructive yet critical attitude towards his teacher Thales, developed a really quite unnatural way of thinking.

Thales, says Anaximander, was a wise man who was wrong about everything being made of water. The idea that we can be wise and wrong at the same time, Rovelli says, can come only from a sophisticated theory of knowledge “according to which truth is accessible but only gradually, by means of successive refinements.”

All Rovelli’s wit and intellectual dexterity are in evidence in this thrilling early work, and almost all his charm, as he explains how Copernicus perfects Ptolemy, by applying Ptolemy’s mathematics to a better-framed question, and how Einstein perfected Newton by pushing Newton’s mathematics past certain a priori assumptions.

Nothing is thrown away in such scientific “revolutions”. Everything is repurposed.

“What on Earth do you mean?”

Reading A Terribly Serious Adventure: Philosophy at Oxford 1900-60 by Nikhil Krishnan for the Telegraph, 6 March 2023

Philosophy is a creature of split impulses. The metaphysicians (think Plato) wonder what things mean; and the analysts (think Socrates) try to pin down what the metaphysicians are on about. When they get over-excited (which is surprisingly often) the metaphysicians turn into theologians, and the analysts become pedants in the mould of Thomas Gradgrind, the schoolmaster in Dickens’s Hard Times, concerned only with facts and numbers.

The “analytic” (or “linguistic” or “ordinary language”) philosophy practised at Oxford University in the first half of the last century is commonly supposed to have been at once pedantic and amateurish, to have “made a fetish of science yet showed an ignorance of it, was too secular, too productively materialist, too reactionary and somehow also too blandly moderate. The critics can’t, surely, all be right,” complains Nikhil Krishnan, launching a spirited, though frequently wry defence of his Oxford heroes: pioneers like Gilbert Ryle, A.J. Ayer and John Langshaw Austin, troopers like Peter Strawson and Elizabeth Anscombe, and many fellow travellers: Isaiah Berlin and Iris Murdoch loom large in an account that weaves biography with philosophy and somehow attains — heaven knows how — a pellucid clarity. This is one of those books that leaves readers feeling a lot cleverer than they actually are.

The point of Oxford’s analytical philosophy was, in Gilbert Ryle’s formulation, to scrape away at sentences “until the content of the thoughts underlying them was revealed, their form unobstructed by the distorting structures of language and idiom.”

In other words, the philosopher’s job was to rid the world of philosophical problems, by showing how they arise out of misunderstandings of language.

At around the same time, in the other place (Cambridge to you), Ludwig Wittgenstein was far advanced on an almost identical project. The chief lesson of Wittgenstein, according to a review by Bernard Williams, was that philosophy cannot go beyond language: “we are committed to the language of human life, and no amount of speculative investment is going to buy a passage to outer space, the space outside language.”

There might have been a rare meeting of minds between the two universities had Wittgenstein not invested altogether too much in the Nietzschean idea of what a philosopher should be (ascetic, migrainous, secretive to the point of paranoia); so, back in Oxford, it was left to dapper, deceptively bland manager-types like John Austin to re-invent a Socratic tradition for themselves.

Krishnan is too generous a writer, and too careful a scholar, to allow just one figure to dominate this account of over half a century’s intellectual effort. It’s clear, though, that he keeps a special place in his heart for Austin, whose mastery of the simple question and the pregnant pause, demand for absolute accuracy and imperviousness to bluster must have served him frighteningly well when interrogating enemy captives in the second world war.

While Wittgenstein concocted aphorisms and broke deck chairs, Austin’s mild-mannered, quintessentially English scepticism acted as a mirror, in which his every colleague and student struggled to recognise themselves: “What on Earth do you mean?” he would say.

Are kitchen scissors utensils or tools?

Why can we speak of someone as a good batsman but not as the right batsman?

Can someone complain of a pain in the waist?

Austin’s was a style of philosophy that’s easy to send up, harder to actually do.

It drove people mad. “You are like a greyhound who doesn’t want to run himself,” A. J. Ayer once snapped, “and bites the other greyhounds, so that they cannot run either.”

But it’s not hard to see why this project — down-to-earth to the point of iconoclasm — has captured the imagination of philosopher and historian Nikhil Krishnan; he hails from India, whose long and sophisticated philosophical tradition is, he says, “honoured today chiefly as a piece of inert heritage.”

Krishnan’s biographical approach may be a touch emollient; where the material forces him to choose, he puts the ideas before the idiosyncrasies. But his historical sense is sharp as he skips, in sixty short years, across whole epochs and through two world wars. Oxford, under Krishnan’s gaze, evolves from Churchman’s arcadia to New Elizabethan pleasure-park with a sort of shimmering H G Wells Time Machine effect.

John Austin died in 1960 at only forty-eight; this and his lack of easily-emulated Viennese mannerisms robbed him of much posthumous recognition. But by taking Austin’s critics seriously — and indeed, by stealing their thunder, in passage after passage of fierce analysis — Krishnan offers us a fresh justification of a fiercely practical project, in a field outsiders assume is supposed to be obscure.