Malleable meat

Watching Carey Born’s Cyborg: A Documentary for New Scientist

Neil Harbisson grew up in Barcelona and studied music composition at Dartington College of Arts in the UK. He lives with achromatopsia: he is unable to perceive colour of any kind. Not one to ignore a challenge, in 2003 Harbisson recruited product designer Adam Montandon to build him a head-mounted rig that would turn colours into musical notes that he could listen to through earphones. Now in his forties, Harbisson has evolved. The camera on its pencil-thin stalk and the sound generator are permanently fused to the back of his skull: he hears the colours around him through bone conduction.
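To get a feel for what a rig like Montandon's does, here is a minimal sketch of one way a colour-to-tone mapping can work. The specific scheme below (hue angle spread across a single octave above 220 Hz) is an illustrative assumption, not Harbisson's actual scale.

```python
import colorsys

def colour_to_tone(r, g, b, base_hz=220.0):
    """Map an RGB colour to an audio frequency by hue.

    Illustrative mapping only, not Harbisson's actual scale: the hue angle
    (0-1 around the colour wheel) is spread across one octave above base_hz.
    """
    hue, _, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return base_hz * (2 ** hue)

print(round(colour_to_tone(255, 0, 0)))  # red  -> 220 Hz
print(round(colour_to_tone(0, 0, 255)))  # blue -> 349 Hz (hue = 2/3)
```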

If “hears” is quite the word. Watching Carey Born’s Cyborg: A Documentary, we occasionally catch Harbisson thinking seriously and intelligently about how the senses operate. He doesn’t hear colour so much as see it. His unconventional colour organ is startling to outsiders — what is that chap doing with an antenna springing out the back of his head? But Harbisson’s brain is long used to the antenna’s input, and treats it like any other visual information. Harbisson says he knew his experiment was a success when he started to dream in colour.

Body modification in art has a long history, albeit a rather vexed one. I can remember the Australian performance artist Stelarc hanging from flesh hooks, pronouncing on the obsolescence of the body. (My date did not go well.) Stelarc doesn’t do that sort of thing any more. Next year he celebrates his eightieth birthday. You can declare victory over the flesh as much as you like: time gets the last laugh.

The way Harbisson has hacked his own perceptions leaves him with very little to do but talk about his experiences. He can’t really demonstrate them the way his partner Moon Ribas can. The dancer-choreographer has had an internet-enabled vibrating doo-dad fitted in her left arm which, when she’s dancing, tells her when and how vigorously to respond to earthquakes.

Harbisson, meanwhile, is stuck in radio studios and behind lecterns, explaining what it’s like to have a friend send the colours of an Australian sunset to the back of his skull — to which a radio talk-show guest objects: Wouldn’t receiving a postcard of an Australian sunset amount to the same thing?

Born’s uncritical approach to her subject never really digs into this perfectly sensible question — and this is a pity. Harbisson says he has weathered months-long headaches and episodes of depression in an effort to extend his senses, but all outsiders ever care about is the tech, and what it can do.

One recent wheeze from Harbisson and his collaborators is a headband that tells you the time by heating spots on your skull. Obviously a watch offers a more accurate measure. Less obviously, the headband is supposed to create a new sense in the wearer: an embodied, pre-conscious awareness of solar-planetary motion. The technology is fun, but what really matters is what new senses may be out there for us to enjoy.

I find it slightly irksome to have to explain Harbisson’s work, since Harbisson hardly bothers. The lecture, the talk-show, the panel and the photoshoot are his gallery and stage, and for over twenty years now, the man with the stalk coming out of his head has been giving his audience what they have come to expect: a ringing endorsement of transhumanism, the philosophy that would have us treat our bodies as so much malleable meat. In 2010 he co-founded the Cyborg Foundation to defend cyborg rights. In 2017, he co-founded the Transpecies Society to give a voice to people with non-human identities. It’s all very idealistic and also quite endearingly old-fashioned in its otherworldliness — as though the plasticity or otherwise of the body were not already a burning social issue, and staple ordnance in today’s culture wars.

I wish Born had gone to the bother of challenging her subject. Penetrate their shell of schooled narcissism and you occasionally find that conceptual artists have something to say.

“This stretch-induced feeling of awe activates our brain’s spiritual zones”

Reading Angus Fletcher’s Wonderworks: Literary invention and the science of stories for New Scientist, 1 September 2021

Can science explain art?

Certainly: in 1999 the British neurobiologist Semir Zeki published Inner Vision, an illuminating account of how, through trial and error and intuition, different schools of art have succeeded in mapping the neurological architectures of human vision. (Put crudely, Rembrandt tickles one corner of the brain, Piet Mondrian another.)

Eight years later, Oliver Sacks contributed to an already crowded music psychology shelf with Musicophilia, a collection of true tales in which neurological injuries and diseases are successfully treated with music.

Angus Fletcher believes the time has come for drama, fiction and literature generally to succumb to neurological explanation. Over the past decade, neuroscientists have been using pulse monitors, eye-trackers, brain scanners “and other gadgets” to look inside our heads as we consume novels, poems, films, and comic books. They must have come up with some insights by now.

Fletcher’s hypothesis is that story is a technology, which he defines as “any human-made thing that helps to solve a problem”.

This technology has evolved, over at least the last 4000 years, to help us negotiate the human condition, by which Fletcher means our awareness of our own mortality, and the creeping sense of futility it engenders. Story is “an invention for overcoming the doubt and the pain of just being us”.

Wonderworks is a scientific history of literature; each of its 25 chapters identifies a narrative “tool” which triggers a different, traceable, evidenced neurological outcome. Each tool comes with a goofy label: here you will encounter Butterfly Immersers and Stress Transformers, Humanity Connectors and Gratitude Multipliers.

Don’t sneer: these tools have been proven “to alleviate depression, reduce anxiety, sharpen intelligence, increase mental energy, kindle creativity, inspire confidence, and enrich our days with myriad other psychological benefits.”

Now, you may well object that, just as area V1 of the visual cortex did not evolve so we could appreciate the paintings of Piet Mondrian, so our capacity for horror and pity didn’t arise just so we could appreciate Shakespeare. If story were merely “holding a mirror up to nature”, then Fletcher’s long, engrossing book wouldn’t really be saying anything.

As any writer will tell you, of course, a story isn’t merely a mirror. The problem comes when you try to make this perfectly legitimate point using neuroscience.

Too often for comfort, and as the demands of concision exceed all human bounds, the reader will encounter passages like: “This stretch-induced feeling of awe activates our brain’s spiritual zones, enriching our consciousness with the sensation of meanings beyond.”

Hitting sentences like this, I normally shut the book, with some force. I stayed my hand on this occasion because, by the time this horror came to light, two things were apparent. First, Fletcher — a neuroscientist turned story analyst — actually does know his neurobiology. Second, he really does know his literature, making Wonderworks a profound and useful guide to reading for pleasure.

Wonderworks fails as popular science because of the extreme parsimony of Fletcher’s explanations; fixing this problem would, however, have involved composing a multi-part work, and lost him his general audience.

The first person through the door is the one who invariably gets shot. Wonderworks is in many respects a pug-ugly book. But it’s also the first of its kind: an intelligent, engaged, erudite attempt to tackle, neurologically, not just some abstract and simplified “story”, but some of the world’s greatest literature, from the Iliad to The Dream of the Red Chamber, from Disney’s Up to the novels of Elena Ferrante.

It is easy to get annoyed with this book. But those who stay calm will reap a rich harvest.

Snowflake science

Watching Noah Hutton’s documentary In Silico for New Scientist, 19 May 2021

Shortly after he earned a neuroscience degree, young filmmaker Noah Hutton fell into the orbit of Henry Markram, an Israeli neuroscientist based at the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland.

Markram models brains, axon by axon, dendrite by dendrite, in all their biological and chemical complexity. His working assumption is that the brain is an organ, and so a good enough computer model of the brain ought to reveal its workings and pathologies, just as “in silico” models of the kidneys, spleen, liver and heart have enriched our understanding of those organs.

Markram’s son Kai has autism, so Markram has skin in this game. Much as we might want to improve the condition of people like Kai, no one is going to dig about in a living human brain to see if there are handy switches we can throw. Markram hopes a computer model will offer an ethically acceptable route to understanding how brains go wrong.

So far, so reasonable. Only, in 2005, Henry Markram announced that he would build a working computer model of the human brain within 10 years.

Hutton has interviewed Markram, his colleagues and his critics, every year for well over a decade, as the project expanded and the deadline shifted. Markram’s vision transfixed purseholders across the European Union: in 2013 his Blue Brain Project won a billion Euros of public funding to create the Human Brain Project in Geneva.

And though his tenure did not last long, Markram is hardly the first founder to be wrested from the controls of his own institute, and he won’t be the last. There have been notable departures, but his Blue Brain Project endures, still working, still modelling: its in silico model of the mouse neocortex is astounding to look at.

Perhaps that is the problem. The Human Brain Project has become, says Hutton, a special-effects house, a shrine to touchscreens, curved screens and headsets, but one lacking any meaning for anything and anyone “outside this glass and steel building in Geneva”.

We’ve heard criticisms like this before. What about the way the Large Hadron Collider at CERN sucks funding from the rest of physics? You don’t have to scratch too deeply in academia to find a disgruntled junior researcher who’ll blame CERN for their failed grant application.

CERN, however, gets results. The Human Brain Project? Not so much.

The problem is philosophical. It is certainly within our power to model some organs. The brain, however, is not an organ in the usual sense. It is, by any engineering measure, furiously inefficient. Take a look: a spike in the dendrites releases this neurotransmitter, except when it releases that neurotransmitter, except when it does nothing at all. Signals follow this route, except when they follow that route, except when they vanish. Brains may look alike, and there’s surely some commonality in their working. At the level of the axon, however, every brain behaves like a beautiful and unique snowflake.

The Blue Brain Project’s models generate noise, just like regular brains. Someone talks vaguely about “emergent properties” — an intellectual Get Out of Jail Free card if ever there was one. But since no-one knows what this noise means in a real brain, there’s no earthly way to tell if the Project’s model is making the right kind of noise.

The Salk Institute’s Terrence Sejnowski reckons the whole caper is a bad joke: even if successful, Markram will only generate a simulation “every bit as mysterious as the brain itself”.

Hutton accompanies us down the yawning gulf between what Markram may reasonably achieve, and the fantasies he seems quite happy to stoke in order to maintain his funding. It’s a film made on a budget of nothing, over years, and it’s not pretty. But Hutton (whose very smart sf satire Lapsis came out in the US last month) makes up for all that with the sharpest of scripts. In Silico is a labour of love, rather more productive, I fear, than Markram’s own.

Cog ergo sum

Reading Matthew Cobb’s The Idea of the Brain for New Scientist, 15 April 2020

Ask a passer-by in 2nd-century Rome where consciousness resided — in the heart or in the head — and he was sure to say, in the heart. The surgeon-philosopher Galen of Pergamon had other ideas. During one show he had someone press upon the exposed brain of a pig, which promptly (and mercifully) passed out. Letting go brought the pig back to consciousness.

Is the brain one organ, or many? Are our mental faculties localised in the brain? Some 1600 years after Galen, a Parisian gentleman tried to blow his brains out with a pistol. Instead he shot away his frontal bone, while leaving the anterior lobes of his brain bare but undamaged. He was rushed to the Hôpital St. Louis, where Ernest Aubertin spent a few vain hours trying to save his life. Aubertin discovered that if he pressed a spatula on the patient’s brain while he was speaking, his speech “was suddenly suspended; a word begun was cut in two. Speech returned as soon as pressure was removed,” he reported.

Does the brain contain all we are? Eighty years after Aubertin, Montreal neurosurgeon Wilder Penfield was carrying out hundreds of brain operations to relieve chronic temporal-lobe epilepsy. Using delicate electrodes, he would map the safest cuts to make — ones that would not excise vital brain functions. For the patient, the tiniest regions, when stimulated, accessed the strangest experiences. A piano being played. A telephone conversation between two family members. A man and a dog walking along a road. They weren’t memories, so much as dreamlike glimpses of another world.

Cobb’s history of brain science will fascinate readers quite as much as it occasionally horrifies. Cobb, a zoologist by training, has focused for much of his career on the sense of smell and the neurology of the humble fruit fly maggot. The Idea of the Brain sees him coming up for air, taking in the big picture before diving once again into the minutiae of his profession.

He makes a hell of a splash, too, explaining how the analogies we use to describe the brain both enrich our understanding of that mysterious organ, and hamstring our further progress. He shows how mechanical metaphors for brain function lasted well into the era of electricity. And he explains why computational metaphors, though unimaginably more fertile, are now throttling his science.

Study the brain as though it were a machine and in the end (and after much good work) you will run into three kinds of trouble.

First, you will find that reverse engineering very complex systems is impossible. In 2017, two neuroscientists, Eric Jonas and Konrad Paul Kording, employed the techniques they normally use to analyse the brain to study the MOS 6507 processor — a chip found in computers from the late 1970s and early 1980s that enabled machines to run video games such as Donkey Kong, Space Invaders or Pitfall. Despite their powerful analytical armoury, and despite the fact that there is a clear explanation for how the chip works, they admitted that their study fell short of producing “a meaningful understanding”.

Another problem is the way the meanings of technical terms expand over time, warping the way we think about a subject. The French neuroscientist Romain Brette has a particular hatred for that staple of neuroscience, “coding”, a term first invoked by Adrian in the 1920s in a strictly technical sense, to describe the link between a stimulus and the activity of a neuron. Today almost everybody thinks of neural codes as representations of that stimulus, which is a real problem, because it implies that there must be an ideal observer or reader within the brain, watching and interpreting those representations. It may be better to think of the brain as constructing information, rather than simply representing it — only we have no idea (yet) how such an organ would function. For sure, it wouldn’t be a computer.

Which brings us neatly to our third and final obstacle to understanding the brain: we take far too much comfort and encouragement from our own metaphors. Do recent advances in AI bring us closer to understanding how our brains work? Cobb’s hollow laughter is all but audible. “My view is that it will probably take fifty years before we understand the maggot brain,” he writes.

One last history lesson. In the 1970s, twenty years after Penfield’s electrostimulation studies, Michael Gazzaniga, a cognitive neuroscientist at the University of California, Santa Barbara, studied the experiences of people whose brains had been split down the middle in a desperate effort to control their epilepsy. He discovered that each half of the brain was, on its own, sufficient to produce a mind, albeit with slightly different abilities and outlooks in each half. “From one mind, you had two,” Cobb remarks. “Try that with a computer.”

Hearing the news brought veteran psychologist William Estes to despair: “Great,” he snapped, “now we have two things we don’t understand.”

All fall down

Talking to Scott Grafton about his book Physical Intelligence (Pantheon), 10 March 2020.

“We didn’t emerge as a species sitting around.”

So says University of California neuroscientist Scott Grafton in the introduction to his provocative new book Physical Intelligence. In it, Grafton assembles and explores all the neurological abilities that we take for granted — “simple” skills that in truth can only be acquired with time, effort and practice. Perceiving the world in three dimensions is one such skill; so is steadily carrying a cup of tea.

At UCLA, Grafton began his career mapping brain activity using positron emission tomography, to see how the brain learns new motor skills and recovers from injury or neurodegeneration. After a career developing new scanning techniques, and a lifetime’s walking, wild camping and climbing, Grafton believes he’s able to trace the neural architectures behind so-called “goal-directed behavior” — the business of how we represent and act physically in the world.

Grafton is interested in all those situations where “smart talk, texting, virtual goggles, reading, and rationalizing won’t get the job done” — those moments when the body accomplishes a complex task without much, if any, conscious intervention. A good example might be bagging groceries. Suppose you are packing six different items into two bags. There are 720 possible ways to do this, and — assuming that like most people you want heavy items on the bottom, fragile items on the top, and cold items together — more than 700 of the possible solutions are wrong. And yet we almost always pack things so they don’t break or spoil, and we almost never have to agonise over the countless micro-decisions required to get the job done.
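Grafton’s 720 is simply 6 factorial, the number of orders in which six items can be packed one at a time. A minimal sketch of the arithmetic, assuming a hypothetical shopping list tagged heavy/fragile, shows how quickly the constraints bite: insisting only that heavy items go in before fragile ones already rules out five in every six orderings, and the fuller set of constraints (cold items together, and so on) whittles the survivors down further.

```python
from itertools import permutations

# Hypothetical shopping: two heavy items (H), two fragile items (F), two others (O).
items = ["H1", "H2", "F1", "F2", "O1", "O2"]

def acceptable(order):
    # Every heavy item must be packed before (i.e. end up below) every fragile item.
    last_heavy = max(i for i, x in enumerate(order) if x.startswith("H"))
    first_fragile = min(i for i, x in enumerate(order) if x.startswith("F"))
    return last_heavy < first_fragile

orders = list(permutations(items))
good = sum(acceptable(o) for o in orders)
print(len(orders), good)  # 720 orderings in total; only 120 satisfy even this one constraint
```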

The grocery-bagging example is trivial, but often, what’s at stake in a task is much more serious — crossing the road, for example — and sometimes the experience required to accomplish it is much harder to come by. A keen hiker and scrambler, Grafton studs his book with first-hand accounts, at one point recalling how someone peeled off the side of a snow bank in front of him, in what escalated rapidly into a ghastly climbing accident. “At the spot where he fell,” he writes, “all I could think was how senseless his mistake had been. It was a steep section but entirely manageable. Knowing just a little bit more about how to use his ice axe, he could have readily stopped himself.”

To acquire experience, we have to have experiences. To acquire life-saving skills, we have to risk our lives. The temptation, now that we live most of our lives in urban comfort, is to create a world safe enough that we don’t need to expose ourselves to such risks, or to acquire such skills.

But this, Grafton tells me when we speak on the phone, would be a big mistake. “If all you ever are walking on is a smooth, nice sidewalk, the only thing you can be graceful on is that sidewalk, and nothing else,” he explains. “And that sets you up for a fall.”

He means this literally: “The number one reason people are in emergency rooms is from what emergency rooms call ‘ground-level falls’. I’ve seen statistics which show that more and more of us are falling over for no very good reason. Not because we’re dizzy. Not because we’re weak. But because we’re inept.”

For more than 1.3 million years of evolutionary time, hominids have lived without pavements or chairs, handling an uneven and often unpredictable environment. We evolved to handle a complex world, and a certain amount of constant risk. “Very enriched physical problem solving, which requires a lot of understanding of physical relationships, a lot of motor control, and some deftness in putting all those understandings together — all the while being constantly challenged by new situations — I believe this is really what drives brain networks towards better health,” Grafton says.

Our chat turns speculative. The more we remove risks and challenges from our everyday environment, Grafton suggests, the more likely we are to want to complicate it and add problems to it, creating challenges for ourselves that require the acquisition of unusual motor skills. Might this be a major driver behind cultural activities like music-making, craft and dance?

Speculation is one thing; serious findings are another. At the moment, Grafton is gathering medical and social data to support an anecdotal observation of his: that the experience of walking in the wild not only improves our motor abilities, but also promotes our mental health.

“A friend of mine runs a wilderness programme in the Sierra Nevada for at-risk teenagers,” he explains, “and one of the things he does is to teach them how to get by for a day or two in the wilderness, on their own. It’s life-transforming. They come out of there owning their choices and their behaviour. Essentially, they’ve grown up.”

Elements of surprise

Reading Vera Tobin’s Elements of Surprise for New Scientist, 5 May 2018

How do characters and events in fiction differ from those in real life? And what is it about our experience of life that fiction exaggerates, omits or captures to achieve its effects?

Effective fiction is Vera Tobin’s subject. And as a cognitive scientist, she knows how pervasive and seductive it can be, even in – or perhaps especially in – the controlled environment of an experimental psychology lab.

Suppose, for instance, you want to know which parts of the brain are active when forming moral judgements, or reasoning about false beliefs. These fields and others rest on fMRI brain scans. Volunteers receive short story prompts with information about outcomes or character intentions and, while their brains are scanned, have to judge what other characters ought to know or do.

“As a consequence,” writes Tobin in her new book Elements of Surprise, “much research that is putatively about how people think about other humans… tells us just as much, if not more, about how study participants think about characters in constructed narratives.”

Tobin is weary of economists banging on about the “flaws” in our cognitive apparatus. “The science on this phenomenon has tended to focus on cataloguing errors people make in solving problems or making decisions,” writes Tobin, “but… its place and status in storytelling, sense-making, and aesthetic pleasure deserve much more attention.”

Tobin shows how two major “flaws” in our thinking are in fact the necessary and desirable consequence of our capacity for social interaction. First, we wildly underestimate our differences. We model each other in our heads and have to assume this model is accurate, even while we’re revising it, moment to moment. At the same time, we have to assume no one else has any problem performing this task – which is why we’re continually mortified to discover other people have no idea who we really are.

Similarly, we find it hard to model the mental states of people, including our past selves, who know less about something than we do. This is largely because we forget how we came to that privileged knowledge.

“Tobin is weary of economists banging on about the ‘flaws’ in our cognitive apparatus”

There are implications for autism, too. It is, Tobin says, unlikely that many people with autism “lack” an understanding that others think differently – known as “theory of mind”. It is more likely they have difficulty inhibiting their knowledge when modelling others’ mental states.

And what about Emma, titular heroine of Jane Austen’s novel? She “is all too ready to presume that her intentions are unambiguous to others and has great difficulty imagining, once she has arrived at an interpretation of events, that others might believe something different”, says Tobin. Austen’s brilliance was to fashion a plot in which Emma experiences revelations that confront the consequences of her “cursed thinking” – a cognitive bias making us assume any person with whom we communicate has the background knowledge to understand what is being said.

Just as we assume others know what we’re thinking, we assume our past selves thought as we do now. Detective stories exploit this foible. Mildred Pierce, Michael Curtiz’s 1945 film, begins at the end, as it were, depicting the story’s climactic murder. We are fairly certain we know who did it, but we flash back to the past and work forward to the present only to find that we have misinterpreted everything.

I confess I was underwhelmed on finishing this excellent book. But then I remembered Sherlock Holmes’s complaint (mentioned by Tobin) that once he reveals the reasoning behind his deductions, people are no longer impressed by his singular skill. Tobin reveals valuable truths about the stories we tell to entertain each other, and those we tell ourselves to get by, and how they are related. Like any good magic trick, it is obvious once it has been explained.