Elements of Surprise

Reading Vera Tobin’s Elements of Surprise for New Scientist, 5 May 2018

How do characters and events in fiction differ from those in real life? And what is it about our experience of life that fiction exaggerates, omits or captures to achieve its effects?

Effective fiction is Vera Tobin’s subject. And as a cognitive scientist, she knows how pervasive and seductive it can be, even in – or perhaps especially in – the controlled environment of an experimental psychology lab.

Suppose, for instance, you want to know which parts of the brain are active when we form moral judgements, or reason about false beliefs. Much research in these fields and others rests on fMRI brain scans. Volunteers are given short story prompts containing information about outcomes or character intentions and, while their brains are scanned, must judge what other characters ought to know or do.

“As a consequence,” writes Tobin in her new book Elements of Surprise, “much research that is putatively about how people think about other humans… tells us just as much, if not more, about how study participants think about characters in constructed narratives.”

Tobin is weary of economists banging on about the “flaws” in our cognitive apparatus. “The science on this phenomenon has tended to focus on cataloguing errors people make in solving problems or making decisions,” writes Tobin, “but… its place and status in storytelling, sense-making, and aesthetic pleasure deserve much more attention.”

Tobin shows how two major “flaws” in our thinking are in fact the necessary and desirable consequence of our capacity for social interaction. First, we wildly underestimate our differences. We model each other in our heads and have to assume this model is accurate, even while we’re revising it, moment to moment. At the same time, we have to assume no one else has any problem performing this task – which is why we’re continually mortified to discover other people have no idea who we really are.

Similarly, we find it hard to model the mental states of people, including our past selves, who know less about something than we do. This is largely because we forget how we came to that privileged knowledge.

There are implications for autism, too. It is, Tobin says, unlikely that many people with autism “lack” an understanding that others think differently – known as “theory of mind”. It is more likely they have difficulty inhibiting their knowledge when modelling others’ mental states.

And what about Emma, titular heroine of Jane Austen’s novel? She “is all too ready to presume that her intentions are unambiguous to others and has great difficulty imagining, once she has arrived at an interpretation of events, that others might believe something different”, says Tobin. Austen’s brilliance was to fashion a plot in which Emma experiences revelations that confront the consequences of her “curse of knowledge” – the cognitive bias that makes us assume anyone we communicate with has the background knowledge to understand what is being said.

Just as we assume others know what we’re thinking, we assume our past selves thought as we do now. Detective stories exploit this foible. Mildred Pierce, Michael Curtiz’s 1945 film, begins at the end, as it were, depicting the story’s climactic murder. We are fairly certain we know who did it; then we flash back to the past and work forward to the present, only to find that we have misinterpreted everything.

I confess I was underwhelmed on finishing this excellent book. But then I remembered Sherlock Holmes’s complaint (mentioned by Tobin) that once he reveals the reasoning behind his deductions, people are no longer impressed by his singular skill. Tobin reveals valuable truths about the stories we tell to entertain each other, and those we tell ourselves to get by, and how they are related. Like any good magic trick, it is obvious once it has been explained.

The Usefulness of Useless Knowledge

Reading The Usefulness of Useless Knowledge by Abraham Flexner, and Knowledge for Sale: The neoliberal takeover of higher education by Lawrence Busch for New Scientist, 17 March 2017


IN 1930, the US educator Abraham Flexner set up the Institute for Advanced Study, an independent research centre in Princeton, New Jersey, where leading lights as diverse as Albert Einstein and T. S. Eliot could pursue their studies, free from everyday pressures.

For Flexner, the world was richer than the imagination could conceive and wider than ambition could encompass. The universe was full of gifts and this was why pure, “blue sky” research could not help but turn up practical results now and again, of a sort quite impossible to plan for.

So, in his 1939 essay “The usefulness of useless knowledge”, Flexner listed a few of the practical gains that have sprung from what we might, with care, term scholastic noodling. Electromagnetism was his favourite. We might add quantum physics.

Even as his institute opened its doors, the world’s biggest planned economy, the Soviet Union, was conducting a grand and opposite experiment, harnessing all the sciences for their immediate utility and problem-solving ability.

During the cold war, the vast majority of Soviet scientists were reduced to mediocrity, given only sharply defined engineering problems to solve. Flexner’s better-known affiliates, meanwhile, garnered reputations akin to those enjoyed by other mascots of Western intellectual liberty: abstract-expressionist artists and jazz musicians.

At a time when academia is once again under pressure to account for itself, the Princeton University Press reprint of Flexner’s essay is timely. Its preface, however, is another matter. Written by the institute’s current director, Robbert Dijkgraaf, it exposes our utterly instrumental times. He leans on junk metrics such as “more than half of all economic growth comes from innovation”. What for Flexner was a rather sardonic nod to the bottom line has become for Dijkgraaf the entire argument – as though “pure research” simply meant “long-term investment”, and civic support came not from existential confidence and intellectual curiosity but from scientists “sharing the latest discoveries and personal stories”. So much for escaping quotidian demands.

We do not know what the tightening of funding for scientific research over the past 40 years would have done to Flexner’s own sense of noblesse oblige. But this we can be sure of: utilitarian approaches to higher education are dominant now, to the point of monopoly. The administrative burdens and stultifying oversight structures throttling today’s scholars come not from Soviet-style central planning, but from the application of market principles – an irony that the sociologist Lawrence Busch explores exhaustively in his monograph Knowledge for Sale.

Busch explains how the first neo-liberal thinkers sought to prevent the rise of totalitarian regimes by replacing governance with markets. They believed markets were safer than governments because markets are cybernetic: they correct themselves. Right?

Wrong: Busch provides ghastly disproofs of this neo-liberal vision from within the halls of academe, from bad habits such as a focus on counting citations and publication output, through fraud, to existential crises such as the shift in the ideal of education from a public to a private good. But if our ingenious, post-war market solution to the totalitarian nightmare of the 1940s has itself turned out to be a great vampire squid wrapped around the face of humanity (as journalist Matt Taibbi once described investment bank Goldman Sachs), where have we left to go?

Flexner’s solution requires from us a confidence that is hard to muster right now. We have to remember that the point of study is not to power, enable, de-glitch or otherwise save civilisation. The point of study is to create a civilisation worth saving.

A feast of bad ideas

This Idea Must Die: Scientific theories that are blocking progress, edited by John Brockman (Harper Perennial)

for New Scientist, 10 March 2015

THE physicist Max Planck had a bleak view of scientific progress. “A new scientific truth does not triumph by convincing its opponents…” he wrote, “but rather because its opponents eventually die.”

This is the assumption behind This Idea Must Die, the latest collection of replies to the annual question posed by impresario John Brockman on his stimulating and by now venerable online forum, Edge. The question is: which bits of science do we want to bury? Which ideas hold us back, trip us up or send us off in a futile direction?

Some ideas cited in the book are so annoying that we would be better off without them, even though they are true. Take “brain plasticity”. This was a real thing once upon a time, but the phrase spread promiscuously into so many corners of neuroscience that no one really knows what it means any more.

More than any amount of pontification (and readers wouldn’t believe how many new books agonise over what “science” was, is, or could be), Brockman’s posse capture the essence of modern enquiry. They show where it falls away into confusion (the use of cause-and-effect thinking in evolution), into religiosity (virtually everything to do with consciousness) and cant (for example, measuring nuclear risks with arbitrary yardsticks).

This is a book to argue with – even to throw against the wall at times. Several answers, cogent in themselves, still hit nerves. When Kurt Gray and Richard Dawkins, for instance, stick their knives into categorisation, I was left wondering whether scholastic hand-waving would really be an improvement. And Malthusian ideas about resources inevitably generate more heat than light when harnessed to the very different agendas of Matt Ridley and Andrian Kreye.

On the other hand, there is pleasure in seeing thinkers forced to express themselves in just a few hundred words. I carry no flag for futurist Douglas Rushkoff or psychologist Susan Blackmore, but how good to be wrong-footed. Their contributions are among the strongest, with Rushkoff discussing godlessness and Blackmore on the relationship between brain and consciousness.

Every reader will have a favourite. Mine is palaeontologist Julia Clarke’s plea that people stop asking her where feathered dinosaurs leave off and birds begin. Clarke offers lucid glimpses of the complexities and ambiguities inherent in deciphering the behaviour of long-vanished animals from thin fossil data. The next person to ask about the first bird will probably get a cake fork in their eye.

This Idea Must Die is garrulous and argumentative. I expected no less: Brockman’s formula is tried and tested. Better still, it shows no sign of getting old.


Maths into English

One to Nine by Andrew Hodges and The Tiger that Isn’t by Michael Blastland and Andrew Dilnot
reviewed for the Telegraph, 22 September 2007

Twenty-four years have passed since Andrew Hodges published his biography of the mathematician Alan Turing. Hodges, a long-term member of the Mathematical Physics Research Group at Oxford, has spent the years since exploring the “twistor geometry” developed by Roger Penrose, writing music and dabbling with self-promotion.

Follow the link to One to Nine’s web page, and you will soon be stumbling over the furniture of Hodges’s other lives: his music, his sexuality, his ambitions for his self-published novel – the usual spillage. He must be immune to bathos, or blind to it. But why should he care what other people think? He knows full well that, once put in the right order, these base metals will be transformed.

“Writing,” says Hodges, “is the business of turning multi-dimensional facts and ideas into a one-dimensional string of symbols.”

One to Nine – ostensibly a simple snapshot of the mathematical world – is a virtuoso stream of consciousness containing everything important there is to say about numbers (and Vaughan Williams, and climate change, and the Pet Shop Boys) in just over 300 pages. It contains multitudes. It is cogent, charming and deeply personal, all at once.

“Dense” does not begin to describe it. There is extraordinary concision at work. Hodges covers colour space and colour perception in two or three pages. The exponential constant e requires four pages. These examples come from the extreme shallow end of the mathematical pool: there are depths here not everyone will fathom. But this is the point: One to Nine makes the unfathomable enticing and gives the reader tremendous motivation to explore further.

This is a consciously old-fashioned conceit. One to Nine is modelled on Constance Reid’s 1956 classic, From Zero to Infinity. Like Reid’s, each of Hodges’s chapters explores the ideas associated with a given number. Mathematicians are quiet iconoclasts, so this is work that each generation must do for itself.

When Hodges considers his own contributions (in particular, to the mathematics underpinning physical reality), the skin tightens over the skull: “The scientific record of the past century suggests that this chapter will soon look like faded pages from Eddington,” he writes. (Towards the end of his life, Sir Arthur Eddington, who died in 1944, assayed a “theory of everything”. Experimental evidence ran counter to his work, which today generates only intermittent interest.)

But then, mathematics “does not have much to do with optimising personal profit or pleasure as commonly understood”.

The mordant register of his prose serves Hodges as well as it served Turing all those years ago. Like Turing: the Enigma, One to Nine proceeds, by subtle indirection, to express a man through his numbers.

If you think organisations, economies or nations would be more suited to mathematical description, think again. Michael Blastland and Andrew Dilnot’s The Tiger that Isn’t contains this description of the International Passenger Survey, the survey responsible for producing many of our immigration figures:

The ferry heaves into its journey and, equipped with their passenger vignettes, the survey team members also set off, like Attenboroughs in the undergrowth, to track down their prey, and hope they all speak English. And so the tides of people swilling about the world… are captured for the record if they travel by sea, when skulking by slot machines, half-way through a croissant, or off to the ladies’ loo.

Their point is this: in the real world, counting is back-breaking labour. Those who sieve the world for numbers – surveyors, clinicians, statisticians and the rest – are engaged in difficult work, and the authors think it nothing short of criminal the way the rest of us misinterpret, misuse or simply ignore their hard-won results. This is a very angry and very funny book.

The authors have worked together before, on the series More or Less – BBC Radio 4’s antidote to the sort of bad mathematics that mars personal decision-making, political debate, most press releases, and not a few items from the corporation’s own news schedule.

Confusion between correlation and cause, wild errors in the estimation of risk, the misuse of averages: Blastland and Dilnot round up and dispatch whole categories of woolly thinking.

They have a positive agenda. A handful of very obvious mathematical ideas – ideas they claim (with a certain insouciance) are entirely intuitive – are all we need to wield the numbers for ourselves; with them, we will be better informed, and will make more realistic decisions.

This is one of those maths books that claims to be self-help, and on the evidence presented here, we are in dire need of it. A late chapter contains the results of a general knowledge quiz given to senior civil servants in 2005.

The questions were simple enough. Among them: what share of UK income tax is paid by the top one per cent of earners? For the record, in 2005 it was 21 per cent. Our policy-makers didn’t have a clue.

As the authors put it: “The deepest pitfall with numbers owes nothing to the numbers themselves and much to the slack way they are treated, with carelessness all the way to contempt.”

This jolly airport read will not change all that. But it should stir things up a bit.