The Usefulness of Useless Knowledge

Reviewing The Usefulness of Useless Knowledge by Abraham Flexner, and Knowledge for Sale: The neoliberal takeover of higher education by Lawrence Busch, for New Scientist, 17 March 2017

IN 1930, the US educator Abraham Flexner set up the Institute for Advanced Study, an independent research centre in Princeton, New Jersey, where leading lights as diverse as Albert Einstein and T. S. Eliot could pursue their studies, free from everyday pressures.

For Flexner, the world was richer than the imagination could conceive and wider than ambition could encompass. The universe was full of gifts and this was why pure, “blue sky” research could not help but turn up practical results now and again, of a sort quite impossible to plan for.

So, in his 1939 essay “The usefulness of useless knowledge”, Flexner listed a few of the practical gains that have sprung from what we might, with care, term scholastic noodling. Electromagnetism was his favourite. We might add quantum physics.

Even as his institute opened its doors, the world’s biggest planned economy, the Soviet Union, was conducting a grand and opposite experiment, harnessing all the sciences for their immediate utility and problem-solving ability.

During the cold war, the vast majority of Soviet scientists were reduced to mediocrity, given only sharply defined engineering problems to solve. Flexner’s better-known affiliates, meanwhile, garnered reputations akin to those enjoyed by other mascots of Western intellectual liberty: abstract-expressionist artists and jazz musicians.

At a time when academia is once again under pressure to account for itself, the Princeton University Press reprint of Flexner’s essay is timely. Its preface, however, is another matter. Written by current institute director Robbert Dijkgraaf, it exposes our utterly instrumental times. For example, he employs junk metrics such as “more than half of all economic growth comes from innovation”. What for Flexner was a rather sardonic nod to the bottom line has become for Dijkgraaf the entire argument – as though “pure research” simply meant “long-term investment”, and civic support came not from existential confidence and intellectual curiosity, but from scientists “sharing the latest discoveries and personal stories”. So much for escaping quotidian demands.

We cannot know what the tightening of funding for scientific research over the past 40 years would have done to Flexner’s own sense of noblesse oblige. But of this we can be sure: utilitarian approaches to higher education are now dominant, to the point of monopoly. The administrative burdens and stultifying oversight structures throttling today’s scholars come not from Soviet-style central planning, but from the application of market principles – an irony that the sociologist Lawrence Busch explores exhaustively in his monograph Knowledge for Sale.

Busch explains how the first neo-liberal thinkers sought to prevent the rise of totalitarian regimes by replacing governance with markets. Those thinkers believed that markets were safer than governments because they were cybernetic and so corrected themselves. Right?

Wrong: Busch provides ghastly disproofs of this neo-liberal vision from within the halls of academe, from bad habits such as a focus on counting citations and publication output, through fraud, to existential crises such as the shift in the ideal of education from a public to a private good. But if our ingenious, post-war market solution to the totalitarian nightmare of the 1940s has itself turned out to be a great vampire squid wrapped around the face of humanity (as journalist Matt Taibbi once described investment bank Goldman Sachs), where have we left to go?

Flexner’s solution requires from us a confidence that is hard to muster right now. We have to remember that the point of study is not to power, enable, de-glitch or otherwise save civilisation. The point of study is to create a civilisation worth saving.

How we went from mere betting to gaming the world

Reviewing The Perfect Bet: How science and maths are taking the luck out of gambling by Adam Kucharski, for The Spectator, 7 May 2016.

If I prang your car, we can swap insurance details. In the past, it would have been necessary for you to kill me. That’s the great thing about money: it makes liabilities payable, and blood feud unnecessary.

Spare a thought, then, for the economist Robin Hanson, whose idea it was, in the years following the World Trade Center attacks, to create a market where traders could speculate on political atrocities. You could invest in the likelihood of a biochemical attack, for example, or a coup d’état, or the assassination of an Arab leader. The more knowledgeable you were, the more profit you would earn — but you would also be showing your hand to the Pentagon.

The US Senate responded with horror to this putative “market in death and destruction”, though if the recent BBC drama The Night Manager has taught us anything at all (beyond the passing fashionability of tomato-red chinos), it is that there is already a global market in death and destruction, and it is not at all well-abstracted. Its currency is lives and livelihoods. Its currency is blood. A little more abstraction, in this grim sphere, would be welcome.

Most books about money stop here, arrested — whether they admit it or not — in the park’n’ride zone of Francis Fukuyama’s 1989 essay “The End of History?” Adam Kucharski — a mathematician who lectures at the London School of Hygiene and Tropical Medicine — keeps his foot on the gas. The point of his book is that abstraction makes speculation not just possible, but essential. Gambling isn’t any kind of “underside” to the legitimate economy. It is the economy’s entire basis, and “the line between luck and skill — and between gambling and investing — is rarely as clear as we think.” (204)

When we don’t know everything, we have to speculate to progress. Speculation is by definition an insecure business, so we put a great deal of effort into knowing everything. The hope is that, the more cards we count, and the more attention we pay to the spin of the wheel, the more accurate our bets will become. This is the meat of Kucharski’s book, and occasions tremendous, spirited accounts of observational, mathematical, and computational derring-do among the blackjack and roulette tables of Las Vegas and Monte Carlo. On one level, The Perfect Bet is a serviceable book about professional gambling.

When we come to the chapter on sports betting, however, the thin line between gambling and investment vanishes entirely, and Kucharski carries us into some strange territory indeed.

Lay a bet on a tennis match: “if one bookmaker is offering odds of 2.1 on Nadal and another is offering 2.1 on Djokovic, betting $100 on each player will net you $210 — and cost you $100 — whatever the result. Whoever wins, you walk away with a profit of $10.” (108) You don’t need to know anything about tennis. You don’t even need to know the result of the match.
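Kucharski’s tennis example is a textbook two-way arbitrage, and the condition behind it is simple: a sure bet exists whenever the bookmakers’ implied probabilities sum to less than one (here, 1/2.1 + 1/2.1 ≈ 0.95). A minimal sketch of the calculation follows; the function name and figures are mine, purely for illustration.

```python
# Two-way arbitrage: split a fixed total stake across two mutually
# exclusive outcomes so the payout is the same whichever one occurs.
def sure_bet(odds_a, odds_b, total_stake=200.0):
    implied = 1 / odds_a + 1 / odds_b   # < 1 means an arbitrage exists
    stake_a = total_stake * (1 / odds_a) / implied
    stake_b = total_stake - stake_a
    payout = stake_a * odds_a           # equals stake_b * odds_b
    return stake_a, stake_b, round(payout - total_stake, 2)

print(sure_bet(2.1, 2.1))  # (100.0, 100.0, 10.0): $10 profit, whoever wins
```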

Ten dollars is not a great deal of money, so these kinds of bets have to be made in bulk and at great speed to produce a healthy return. Which is where the robots come in: trading algorithms that — contrary to popular myth — are kept simple (rarely running to more than ten lines of code) to keep them speedy. That simplicity is no small problem when you’re trying to automate the business of gaming the entire world. In 2013, a decade after the US Senate stumbled across Robin Hanson’s “policy market” idea, the S&P 500 stock index took a brief $136 billion dive when trading algorithms responded instantly to a malicious tweet claiming bombs had gone off in the White House.
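To make the “rarely more than ten lines” claim concrete, here is a deliberately crude sketch of the kind of rule involved. It is entirely my own illustration, not an example from the book: a threshold rule that reacts the moment a price leaves a band, with no model of why the price moved, which is exactly how a hoax tweet can move a market.

```python
# A toy speed-first trading rule (illustrative only): no context,
# no judgement, just an instant reaction to a price crossing a band.
def react(price, fair_value=100.0, band=0.5):
    if price < fair_value - band:
        return "BUY"    # looks cheap: buy at once
    if price > fair_value + band:
        return "SELL"   # looks rich: sell at once
    return "HOLD"       # inside the band: do nothing

for p in (99.2, 100.1, 101.3):
    print(p, react(p))  # BUY, HOLD, SELL
```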

The subtitle of Kucharski’s book states that “science and maths are taking the luck out of gambling”, and there’s little here to undercut the gloomy forecast. But Kucharski is also prosecuting a cleverer, more entertaining, and ultimately more disturbing line of argument. He is placing gambling at the heart of the body politic.

Risk reduction is every serious gambler’s vocation. The gambler is not there to take part. The gambler isn’t there to win. The gambler is there to find an edge, spot the tell, game the table, solve the market. The more parts, and the more interactions, the harder this is to do; but while it is true that the world is not simply deterministic, at a human scale, frankly, it might as well be.

In this smartphone-enabled and metadata-enriched world, complete knowledge of human affairs is becoming more or less possible. And imagine it: if we ever do crack our own markets, then the scope for individual action shrinks to a green zero. And we are done.

Eugenic America: how to exclude almost everyone

Imbeciles: The Supreme Court, American eugenics, and the sterilization of Carrie Buck by Adam Cohen (Penguin Press)

Defectives in the Land: Disability and immigration in the age of eugenics by Douglas C. Baynton (University of Chicago Press)

for New Scientist, 22 March 2016

ONE of 19th-century England’s last independent “gentleman scientists”, Francis Galton was the proud inventor of underwater reading glasses, an egg-timer-based speedometer for cyclists, and a self-tipping top hat. He was also an early advocate of eugenics, and his Hereditary Genius was published two years after the first part of Karl Marx’s Das Kapital.

Both books are about the betterment of the human race: Marx supposed environment was everything; Galton assumed the same for heredity. “If a twentieth part of the cost and pains were spent in measures for the improvement of the human race that is spent on the improvement of the breed of horses and cattle,” he wrote, “what a galaxy of genius might we not create! We might introduce prophets and high priests of civilisation into the world, as surely as we… propagate idiots by mating cretins.”

What would such a human breeding programme look like? Would it use education to promote couplings that produced genetically healthy offspring? Or would it discourage or prevent pairings that would otherwise spread disease or dysfunction? And would it work by persuasion or by compulsion?

The study of what was then called degeneracy fell to a New York social reformer, Richard Louis Dugdale. During an 1874 inspection of a jail in New York State, Dugdale learned that six of the prisoners there were related. He traced the Jukes family tree back six generations, and found that some 350 people related to this family by blood or marriage were criminals, prostitutes or destitute.

Dugdale concluded that, like genius, “degeneracy” runs in families, but his response was measured. “The licentious parent makes an example which greatly aids in fixing habits of debauchery in the child. The correction,” he wrote, “is change of the environment… Where the environment changes in youth, the characteristics of heredity may be measurably altered.”

Other reformers were not so circumspect. An Indiana reformatory promptly launched a eugenic sterilisation effort, and in 1907 Indiana enacted the world’s first compulsory sterilisation statute. California followed suit in 1909. Between 1927 and 1979, Virginia forcibly sterilised at least 7450 “unfit” people. One of them was Carrie Buck, a woman labelled feeble-minded and kept ignorant of the details of her own case right up to the point in October 1927 when her fallopian tubes were tied and cauterised using carbolic acid and alcohol.

In Imbeciles, Adam Cohen follows Carrie Buck through the US court system, past the desks of one legal celebrity after another, and not one of them, not William Howard Taft, not Louis Brandeis, not Oliver Wendell Holmes Jr, gave a damn about her.

Cohen anatomises in pitiless detail how inept civil society can be at assimilating scientific ideas. He also does a good job explaining why attempts to manipulate the genetic make-up of whole populations can only fail to improve the genetic health of our species. Eugenics fails because it looks for genetic solutions to what are essentially cultural problems. The anarchist biologist Peter Kropotkin made this point as far back as 1912. Who were the unfit, he asked the first international eugenics congress in London: workers or monied idlers? Those who produced degenerates in slums or those who produced degenerates in palaces? Culture exerts a huge influence over the way we live our lives, hopelessly complicating our measures of strength, fitness and success.

Readers of Cohen’s book would also do well to watch out for Douglas Baynton’s Defectives in the Land, to be published in June. Focusing on immigrant experiences in New York, Baynton explains how ideas about genetics, disability, race, family life and employment worked together to exclude an extraordinarily diverse range of men and women from the shores of the US.

“Doesn’t this squashy sentimentality of a big minority of our people about human life make you puke?” Holmes once exclaimed. Holmes was a miserable bigot, but he wasn’t wrong to thirst for more rigour in our public discourse. History is not kind to bad ideas.

How the forces inside cells actually behave

Animal Electricity: How we learned that the body and brain are electric machines by Robert B. Campenot (Harvard University Press) for New Scientist, 9 March 2016.

IF YOU stood at arm’s length from someone and each of you had 1 per cent more electrons than protons, the force pushing the two of you apart would be enough to lift a “weight” equal to that of the entire Earth.

This startling observation, from Richard Feynman’s Lectures on Physics, so impressed cell biologist Robert Campenot that he built quite a peculiar career around it. Not content with the mechanical metaphors of molecular biology, Campenot has studied living tissue as a delicate and complex mechanism that thrives by tweaking tiny imbalances in electrical charge.
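Feynman’s figure is easy to sanity-check. The following back-of-envelope sketch is mine, not Campenot’s or Feynman’s working, and every input (body mass, separation, electron count) is an assumption made for illustration.

```python
# Rough check of Feynman's claim: two bodies at arm's length, each
# with 1 per cent more electrons than protons.
K = 8.99e9               # Coulomb constant, N m^2 / C^2
E = 1.60e-19             # elementary charge, C
NUCLEON = 1.67e-27       # nucleon mass, kg

nucleons = 70.0 / NUCLEON        # a 70 kg body: ~4e28 nucleons
electrons = nucleons / 2         # roughly one electron per two nucleons
q = 0.01 * electrons * E         # 1 per cent imbalance: ~3e7 coulombs

r = 0.75                         # arm's length, in metres
force = K * q**2 / r**2          # Coulomb's law

earth_weight = 5.97e24 * 9.81    # Earth's mass times g, in newtons
print(f"{force:.1e} N vs {earth_weight:.1e} N")
# Both are of order 10^25 N: Feynman's claim holds to within a factor of a few.
```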

If only the book were better prepared. Campenot’s enthusiasm for Feynman has him repeat the anecdote about lifting the world almost word for word, in the preface and introduction. Duplicating material is a surprisingly easy gaffe for a writer, and it is why we have editors. Where were they?

Campenot’s generous account ranges from Galvani’s discovery of animal electricity to the development of thought-controlled prosthetic limbs. He has high regard for popular science. But his is the rather fussy appreciation of the academic outsider who, uncertain of the form’s aesthetic potential, praises it for its utility. “The value of popularising science should never be underestimated because it occasionally attracts the attention of people who go on to make major contributions.” The pantaloonish impression he makes here is not wholly unrepresentative of the book.

Again, one might wish Campenot’s relationship with his editor had been more creative. Popular science writing rarely handles electricity well, let alone ion channels and membrane potentials. So, when it comes to developing suitable metaphors, Campenot is thrown on his own resources. His metaphors are as effective as one could wish for, but they suffer from repetition. One imagines the author wondering if he has done enough to nail his point, but with no one to reassure him.

Faults aside, this is a good book. Its mix of schoolroom electricity and sophisticated cell biology is highly eccentric but this, I think, speaks much in Campenot’s favour. The way organic tissue manipulates electricity, sending signals in broad electrical waves that can extend up to a third of a metre, is a dimension of biology we have taken on trust, domesticating it behind high-order metaphors drawn from computer science. Consequently, we have been unable to visualise how the forces in our cells actually behave. This was always going to be an odd endeavour. So be it. The odder, the better, in fact.

Putting the wheel in its place

The Wheel: Inventions and reinventions by Richard W. Bulliet (Columbia University Press), for New Scientist, 20 January 2016

IN 1870, a year after the first rickshaws appeared in Japan, three inventors separately applied for exclusive rights. Already, there were too many workshops serving the burgeoning market.

We will never know which of them, if any, invented this internationally popular, stackable, hand-pulled passenger cart. Just three years after its invention, the rickshaw had totally displaced the palanquin (a covered litter carried on the shoulders of two bearers) as the preferred mode of passenger transport in Japan.

What made the rickshaw so different from a wagon or an ox-cart and, in the eyes of many Westerners, so cruel, was the idea of it being pulled by a man instead of a farm animal. Pushing wheelchairs and baby carriages posed no problem, but pulling turned a man into a beast. “This quirk of perception,” Bulliet says, “reflects a history of human-animal relations that the Japanese – who ate little red meat, had few large herds of cattle and horses, and seldom used animals to pull vehicles – did not share with Westerners.”

To some questions that seem far more difficult, Bulliet gives extraordinarily precise answers. He proposes an exact birth for the wheel: the wheel-set design, whereby wheels are fixed to rotating axles, was invented for use on mine cars in copper mines in the Carpathian mountains, perhaps as early as 4000 BC.

Other questions remain intractable. Why did wheeled vehicles not catch on in pre-Columbian America? The peoples of North and South America did not use wheels for transportation before Christopher Columbus arrived. They made wheeled toys, though. Cattle-herding societies from Senegal to Kenya were not taken with wheels either, though they were happy enough to feature the chariots of visitors in their rock paintings.

Bulliet has a lot of fun teasing generations of anthropologists, archaeologists and historians for whom the wheel has been a symbol of self-evident utility: how could those foreign types not get it? His answer is radical: the wheel is actually not that great an idea. It only really came into its own once John McAdam, a Scot born in 1756, introduced a superior way to build roads. It’s worth remembering that McAdam insisted the best way to manufacture the small, sharp-edged stones he needed was to have workers, including women and children, sit beside the road and break up larger rocks. So much for progress.

The wheel revolution is, to Bulliet’s mind, a recent and largely human-powered one. Bicycles, shopping carts, baby strollers, dollies, gurneys and roll-aboard luggage: none of these was conceived before 1800. At the dawn of Europe’s Renaissance, in the 14th century, four-wheeled vehicles were not in common use anywhere in the world.

Bulliet ends his history with the oddly conventional observation that “invention is seldom a simple matter of who thought of something first”. He could have challenged the modern shibboleth (born in Samuel Butler’s Erewhon and given mature expression in George Dyson’s Darwin Among the Machines) that technology evolves. Add energy to an unbounded system, and complexity is pretty much inevitable. There is nothing inevitable about technology, though; human agency cannot be ignored. Even a technology as ubiquitous as the wheel turns out to be a scrappy hostage to historical contingency.

I may be misrepresenting the author’s argument here. It is hard to tell, because Bulliet approaches the philosophy of technology quite gingerly. He can afford to release the soft pedal. This is a fascinating book, but we need more, Professor Bulliet!

More than human

For New Scientist: a review of Ian Tattersall’s The Strange Case of the Rickety Cossack, and other cautionary tales from human evolution

THE odd leg bones and prominent brow ridges of a fossil hominid unearthed in Germany’s Neander Valley in 1856 clearly belong to an ancient relative of Homo sapiens. But the anatomist August Mayer wasn’t having that: what he saw were the remains of a man who had spent his life on horseback despite a severe case of rickets, furrowing his brow in agony as a consequence, and who had hidden himself away to die under 2 metres of fossil-laden sediment.

The “Cossack” in Ian Tattersall’s new book, The Strange Case of the Rickety Cossack, exemplifies the risk of relying too much on the opinion of authorities and not enough on systematic analysis. Before they were bureaucratised and (where possible) automated, several sciences fell down that particular well.

Palaeoanthropology made repeated descents, creating a lot of entertaining clatter in the process. For example, Richard Leakey’s televised live spat with Donald Johanson over human origins in 1981 would be unimaginable today. I think Tattersall, emeritus curator at the American Museum of Natural History, secretly misses this heroic age of simmering feuds and monstrous egos.

The human fossil record ends with us. There are many kinds of lemur but, as he writes, only one kind of human, “intolerant of competition and uniquely able to eliminate it”. As a result, there is an immense temptation to see humans as the acme of an epic evolutionary project, and to downplay the diversity our genus once displayed.

Matters of theory rarely disturbed 20th-century palaeontologists: they assigned species names to practically every fossil they found, until the biologist Ernst Mayr, wielding insights from genetics, stunned them into embarrassed silence. Today, however, our severely pruned evolutionary tree grows bushier with every molecular, genetic and epigenetic discovery.

Some claim that the group of five quite distinct fossil individuals discovered in 1991 at Dmanisi, east of the Black Sea, belongs to one species. Use your eyes, says Tattersall: around 2 million years ago, four different kinds of hominid shared that region.

Tattersall explains how epigenetic effects on key genes cascade to produce radical morphological changes in an eye blink, and why our unusual thinking style, far from being the perfected product of long-term selective pressures, was bootstrapped out of existing abilities barely 100,000 years ago.

He performs a difficult balancing act with aplomb, telling the story of human evolution through an accurate and unsparing narrative of what scientists actually thought and did. His humility and generosity are exemplary.

A feast of bad ideas

This Idea Must Die: Scientific theories that are blocking progress, edited by John Brockman (Harper Perennial)

for New Scientist, 10 March 2015

THE physicist Max Planck had a bleak view of scientific progress. “A new scientific truth does not triumph by convincing its opponents…” he wrote, “but rather because its opponents eventually die.”

This is the assumption behind This Idea Must Die, the latest collection of replies to the annual question posed by impresario John Brockman on his stimulating and by now venerable online forum, Edge. The question is: which bits of science do we want to bury? Which ideas hold us back, trip us up or send us off in a futile direction?

Some ideas cited in the book are so annoying that we would be better off without them, even though they are true. Take “brain plasticity”. This was a real thing once upon a time, but the phrase spread promiscuously into so many corners of neuroscience that no one really knows what it means any more.

More than any amount of pontification (and readers wouldn’t believe how many new books agonise over what “science” was, is, or could be), Brockman’s posse capture the essence of modern enquiry. They show where it falls away into confusion (the use of cause-and-effect thinking in evolution), into religiosity (virtually everything to do with consciousness) and cant (for example, measuring nuclear risks with arbitrary yardsticks).

This is a book to argue with – even to throw against the wall at times. Several answers, cogent in themselves, still hit nerves. When Kurt Gray and Richard Dawkins, for instance, stick their knives into categorisation, I was left wondering whether scholastic hand-waving would really be an improvement. And Malthusian ideas about resources inevitably generate more heat than light when harnessed to the very different agendas of Matt Ridley and Andrian Kreye.

On the other hand, there is pleasure in seeing thinkers forced to express themselves in just a few hundred words. I carry no flag for futurist Douglas Rushkoff or psychologist Susan Blackmore, but how good to be wrong-footed. Their contributions are among the strongest, with Rushkoff discussing godlessness and Blackmore on the relationship between brain and consciousness.

Every reader will have a favourite. Mine is palaeontologist Julia Clarke’s plea that people stop asking her where feathered dinosaurs leave off and birds begin. Clarke offers lucid glimpses of the complexities and ambiguities inherent in deciphering the behaviour of long-vanished animals from thin fossil data. The next person to ask about the first bird will probably get a cake fork in their eye.

This Idea Must Die is garrulous and argumentative. I expected no less: Brockman’s formula is tried and tested. Better still, it shows no sign of getting old.

 

Maths into English

One to Nine by Andrew Hodges and The Tiger that Isn’t by Michael Blastland and Andrew Dilnot
reviewed for the Telegraph, 22 September 2007

Twenty-four years have passed since Andrew Hodges published his biography of the mathematician Alan Turing. Hodges, a long-term member of the Mathematical Physics Research Group at Oxford, has spent the years since exploring the “twistor geometry” developed by Roger Penrose, writing music and dabbling with self-promotion.

Follow the link to One to Nine’s web page, and you will soon be stumbling over the furniture of Hodges’s other lives: his music, his sexuality, his ambitions for his self-published novel – the usual spillage. He must be immune to bathos, or blind to it. But why should he care what other people think? He knows full well that, once put in the right order, these base metals will be transformed.

“Writing,” says Hodges, “is the business of turning multi-dimensional facts and ideas into a one-dimensional string of symbols.”

One to Nine – ostensibly a simple snapshot of the mathematical world – is a virtuoso stream of consciousness containing everything important there is to say about numbers (and Vaughan Williams, and climate change, and the Pet Shop Boys) in just over 300 pages. It contains multitudes. It is cogent, charming and deeply personal, all at once.

“Dense” does not begin to describe it. There is extraordinary concision at work. Hodges covers colour space and colour perception in two or three pages. The exponential constant e requires four pages. These examples come from the extreme shallow end of the mathematical pool: there are depths here not everyone will fathom. But this is the point: One to Nine makes the unfathomable enticing and gives the reader tremendous motivation to explore further.

This is a consciously old-fashioned conceit. One to Nine is modelled on Constance Reid’s 1956 classic, From Zero to Infinity. Like Reid’s, each of Hodges’s chapters explores the ideas associated with a given number. Mathematicians are quiet iconoclasts, so this is work that each generation must do for itself.

When Hodges considers his own contributions (in particular, to the mathematics underpinning physical reality), the skin tightens over the skull: “The scientific record of the past century suggests that this chapter will soon look like faded pages from Eddington,” he writes. (Towards the end of his life, Sir Arthur Eddington, who died in 1944, assayed a “theory of everything”. Experimental evidence ran counter to his work, which today generates only intermittent interest.)

But then, mathematics “does not have much to do with optimising personal profit or pleasure as commonly understood”.

The mordant register of his prose serves Hodges as well as it served Turing all those years ago. Like Turing: the Enigma, One to Nine proceeds, by subtle indirection, to express a man through his numbers.

If you think organisations, economies or nations would be more suited to mathematical description, think again. Michael Blastland and Andrew Dilnot’s The Tiger that Isn’t contains this description of the International Passenger Survey, the source of many of our immigration figures:

The ferry heaves into its journey and, equipped with their passenger vignettes, the survey team members also set off, like Attenboroughs in the undergrowth, to track down their prey, and hope they all speak English. And so the tides of people swilling about the world… are captured for the record if they travel by sea, when skulking by slot machines, half-way through a croissant, or off to the ladies’ loo.

Their point is this: in the real world, counting is back-breaking labour. Those who sieve the world for numbers – surveyors, clinicians, statisticians and the rest – are engaged in difficult work, and the authors think it nothing short of criminal the way the rest of us misinterpret, misuse or simply ignore their hard-won results. This is a very angry and very funny book.

The authors have worked together before, on the series More or Less – BBC Radio 4’s antidote to the sort of bad mathematics that mars personal decision-making, political debate, most press releases, and not a few items from the corporation’s own news schedule.

Confusion between correlation and cause, wild errors in the estimation of risk, the misuse of averages: Blastland and Dilnot round up and dispatch whole categories of woolly thinking.
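To take just one of those categories, the misuse of averages: in skewed data the mean and the median tell very different stories, and quoting the wrong one misleads. A tiny illustration of my own, with made-up figures:

```python
# Nine modest earners and one very high one: the mean is dragged far
# above what a typical person earns; the median stays honest.
from statistics import mean, median

salaries = [20_000] * 9 + [1_000_000]
print(f"mean:   {mean(salaries):,.0f}")    # 118,000
print(f"median: {median(salaries):,.0f}")  # 20,000
```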

They have a positive agenda. A handful of very obvious mathematical ideas – ideas they claim (with a certain insouciance) are entirely intuitive – are all we need to wield the numbers for ourselves; with them, we will be better informed, and will make more realistic decisions.

This is one of those maths books that claims to be self-help, and on the evidence presented here, we are in dire need of it. A late chapter contains the results of a general knowledge quiz given to senior civil servants in 2005.

The questions were simple enough. Among them: what share of UK income tax is paid by the top one per cent of earners? For the record, in 2005 it was 21 per cent. Our policy-makers didn’t have a clue.

“The deepest pitfall with numbers owes nothing to the numbers themselves and much to the slack way they are treated, with carelessness all the way to contempt.”

This jolly airport read will not change all that. But it should stir things up a bit.