Two hundred years of electro-foolery come good

Reading We Are Electric by Sally Adee for the Times, 28 January 2023

In an attempt to elucidate the role of electricity in biology, German polymath Alexander von Humboldt once stuck a charged wire up his bum and found that “a bright light appears before both eyes”.

Why the study of biological electricity should prove so irremediably smutty — so that serious “electricians” (as the early researchers called themselves) steered well clear of bodies for well over a century — is a mystery science journalist Sally Adee would rather not have to re-hash, though her by-the-by account of “two hundred years of electro-foolery”, during which quacks peddled any number of cockeyed devices to treat everything from cancer to excessive masturbation, is highly entertaining.

And while this history of electricity’s role in the body begins, conventionally enough, with Volta and Galvani, with spasming frog’s legs and other fairly gruesome experiments, this is really just necessary groundwork, so that Adee can better explain recent findings that are transforming our understanding of how bodies grow and develop, heal and regenerate.

Why bodies turn out the way they do has proved a vexing puzzle for the longest while. Genetics offers no answer, as DNA contains no spatial information. There are genes for, say, eye colour, but no genes for “grow two eyes”, and no genes for “stick two eyes in front of your head”.

So if genes don’t tell us the shape we should take as we grow, what does? The clue is in the title: we are, indeed, electric.

Adee explains that the forty trillion or so cells in our bodies are in constant electrical communication with each other. This chatter generates a field that dictates the form we take. For every structure in the body there is a specific membrane voltage range, and our cells specialise to perform different functions in line with the electrical cues they pick up from their neighbours. Which is (by way of arresting illustration) how in 2011 a grad student by the name of Sherry Aw managed, by manipulating electrical fields, to grow eyes on a developing frog’s belly.

The wonder is that this news will come as such a shock to so many readers (including, I dare say, many jobbing scientists). That our cells communicate electrically with each other without the mediation of nerves, and that the nervous system is only one of at least two (and probably many more) electrical communications systems — all this will come as a disconcerting surprise to many. Did you know that if you put skin, bone, blood, nerve — indeed, any biological cell — into a petri dish and apply an electric field, all the cells will crawl to the same end of the dish? It took decades before anyone thought to unpick the enormous implications of that fact.

Now we have begun to understand the importance of electrical fields in biology, we can begin to manipulate them. We’ve begun to restore some function after severe spinal injury (in humans), regrow whole limbs (in mice), and even turn cancerous tumours back into healthy tissue (in petri dishes).

Has bio-electricity — once the precinct of quacks and contrarians — at last come into its own? Has it matured? Has it grown up?

Well, yes and no. Adee would like to deliver a clear, single message about bioelectricity, but the field itself is still massively divided. On the one hand there is ground-breaking research being conducted into development, regeneration and healing. On the other, there are those who think electricity in the body is mostly to do with nerves and brains, and their project — to hack people’s minds through their central nervous systems and usher in some sort of psychoelectric utopia — shows no sign of faltering.

In the 1940s the American neurophysiologist Warren McCulloch worked on the assumption that the way neurons fire is a kind of biological binary code. This led to a new school of thought, called cybernetics — a science of communications and automatic control systems, both living and mechanical. The idea was that we should be able to drive an animal like a robot by simply activating specific circuits, an idea “so compelling”, says Adee, “there wasn’t much point bothering with whether it was based in fact.”

Very many other researchers Adee writes about are just as wedded to the idea of the body as a meat machine.

This book arose from an article Adee wrote for the magazine New Scientist about her experiences playing DARWARS Ambush!, a military training simulation conducted in a Californian defence lab that (maybe) amped up her response times and (maybe) increased her focus — all by means of a headset that electrically tickled precise regions in her brain.

Within days of the article’s publication in early 2012, Adee had become a sort of Joan of Arc figure for the online posthumanist community, and even turns up in Yuval Noah Harari’s book, where she serves as an Awful Warning about men becoming gods.

Adee finally comes clean: she would “love to take this whole idea of the body as an inferior meat puppet to be augmented with metal and absolutely launch it into the sun.” She is much more interested in the basic research going on into the communications within and between individual cells — a field where the more we know, the more we realise just how much we don’t understand.

Adee’s enthusiasm is infectious, and she conveys well the jaw-dropping scale and complexity of this newly discovered “electrome”. This is more than medicine. “The real excitement of the field,” she writes, “hews closer to the excitement around cosmology.”

A cherry is a cherry is a cherry

Life is Simple: How Occam’s Razor Sets Science Free and Shapes the Universe
by Johnjoe McFadden, reviewed for the Spectator, 28 August 2021

Astonishing, where an idea can lead you. You start with something that, 800 years hence, will sound like it’s being taught at kindergarten: Fathers are fathers, not because they are filled with some “essence of fatherhood”, but because they have children.

Fast forward a few years, and the Pope is trying to have you killed.

Not only have you run roughshod over his beloved eucharist (justified, till then, by some very dodgy Aristotelian logic-chopping); you’re also saying there’s no “essence of kinghood”, neither. If kings are only kings because they have subjects, then, said William of Occam, “power should not be entrusted to anyone without the consent of all”. Heady stuff for 1334.

How this progression of thought birthed the very idea of modern science is the subject of what may be the most sheerly enjoyable history of science of recent years.

William was born around 1288 in the little town of Ockham in Surrey. He was probably an orphan; at any rate he was given to the Franciscan order around the age of eleven. He shone at Greyfriars in London, and around 1310 was dispatched to Oxford’s newfangled university.

All manner of intellectual, theological and political shenanigans followed, mostly to do with William’s efforts to demolish almost the entire edifice of medieval philosophy.

It needed demolishing, and that’s because it still held to Aristotle’s ideas about what an object is. Aristotle wondered how single objects and multiples can co-exist. His solution: categorise everything. A cherry is a cherry is a cherry, and all cherries have cherryness in common. Cherryness is a “universal”; the properties that might distinguish one cherry from another are “accidents”.

The trouble with Aristotle’s universals, though, is that they assume a one-to-one correspondence between word and thing, and posit a universe made up of a terrifying number of unique things — at least one for each noun or verb in the language.

And the problem with that is that it’s an engine for making mistakes.

Medieval philosophy relied largely on syllogistic reasoning, juggling things into logical-looking relations. “Socrates is a man, all men are mortal, so Socrates is mortal.”

So he is, but — and this is crucial — this conclusion is arrived at more by luck than good judgement. The statement isn’t “true” in any sense; it’s merely internally consistent.

Imagine we make a mistake. Imagine we spring from a society where beards are pretty much de rigueur (classical Athens, say, or Farringdon Road). Imagine we said, “Socrates is a man, all men have beards, therefore Socrates has a beard”.

Though one of its premises is wrong, the statement barrels ahead regardless; it’s internally consistent, and so, if you’re not paying attention, it creates the appearance of truth.

But there’s worse: the argument that gives Socrates a beard might actually be true. Some men do have beards. Socrates may be one of them. And if he is, that beard seems — again, if you’re not paying attention — to confirm a false assertion.

William of Occam understood that our relationship with the world is a lot looser, cloudier, and more indeterminate than syllogistic logic allows. That’s why, when a tavern owner hangs a barrel hoop outside his house, passing travellers know they can stop there for a drink. The moment words are decoupled from things, then they act as signs, negotiating flexibly with a world of blooming, buzzing confusion.

Once we take this idea to heart, then very quickly — and as a matter of taste more than anything — we discover how much more powerful straightforward explanations are than complicated ones. Occam came up with a number of versions of what even then was not an entirely new idea: “It is futile to do with more what can be done with less,” he once remarked. Subsequent formulations do little but gild this lily.

His idea proved so powerful, three centuries later the French theologian Libert Froidmont coined the term “Occam’s razor”, to describe how we arrive at good explanations by shaving away excess complexity. As McFadden shows, that razor’s still doing useful work.

Life is Simple is primarily a history of science, tracing William’s dangerous idea through astronomy, cosmology, physics and biology, from Copernicus to Brahe, Kepler to Newton, Darwin to Mendel, Einstein to Noether to Weyl. But McFadden never loses sight of William’s staggering, in some ways deplorable influence over the human psyche as a whole. For if words are independent of things, how do we know what’s true?

Thanks to William of Occam, we don’t. The universe, after Occam, is unknowable. Yes, we can come up with explanations of things, and test them against observation and experience; but from here on in, our only test of truth will be utility. Ptolemy’s 2nd-century Almagest, a truly florid description of the motions of the stars and planetary paths, is not and never will be *wrong*; the worst we can say is that it’s overcomplicated.

In the Coen brothers’ movie The Big Lebowski, an exasperated Dude turns on his friend: “You’re not *wrong*, Walter” he cries, “you’re just an asshole.” William of Occam is our universal Walter, and the first prophet of our disenchantment. He’s the friend we wish we’d never listened to, when he told us Father Christmas was not real.

Know when you’re being played

Calling Bullshit by Jevin D West and Carl T Bergstrom, and Science Fictions by Stuart Ritchie, reviewed for The Telegraph, 8 August 2020

Last week I received a press release headlined “1 in 4 Brits say ‘No’ to Covid vaccine”. This was such staggeringly bad news, I decided it couldn’t possibly be true. And sure enough, it wasn’t.

Armed with the techniques taught me by biologist Carl Bergstrom and data scientist Jevin West, I “called bullshit” on this unwelcome news, which after all bore all the hallmarks of clickbait.

For a start, the question on which the poll was based was badly phrased. On closer reading it turns out that 25 per cent would decline if the government “made a Covid-19 vaccine available tomorrow”. Frankly, if it was offered *tomorrow* I’d be a refusenik myself. All things being equal, I prefer my medicines tested first.

But what of the real meat of the claim — that daunting figure of “25 per cent”?  It turns out that a sample of 2000 was selected from a sample of 17,000 drawn from the self-selecting community of subscribers to a lottery website. But hush my cynicism: I am assured that the sample of 2000 was “within +/-2% of ONS quotas for Age, Gender, Region, SEG, and 2019 vote, using machine learning”. In other words, some effort has been made to make the sample of 2000 representative of the UK population (but only on five criteria, which is not very impressive. And that whole “+/-2%” business means that up to 40 of the sample weren’t representative of anything).

For this, “machine learning” had to be employed (and, later, “a proprietary machine learning system”)? Well, of course not.  Mention of the miracle that is artificial intelligence is almost always a bit of prestidigitation to veil the poor quality of the original data. And anyway, no amount of “machine learning” can massage away the fact that the sample was too thin to serve the sweeping conclusions drawn from it (“Only 1 in 5 Conservative voters (19.77%) would say No” — it says, to two decimal places, yet!) and is anyway drawn from a non-random population.

Exhausted yet? Then you may well find Calling Bullshit essential reading. Even if you feel you can trudge through verbal bullshit easily enough, this book will give you the tools to swim through numerical snake-oil. And this is important, because numbers easily slip  past the defences we put up against mere words. Bergstrom and West teach a course at the University of Washington from which this book is largely drawn, and hammer this point home in their first lecture: “Words are human constructs,” they say; “Numbers seem to come directly from nature.”

Shake off your naive belief in the truth or naturalness of the numbers quoted in news stories and advertisements, and you’re halfway towards knowing when you’re being played.

Say you diligently applied the lessons in Calling Bullshit, and really came to grips with percentages, causality, selection bias and all the rest. You may well discover that you’re now ignoring everything — every bit of health advice, every over-wrought NASA announcement about life on Mars, every economic forecast, every exit poll. Internet pioneer Jaron Lanier reached this point last year when he came up with Ten Arguments for Deleting Your Social Media Accounts Right Now. More recently the best-selling Swiss pundit Rolf Dobelli has ordered us to Stop Reading the News. Both deplore our current economy of attention, which values online engagement over the provision of actual information (as when, for instance, a  review like this one gets headlined “These Two Books About Bad Data Will Break Your Heart”; instead of being told what the piece is about, you’re being sold on the promise of an emotional experience).

Bergstrom and West believe that public education can save us from this torrent of micro-manipulative blither. Their book is a handsome contribution to that effort. We’ve lost Lanier and Dobelli, but maybe the leak can be stopped up. This, essentially, is what the authors are about; they’re shoring up the Enlightenment ideal of a civic society governed by reason.

Underpinning this ideal is science, and the conviction that the world is assembled on a bedrock of fundamentally unassailable truths.

Philosophical nit-picking apart, science undeniably works. But in Science Fictions Stuart Ritchie, a psychologist based at King’s College, shows just how contingent and gimcrack and even shoddy the whole business can get. He has come to praise science, not to bury it; nevertheless, his analyses of science’s current ethical ills — fraud, hype, negligence and so on — are devastating.

The sheer number of problems besetting the scientific endeavour becomes somewhat more manageable once we work out which ills are institutional, which have to do with how scientists communicate, and which are existential problems that are never going away whatever we do.

Our evolved need to express meaning through stories is an existential problem. Without stories, we can do no thinking worth the name, and this means that we are always going to prioritise positive findings over negative ones, and find novelties more charming than rehearsed truths.

Such quirks of the human intellect can be and have been corrected by healthy institutions at least some of the time over the last 400-odd years. But our unruly mental habits run wildly out of control once they are harnessed to a media machine driven by attention.  And the blame for this is not always easily apportioned: “The scenario where an innocent researcher is minding their own business when the media suddenly seizes on one of their findings and blows it out of proportion is not at all the norm,” writes Ritchie.

It’s easy enough to mount a defence of science against the tin-foil-hat brigade, but Ritchie is attempting something much more discomforting: he’s defending science against scientists. Fraudulent and negligent individuals fall under the spotlight occasionally, but institutional flaws are Ritchie’s chief target.

Reading Science Fictions, we see field after field fail to replicate results, correct mistakes, identify the best lines of research, or even begin to recognise talent. In Ritchie’s proffered bag of solutions are desperately needed reforms to the way scientific work is published and cited, and some more controversial ideas about how international mega-collaborations may enable science to catch up on itself and check its own findings effectively (or indeed at all, in the dismal case of economic science).

At best, these books together offer a path back to a civic life based on truth and reason. At worst, they point towards one that’s at least a little bit defended against its own bullshit. Time will tell whether such efforts can genuinely turn the ship around, or are simply here to entertain us with a spot of deckchair juggling. But there’s honest toil here, and a lot of smart thinking with it. Reading both, I was given a fleeting, dizzying reminder of what it once felt like to be a free agent in a factual world.

Pig-philosophy

Reading Science and the Good: The Tragic Quest for the Foundations of Morality
by James Davison Hunter and Paul Nedelisky (Yale University Press) for the Telegraph, 28 October 2019

Objective truth is elusive and often surprisingly useless. For ages, civilisation managed well without it. Then came the sixteenth century, and the Wars of Religion, and the Thirty Years War: atrocious conflicts that robbed Europe of up to a third of its population.

Something had to change. So began a half-a-millennium-long search for a common moral compass: something to keep us from wringing each other’s necks. The 18th-century French philosopher Condorcet, writing in 1794, expressed the evergreen hope that empiricists, applying themselves to the study of morality, would be able “to make almost as sure progress in these sciences as they had in the natural sciences.”

Today, are we any nearer to understanding objectively how to tell right from wrong?

No. So say James Davison Hunter, a sociologist who in 1991 slipped the term “culture wars” into American political debate, and Paul Nedelisky, a recent philosophy PhD, both from the University of Virginia. For sure, “a modest descriptive science” has grown up to explore our foibles, strengths and flaws, as individuals and in groups. There is, however, no way science can tell us what ought to be done.

Science and the Good is a closely argued, always accessible riposte to those who think scientific study can explain, improve, or even supersede morality. It tells a rollicking good story, too, as it explains what led us to our current state of embarrassed moral nihilism.

“What,” the essayist Michel de Montaigne asked, writing in the late 16th century, “am I to make of a virtue that I saw in credit yesterday, that will be discredited tomorrow, and becomes a crime on the other side of the river?”

Montaigne’s times desperately needed a moral framework that could withstand the almost daily schisms and revisions of European religious life following the Protestant Reformation. Nor was Europe any longer a land to itself. Trade with other continents was bringing Europeans into contact with people who, while eminently businesslike, held to quite unfamiliar beliefs. The question was (and is), how do we live together at peace with our deepest moral differences?

The authors have no simple answer. The reason scientists keep trying to formulate one is the same reason the farmer tried teaching his sheep to fly in the Monty Python sketch: “Because of the enormous commercial possibilities should he succeed.” Imagine conjuring up a moral system that was common, singular and testable: world peace would follow in an instant!

But for every Jeremy Bentham, measuring moral utility against an index of human happiness to inform a “felicific calculus”, there’s a Thomas Carlyle, pointing out the crashing stupidity of the enterprise. (Carlyle called Bentham’s 18th-century utilitarianism “pig-philosophy”, since happiness is the sort of vague, unspecific measure you could just as well apply to animals as to people.)

Hunter and Nedelisky play Carlyle to the current generation of scientific moralists. They range widely in their criticism, and are sympathetic to a fault, but to show what they’re up to, let’s have some fun and pick a scapegoat.

In Moral Tribes (2014), Harvard psychologist Joshua Greene sings Bentham’s praises: “utilitarianism becomes uniquely attractive,” he asserts, “once our moral thinking has been objectively improved by a scientific understanding of morality…”

At worst, this is a statement that eats its own tail. At best, it’s Greene reducing the definition of morality to fit his own specialism, replacing moral goodness with the merely useful. This isn’t nothing, and is at least something which science can discover. But it is not moral.

And if Greene decided tomorrow that we’d all be better off without, say, legs, practical reason, far from faulting him, could only show us how to achieve his goal in the most efficient manner possible. The entire history of the 20th century should serve as a reminder that this kind of thinking — applying rational machinery to a predetermined good — is a joke that palls extremely quickly. Nor are vague liberal gestures towards “social consensus” comforting, or even welcome. As the authors point out, “social consensus gave us apartheid in South Africa, ethnic cleansing in the Balkans, and genocide in Armenia, Darfur, Burma, Rwanda, Cambodia, Somalia, and the Congo.”

Scientists are on safer ground when they attempt to explain how our moral sense may have evolved, arguing that morals aren’t imposed from above or derived from well-reasoned principles, but are values derived from reactions and judgements that improve the odds of group survival. There’s evidence to back this up and much of it is charming. Rats play together endlessly; if the bigger rat wrestles the smaller rat into submission more than three times out of five, the smaller rat trots off in a huff. Hunter and Nedelisky remind us that Capuchin monkeys will “down tools” if experimenters offer them a reward smaller than one they’ve already offered to other Capuchin monkeys.

What does this really tell us, though, beyond the fact that somewhere, out there, is a lawful corner of necessary reality which we may as well call universal justice, and which complex creatures evolve to navigate?

Perhaps the best scientific contribution to moral understanding comes from studies of the brain itself. Mapping the mechanisms by which we reach moral conclusions is useful for clinicians. But it doesn’t bring us any closer to learning what it is we ought to do.

Sociologists since Edward Westermarck in 1906 have shown how a common (evolved?) human morality might be expressed in diverse practices. But over all this hangs the shadow of moral scepticism: the uneasy suspicion that morality may be no more than an emotive vocabulary without content, a series of justificatory fabrications. “Four legs good,” as Snowball had it, “two legs bad.”

But even if it were shown that no-one in the history of the world ever committed a truly selfless act, the fact remains that our mythic life is built, again and again, precisely around an act of self-sacrifice. Pharaonic Egypt had Osiris. Europe and its holdings, Christ. Even Hollywood has Harry Potter. Moral goodness is something we recognise in stories, and something we strive for in life (and if we don’t, we feel bad about ourselves). Philosophers and anthropologists and social scientists have lots of interesting things to say about why this should be so. The life sciences crew would like to say something, also.

But as this generous and thoughtful critique demonstrates, and to quite devastating effect, they just don’t have the words.

The weather forecast: a triumph hiding in plain sight

Reading The Weather Machine by Andrew Blum (Bodley Head) for the Telegraph, 6 July 2019

Reading New York journalist Andrew Blum’s new book has cured me of a foppish and annoying habit. I no longer dangle an umbrella off my arm on sunny days, tripping up my fellow commuters before (inevitably) mislaying the bloody thing on the train to Coulsdon Town. Very late, and to my considerable embarrassment, I have discovered just how reliable the weather forecast is.

My thoroughly English prejudice against the dark art of weather prediction was already set by the time the European Centre for Medium-Range Weather Forecasts opened in Reading in 1979. Then the ECMWF claimed to be able to see three days into the future. Six years later, it could see five days ahead. It knew about Sandy, the deadliest hurricane of 2012, eight days ahead, and it expects to predict high-impact events a fortnight before they happen by the year 2025.

The ECMWF is a world leader, but it’s not an outlier. Look at the figures: weather forecasts have been getting consistently better for 40 straight years. Blum reckons this makes the current global complex of machines, systems, networks and acronyms (and there are lots of acronyms) “a high point of science and technology’s aspirations for society”.

He knows this is a minority view: “The weather machine is a wonder we treat as a banality,” he writes: “a tool that we haven’t yet learned to trust.” The Weather Machine is his attempt to convey the technical brilliance and political significance of an achievement that hides in plain sight.

The machine’s complexity alone is off all familiar charts, and sets Blum a significant challenge. “As a rocket scientist at the Jet Propulsion Laboratory put it to me… landing a spacecraft on Mars requires dealing with hundreds of variables,” he writes; “making a global atmospheric model requires hundreds of thousands.” Blum does an excellent job of describing how meteorological theory and observation were first stitched together, and why even today their relationship is a stormy one.

His story opens in heroic times, with Robert FitzRoy one of his more engaging heroes. FitzRoy is best remembered for captaining HMS Beagle and weathering the puppyish enthusiasm of a young Charles Darwin. But his real claim to fame is as a meteorologist. He dreamt up the term “forecast”, turned observations into predictions that saved sailors’ lives, and foresaw with clarity what a new generation of naval observers would look like. Distributed in space and capable of communicating instantaneously with each other, they would be “as if an eye in space looked down on the whole North Atlantic”.

You can’t produce an accurate forecast from observation alone, however. You also need a theory of how the weather works. The Norwegian physicist Vilhelm Bjerknes came up with the first mathematical model of the weather: a set of seven interlinked partial differential equations that handled the fact that the atmosphere is a far from ideal fluid. Sadly, Bjerknes’ model couldn’t yet predict anything — as he himself said, solutions to his equations “far exceed the means of today’s mathematical analysis”. As we see our models of the weather evolve, so we see works of individual genius replaced by systems of machine computation. In the observational realm, something similar happens: the heroic efforts of individual observers throw up trickles of insight that are soon subsumed in the torrent of data streaming from the orbiting artefacts of corporate and state engineering.

The American philosopher Timothy Morton dreamt up the term “hyperobject” to describe things that are too complex and numinous to describe in plain terms. Blum, whose earlier book was Tubes: Behind the Scenes at the Internet (2012), fancies his chances at explaining human-built hyperobjects in solid, clear terms, without recourse to metaphor and poesy. In this book, for example, he recognises the close affinity of military and meteorological infrastructures (the staple of many a modish book on the surveillance state), but resists any suggestion that they are the same system.

His sobriety is impressive, given how easy it is to get drunk on this stuff. In October 1946, technicians at the White Sands Proving Ground in New Mexico installed a camera in the nose cone of a captured V2, and its launch yielded photographs of a quarter of the US — nearly a million square miles banded by clouds “stretching hundreds of miles in rows like streets”. This wasn’t the first time a bit of weather kit acted as an expendable test in a programme of weapons development, and it certainly wasn’t the last. Today’s global weather system has not only benefited from military advancements in satellite positioning and remote sensing; it has made those systems possible. Blum allows that “we learned to see the whole earth thanks to the technology built to destroy the whole earth”. But he avoids paranoia.

Indeed, he is much more impressed by the way countries going at each other hammer and tongs on the political stage nevertheless collaborated closely and well on a global weather infrastructure. Point four of John F Kennedy’s famous 1961 speech on “Urgent National Needs” called for “a satellite system for worldwide weather observation”, and it wasn’t just militarily useful American satellites he had in mind for the task: in 1962 Harry Wexler of the U.S. Weather Bureau worked with his Soviet counterpart Viktor Bugaev on a report proposing a “World Weather Watch”, and by 1963 there was, Blum finds, “a conscious effort by scientists — on both sides of the Iron Curtain, in all corners of the earth — to design an integrated and coordinated apparatus” — this at a time when weather satellites were so expensive they could be justified only on national security grounds.

Blum’s book comes a little bit unstuck at the end. A final chapter that could easily have filled a third of the book is compressed into just a few pages’ handwaving and special pleading, as he conjures up a vision of a future in which the free and global nature of weather information has ceased to be a given and the weather machine, that “last bastion of international cooperation”, has become just one more atomised ghost of a future the colonial era once promised us.

Why end on such a minatory note? The answer, which is by no means obvious, is to be found in Reading. Today 22 nations pay for the ECMWF’s maintenance of a pair of Cray supercomputers. The fastest in the world, these machines must be upgraded every two years. In the US, meanwhile, weather observations rely primarily on the health of four geostationary satellites, at a cost of $11 billion. (America’s whole National Weather Service budget is only around $1 billion.)

Blum leaves open the question: how is an organisation built by nation-states, committed to open data and born of a global view, supposed to work in a world where information lives on private platforms and travels across private networks — a world in which billions of tiny temperature and barometric sensors, “in smartphones, home devices, attached to buildings, buses or airliners,” are aggregated by the likes of Google, IBM or Amazon?

One thing is disconcertingly clear: Blum’s weather machine, which in one sense is a marvel of continuing modernity, is also, truth be told, a dinosaur. It is ripe for disruption, of a sort that the world, grown so reliant on forecasting, could well do without.

The Usefulness of Useless Knowledge

Reading The Usefulness of Useless Knowledge by Abraham Flexner, and Knowledge for Sale: The neoliberal takeover of higher education by Lawrence Busch for New Scientist, 17 March 2017


IN 1930, the US educator Abraham Flexner set up the Institute for Advanced Study, an independent research centre in Princeton, New Jersey, where leading lights as diverse as Albert Einstein and T. S. Eliot could pursue their studies, free from everyday pressures.

For Flexner, the world was richer than the imagination could conceive and wider than ambition could encompass. The universe was full of gifts and this was why pure, “blue sky” research could not help but turn up practical results now and again, of a sort quite impossible to plan for.

So, in his 1939 essay “The usefulness of useless knowledge”, Flexner listed a few of the practical gains that have sprung from what we might, with care, term scholastic noodling. Electromagnetism was his favourite. We might add quantum physics.

Even as his institute opened its doors, the world’s biggest planned economy, the Soviet Union, was conducting a grand and opposite experiment, harnessing all the sciences for their immediate utility and problem-solving ability.

During the cold war, the vast majority of Soviet scientists were reduced to mediocrity, given only sharply defined engineering problems to solve. Flexner’s better-known affiliates, meanwhile, garnered reputations akin to those enjoyed by other mascots of Western intellectual liberty: abstract-expressionist artists and jazz musicians.

At a time when academia is once again under pressure to account for itself, the Princeton University Press reprint of Flexner’s essay is timely. Its preface, however, is another matter. Written by current institute director Robbert Dijkgraaf, it exposes our utterly instrumental times. For example, he employs junk metrics such as “more than half of all economic growth comes from innovation”. What for Flexner was a rather sardonic nod to the bottom line has become for Dijkgraaf the entire argument – as though “pure research” simply meant “long-term investment”, and civic support came not from existential confidence and intellectual curiosity, but from scientists “sharing the latest discoveries and personal stories”. So much for escaping quotidian demands.

We do not know what the tightening of funding for scientific research that has taken place over the past 40 years would have done for Flexner’s own sense of noblesse oblige. But this we can be sure of: utilitarian approaches to higher education are dominant now, to the point of monopoly. The administrative burdens and stultifying oversight structures throttling today’s scholars come not from Soviet-style central planning, but from the application of market principles – an irony that the sociologist Lawrence Busch explores exhaustively in his monograph Knowledge for Sale.

Busch explains how the first neo-liberal thinkers sought to prevent the rise of totalitarian regimes by replacing governance with markets. Those thinkers believed that markets were safer than governments because they were cybernetic and so corrected themselves. Right?

Wrong: Busch provides ghastly disproofs of this neo-liberal vision from within the hall of academe, from bad habits such as a focus on counting citations and publication output, through fraud, to existential crises such as the shift in the ideal of education from a public to a private good. But if our ingenious, post-war market solution to the totalitarian nightmare of the 1940s has itself turned out to be a great vampire squid wrapped around the face of humanity (as journalist Matt Taibbi once described investment bank Goldman Sachs), where have we left to go?

Flexner’s solution requires from us a confidence that is hard to muster right now. We have to remember that the point of study is not to power, enable, de-glitch or otherwise save civilisation. The point of study is to create a civilisation worth saving.

“Some only appear crazy. Others are as mad as a bag of cats”


Stalin’s more eccentric scientists are the subject of this blogpost for Faber & Faber.

Stalin and the Scientists describes what happened when, early in the twentieth century, a handful of impoverished and under-employed graduates, professors and entrepreneurs, collectors and charlatans, bound themselves to a failing government to create a world superpower. Envied and obsessed over by Joseph Stalin — ‘the Great Scientist’ himself — scientists in disciplines from physics to psychology managed to steer his empire through famine, drought, soil exhaustion, war, rampant alcoholism, a huge orphan problem, epidemics and an average life expectancy of thirty years. Hardly any of them are well known outside Russia, yet their work shaped global progress for well over a century.

Cold War propaganda cast Soviet science as an eccentric, gimcrack, often sinister enterprise. And, to my secret delight, not every wild story proved to be a fabrication. Indeed, a heartening amount of the smoke shrouding Soviet scientific achievement can be traced back to intellectual arson attacks of one sort or another.

I’ll leave it to the book to explain why Stalin’s scientists deserve our admiration and respect. This is the internet, so let’s have some fun. Here, in no particular order, are my top five scientific eccentrics. Some only appear crazy; others have had craziness thrust upon them by hostile commentators. Still others were as mad as a bag of cats.

1. Ilya Ivanov
Ilya Ivanov, the animal breeding expert who tried to mate humans with chimpanzees

By the time of the 1917 revolution, Ilya Ivanov was already an international celebrity. His pioneering artificial insemination techniques were transforming world agriculture. However, once he lost his Tsarist patrons, he had to find a research programme that would catch the eye of the new government’s Commissariat of Education. What he came up with was certainly compelling: a proposal to cross-breed humans and chimpanzees.

We now know there are immunological difficulties preventing such a cross, but the basic idea is not at all crazy, and Ivanov got funding from Paris and America to travel to Guinea to further the study.

Practically and ethically the venture was a disaster. Arriving at the primate centre in Kindia, Ivanov discovered that its staff were killing and maiming far more primates than they ever managed to capture. To make matters worse, after a series of gruesome and rapine attempts to impregnate chimpanzees with human sperm, Ivanov decided it might be easier to turn the experiment on its head and fertilise African women with primate sperm. Unfortunately, he failed to tell them what he was doing.

Ivanov was got rid of during the purges of the early 1930s thanks to a denunciation by an ambitious colleague, but his legacy survives. The primate sanctuary he founded in Sukhumi by the Black Sea provided primates for the Soviet space programme. Meanwhile the local tourist industry makes the most of, and indeed maintains, persistent rumours that the local woods are haunted by seven-foot-tall Stalinist ape-men.

2. Alexander Bogdanov
whose Mars-set science fiction laid the groundwork for the Soviet Union’s first blood transfusion service — and who died of blood poisoning

Alexander Alexandrovich Bogdanov, co-founder of the Bolshevik movement, lost interest in politics, even as control came within his grasp, because he wanted more time for his writing.

In his novels Red Star and Engineer Menni, blood exchanges among his Martian protagonists level out their individual and sexual differences and extend their lifespan through the inheritance of acquired characteristics.

These scientific fantasies took an experimental turn in 1921 during a trade junket to London when he happened across Blood Transfusion, a book by Geoffrey Keynes (younger brother of the economist). Two years of private experiments followed, culminating in an appointment with the Communist Party’s general secretary, Joseph Stalin. Bogdanov was quickly installed as head of a new ‘scientific research institute of blood transfusion’.

Blood, Bogdanov claimed, was a universal tissue that unified all other organs, tissues and cells. Transfusions offered the client better sleep, a fresher complexion, a change in eyeglass prescriptions, and greater resistance to fatigue. On 24 March 1928 he conducted a typically Martian experiment, mutually transfusing blood with a male student, suffered a massive transfusion reaction and died two weeks later at the age of fifty-four.

Bogdanov the scientist never offered up his studies to the review of his peers. In fact he never wrote any actual science at all, just propaganda for the popular press. In this, he resembled no-one so much as the notorious charlatan (and Stalin’s poster boy) Trofim Lysenko. I reckon it was his example that made Lysenko politically possible.

3. Trofim Lysenko
Stalin’s poster-boy, who believed plants sacrifice themselves for their strongest neighbour — and was given the job of reforesting European Russia.

Practical, working-class, ambitious and working for the common good, the agrobiologist Trofim Lysenko was the very model of the new Soviet scientist. Rather than studying ‘the hairy legs of flies’, ran one Pravda profile in August 1927, this sober young man ‘went to the root of things,’ solving practical problems by a few calculations ‘on a little old piece of paper’.

As he studied how different varieties of the same crop responded to being planted at different times, he never actually touched any mathematics, relying instead on crude theories ‘proved’ by arbitrary examples.

Lysenko wanted, above all else, to be an original. An otherwise enthusiastic official report warned that he was an ‘extremely egotistical person, deeming himself to be a new Messiah of biological science.’ Unable to understand the new-fangled genetics, he did everything he could to banish it from biology. In its place he championed ‘vernalisation’, a planting technique that failed dismally to increase yields. Undeterred, he went on to theorise about species formation, and advised the government on everything, from how to plant oak trees across the entire Soviet Union to how to increase the butterfat content of milk. The practical results of his advice were uniformly disastrous and yet, through a combination of belligerence, working-class credentials, and a phenomenal amount of luck, he remained the poster-boy of Soviet agriculture right up until the fall of Khrushchev in 1964.

Nor is his ghost quite laid to rest. A couple of politically motivated historians are even now attempting to recast Lysenko as a cruelly sidelined pioneer of epigenetics (the study of how the environment regulates gene expression). This is a cruel irony, since Soviet Russia really was the birthplace of epigenetics! And it was Lysenko’s self-serving campaigns that saw every single worker in that field sacked and ruined.

4. Olga Lepeshinskaya
who screened in reverse films of rotting eggs to prove her theories about cell development — and won a Stalin Prize

Olga Lepeshinskaya, a personal friend of Lenin and his wife, was terrifyingly well-connected and not remotely intimidated by power. On a personal level, she was charming. She fiercely opposed anti-semitism, and had dedicated her personal life to the orphan problem, bringing up at least half a dozen children as her own.

As a scientist, however, she was a disaster. She once announced to the Academic Council of the Institute of Morphology that soda baths could rejuvenate the old and preserve the youth of the young. A couple of weeks later Moscow completely sold out of baking soda.

In her old age, Lepeshinskaya became entranced by the mystical concept of the ‘vital substance’, and recruited her extended family to work in her ‘laboratory’, pounding beetroot seeds in a pestle to demonstrate that any part of the seed could germinate. She even claimed to have filmed living cells emerge from noncellular materials. Lysenko hailed Lepeshinskaya’s discovery as the basis for a new theory of species formation, and in May 1950 Alexander Oparin, head of the Academy of Sciences’ biology department, invited Olga Lepeshinskaya to receive her Stalin Prize.

It was all a fraud, of course: she had been filming the death and decomposition of cells, then running her film backwards through the projector. Lepeshinskaya made a splendid myth. The subject of poetry. The heroine of countless plays. In school and university textbooks she was hailed as the author of the greatest biological discovery of all time.

5. Joseph Stalin
whose obsession with growing lemons in Siberia became his only hobby

Stalin, typically for his day, believed in the inheritance of acquired characteristics – that a giraffe that has to stretch to reach high leaves will have long-necked children. He assumed that, given the right conditions, living things were malleable, and as the years went by this obsession grew. In 1946 he became especially keen on lemons, encouraging their growth not only in coastal Georgia, where they fared quite well, but also in the Crimea, where winter frosts destroyed them.

Changing the nature of lemons became Stalin’s sole hobby. At his dachas near Moscow and in the south, large greenhouses were erected so that he could enter them directly from the house, day or night. Pruning shrubs and plants was his only physical exercise.

Stalin shared with his fellow Bolsheviks the idea that they had to be philosophers in order to deserve their mandate. He schooled the USSR’s most prominent philosopher, Georgy Aleksandrov, on Hegel’s role in the history of Marxism. He told the composer Dmitry Shostakovich how to change the orchestration for the new national anthem. He commissioned the celebrated war poet Konstantin Simonov to write a play about a famous medical controversy, treated him to an hour of literary criticism, and then rewrote the closing scenes himself. Sergei Eisenstein and his scriptwriter on Ivan the Terrible Part Two were treated to a filmmaking masterclass. And in 1950, while he was negotiating a pact with the People’s Republic of China, and discussing how to invade South Korea with Kim Il Sung, Stalin was also writing a combative article about linguistics, and meeting with economists multiple times to discuss a textbook.

Stalin’s paranoia eventually pushed him into pronouncements that were more and more peculiar. Unable to trust even himself, it came to Joseph Stalin that people were, or ought to be, completely readable from first to last. All it needed was an entirely verbal theory of mind. ‘There is nothing in the human being which cannot be verbalised,’ he asserted, in 1949. ‘What a person hides from himself he hides from society. There is nothing in the Soviet society that is not expressed in words. There are no naked thoughts. There exists nothing at all except words.’

For Stalin, in the end, even a person’s most inner world was readable – because if it wasn’t, then it couldn’t possibly exist.


A feast of bad ideas

This Idea Must Die: Scientific theories that are blocking progress, edited by John Brockman (Harper Perennial)

for New Scientist, 10 March 2015

THE physicist Max Planck had a bleak view of scientific progress. “A new scientific truth does not triumph by convincing its opponents…” he wrote, “but rather because its opponents eventually die.”

This is the assumption behind This Idea Must Die, the latest collection of replies to the annual question posed by impresario John Brockman on his stimulating and by now venerable online forum, Edge. The question is: which bits of science do we want to bury? Which ideas hold us back, trip us up or send us off in a futile direction?

Some ideas cited in the book are so annoying that we would be better off without them, even though they are true. Take “brain plasticity”. This was a real thing once upon a time, but the phrase spread promiscuously into so many corners of neuroscience that no one really knows what it means any more.

More than any amount of pontification (and readers wouldn’t believe how many new books agonise over what “science” was, is, or could be), Brockman’s posse capture the essence of modern enquiry. They show where it falls away into confusion (the use of cause-and-effect thinking in evolution), into religiosity (virtually everything to do with consciousness) and cant (for example, measuring nuclear risks with arbitrary yardsticks).

This is a book to argue with – even to throw against the wall at times. Several answers, cogent in themselves, still hit nerves. When Kurt Gray and Richard Dawkins, for instance, stick their knives into categorisation, I was left wondering whether scholastic hand-waving would really be an improvement. And Malthusian ideas about resources inevitably generate more heat than light when harnessed to the very different agendas of Matt Ridley and Andrian Kreye.

On the other hand, there is pleasure in seeing thinkers forced to express themselves in just a few hundred words. I carry no flag for futurist Douglas Rushkoff or psychologist Susan Blackmore, but how good to be wrong-footed. Their contributions are among the strongest, with Rushkoff discussing godlessness and Blackmore on the relationship between brain and consciousness.

Every reader will have a favourite. Mine is palaeontologist Julia Clarke’s plea that people stop asking her where feathered dinosaurs leave off and birds begin. Clarke offers lucid glimpses of the complexities and ambiguities inherent in deciphering the behaviour of long-vanished animals from thin fossil data. The next person to ask about the first bird will probably get a cake fork in their eye.

This Idea Must Die is garrulous and argumentative. I expected no less: Brockman’s formula is tried and tested. Better still, it shows no sign of getting old.