Pig-philosophy

Reading Science and the Good: The Tragic Quest for the Foundations of Morality
by James Davison Hunter and Paul Nedelisky (Yale University Press) for the Telegraph, 28 October 2019

Objective truth is elusive and often surprisingly useless. For ages, civilisation managed well without it. Then came the sixteenth and seventeenth centuries, the Wars of Religion and the Thirty Years War: atrocious conflicts that robbed parts of Europe of up to a third of their population.

Something had to change. So began a half-millennium-long search for a common moral compass: something to keep us from wringing each other’s necks. The 18th-century French philosopher Condorcet, writing in 1794, expressed the evergreen hope that empiricists, applying themselves to the study of morality, would be able “to make almost as sure progress in these sciences as they had in the natural sciences.”

Today, are we any nearer to understanding objectively how to tell right from wrong?

No. So say James Davison Hunter, a sociologist who in 1991 slipped the term “culture wars” into American political debate, and Paul Nedelisky, a recent philosophy PhD, both from the University of Virginia. For sure, “a modest descriptive science” has grown up to explore our foibles, strengths and flaws, as individuals and in groups. There is, however, no way science can tell us what ought to be done.

Science and the Good is a closely argued, always accessible riposte to those who think scientific study can explain, improve, or even supersede morality. It tells a rollicking good story, too, as it explains what led us to our current state of embarrassed moral nihilism.

“What,” the essayist Michel de Montaigne asked, writing in the late 16th century, “am I to make of a virtue that I saw in credit yesterday, that will be discredited tomorrow, and becomes a crime on the other side of the river?”

Montaigne’s times desperately needed a moral framework that could withstand the almost daily schisms and revisions of European religious life following the Protestant Reformation. Nor was Europe any longer a land to itself. Trade with other continents was bringing Europeans into contact with people who, while eminently businesslike, held to quite unfamiliar beliefs. The question was (and is), how do we live together at peace with our deepest moral differences?

The authors have no simple answer. The reason scientists keep trying to formulate one is the same reason the farmer tried teaching his sheep to fly in the Monty Python sketch: “Because of the enormous commercial possibilities should he succeed.” Imagine conjuring up a moral system that was common, singular and testable: world peace would follow in an instant!

But for every Jeremy Bentham, measuring moral utility against an index of human happiness to inform a “felicific calculus”, there’s a Thomas Carlyle, pointing out the crashing stupidity of the enterprise. (Carlyle called Bentham’s 18th-century utilitarianism “pig-philosophy”, since happiness is the sort of vague, unspecific measure you could just as well apply to animals as to people.)

Hunter and Nedelisky play Carlyle to the current generation of scientific moralists. They range widely in their criticism, and are sympathetic to a fault, but to show what they’re up to, let’s have some fun and pick a scapegoat.

In Moral Tribes (2014), Harvard psychologist Joshua Greene sings Bentham’s praises: “utilitarianism becomes uniquely attractive,” he asserts, “once our moral thinking has been objectively improved by a scientific understanding of morality…”

At worst, this is a statement that eats its own tail. At best, it’s Greene reducing the definition of morality to fit his own specialism, replacing moral goodness with the merely useful. Usefulness isn’t nothing, and it is at least something science can discover. But it is not morality.

And if Greene decided tomorrow that we’d all be better off without, say, legs, practical reason, far from faulting him, could only show us how to achieve his goal in the most efficient manner possible. The entire history of the 20th century should serve as a reminder that this kind of thinking — applying rational machinery to a predetermined good — is a joke that palls extremely quickly. Nor are vague liberal gestures towards “social consensus” comforting, or even welcome. As the authors point out, “social consensus gave us apartheid in South Africa, ethnic cleansing in the Balkans, and genocide in Armenia, Darfur, Burma, Rwanda, Cambodia, Somalia, and the Congo.”

Scientists are on safer ground when they attempt to explain how our moral sense may have evolved, arguing that morals aren’t imposed from above or derived from well-reasoned principles, but are values derived from reactions and judgements that improve the odds of group survival. There’s evidence to back this up, and much of it is charming. Rats play together endlessly; if the bigger rat wrestles the smaller rat into submission more than three times out of five, the smaller rat trots off in a huff. Hunter and Nedelisky remind us that capuchin monkeys will “down tools” if experimenters offer them a reward smaller than one they’ve already offered to other capuchins.

What does this really tell us, though, beyond the fact that somewhere, out there, is a lawful corner of necessary reality which we may as well call universal justice, and which complex creatures evolve to navigate?

Perhaps the best scientific contribution to moral understanding comes from studies of the brain itself. Mapping the mechanisms by which we reach moral conclusions is useful for clinicians. But it doesn’t bring us any closer to learning what it is we ought to do.

Sociologists since Edward Westermarck, writing in 1906, have shown how a common (evolved?) human morality might be expressed in diverse practices. But over all this falls the shadow of moral scepticism: the uneasy suspicion that morality may be no more than an emotive vocabulary without content, a series of justificatory fabrications. “Four legs good,” as Snowball had it, “two legs bad.”

But even if it were shown that no-one in the history of the world ever committed a truly selfless act, the fact remains that our mythic life is built, again and again, precisely around an act of self-sacrifice. Pharaonic Egypt had Osiris. Europe and its holdings, Christ. Even Hollywood has Harry Potter. Moral goodness is something we recognise in stories, and something we strive for in life (and if we don’t, we feel bad about ourselves). Philosophers and anthropologists and social scientists have lots of interesting things to say about why this should be so. The life sciences crew would also like to say something.

But as this generous and thoughtful critique demonstrates, and to quite devastating effect, they just don’t have the words.

The weather forecast: a triumph hiding in plain sight

Reading The Weather Machine by Andrew Blum (Bodley Head) for the Telegraph, 6 July 2019

Reading New York journalist Andrew Blum’s new book has cured me of a foppish and annoying habit. I no longer dangle an umbrella off my arm on sunny days, tripping up my fellow commuters before (inevitably) mislaying the bloody thing on the train to Coulsdon Town. Very late, and to my considerable embarrassment, I have discovered just how reliable the weather forecast is.

My thoroughly English prejudice against the dark art of weather prediction was already set by the time the European Centre for Medium-Range Weather Forecasts opened in Reading in 1979. Then the ECMWF claimed to be able to see three days into the future. Six years later, it could see five days ahead. It knew about Sandy, the deadliest hurricane of 2012, eight days ahead, and it expects to predict high-impact events a fortnight before they happen by the year 2025.

The ECMWF is a world leader, but it’s not an outlier. Look at the figures: weather forecasts have been getting consistently better for 40 straight years. Blum reckons this makes the current global complex of machines, systems, networks and acronyms (and there are lots of acronyms) “a high point of science and technology’s aspirations for society”.

He knows this is a minority view: “The weather machine is a wonder we treat as a banality,” he writes: “a tool that we haven’t yet learned to trust.” The Weather Machine is his attempt to convey the technical brilliance and political significance of an achievement that hides in plain sight.

The machine’s complexity alone is off all familiar charts, and sets Blum a significant challenge. “As a rocket scientist at the Jet Propulsion Laboratory put it to me… landing a spacecraft on Mars requires dealing with hundreds of variables,” he writes; “making a global atmospheric model requires hundreds of thousands.” Blum does an excellent job of describing how meteorological theory and observation were first stitched together, and why even today their relationship is a stormy one.

His story opens in heroic times, with Robert FitzRoy one of his more engaging heroes. FitzRoy is best remembered for captaining HMS Beagle and weathering the puppyish enthusiasm of a young Charles Darwin. But his real claim to fame is as a meteorologist. He dreamt up the term “forecast”, turned observations into predictions that saved sailors’ lives, and foresaw with clarity what a new generation of naval observers would look like. Distributed in space and capable of communicating instantaneously with each other, they would be “as if an eye in space looked down on the whole North Atlantic”.

You can’t produce an accurate forecast from observation alone, however. You also need a theory of how the weather works. The Norwegian physicist Vilhelm Bjerknes came up with the first mathematical model of the weather: a set of seven interlinked partial differential equations that handled the fact that the atmosphere is a far from ideal fluid. Sadly, Bjerknes’ model couldn’t yet predict anything — as he himself said, solutions to his equations “far exceed the means of today’s mathematical analysis”. As we see our models of the weather evolve, so we see works of individual genius replaced by systems of machine computation. In the observational realm, something similar happens: the heroic efforts of individual observers throw up trickles of insight that are soon subsumed in the torrent of data streaming from the orbiting artefacts of corporate and state engineering.

The American philosopher Timothy Morton dreamt up the term “hyperobject” to describe things too complex and numinous to capture in plain terms. Blum, whose earlier book was Tubes: Behind the Scenes at the Internet (2012), fancies his chances at explaining human-built hyperobjects in solid, clear terms, without recourse to metaphor and poesy. In this book, for example, he recognises the close affinity of military and meteorological infrastructures (a staple of many a modish book on the surveillance state), but resists any suggestion that they are the same system.

His sobriety is impressive, given how easy it is to get drunk on this stuff. In October 1946, technicians at the White Sands Proving Ground in New Mexico installed a camera in the nose cone of a captured V2 and launched it, yielding photographs of a quarter of the US — nearly a million square miles banded by clouds “stretching hundreds of miles in rows like streets”. This wasn’t the first time a bit of weather kit served as an expendable test in a programme of weapons development, and it certainly wasn’t the last. Today’s global weather system has not only benefited from military advances in satellite positioning and remote sensing; it has made those systems possible. Blum allows that “we learned to see the whole earth thanks to the technology built to destroy the whole earth”. But he avoids paranoia.

Indeed, he is much more impressed by the way countries going at it hammer and tongs on the political stage nevertheless collaborated closely and well on a global weather infrastructure. Point four of John F Kennedy’s famous 1961 speech on “Urgent National Needs” called for “a satellite system for worldwide weather observation”, and it wasn’t just militarily useful American satellites he had in mind for the task: in 1962 Harry Wexler of the U.S. Weather Bureau worked with his Soviet counterpart Viktor Bugaev on a report proposing a “World Weather Watch”, and by 1963 there was, Blum finds, “a conscious effort by scientists — on both sides of the Iron Curtain, in all corners of the earth — to design an integrated and coordinated apparatus” — this at a time when weather satellites were so expensive they could be justified only on national security grounds.

Blum’s book comes a little bit unstuck at the end. A final chapter that could easily have filled a third of the book is compressed into just a few pages’ handwaving and special pleading, as he conjures up a vision of a future in which the free and global nature of weather information has ceased to be a given and the weather machine, that “last bastion of international cooperation”, has become just one more atomised ghost of a future the colonial era once promised us.

Why end on such a minatory note? The answer, which is by no means obvious, is to be found in Reading. Today 22 nations pay for the ECMWF’s maintenance of a pair of Cray supercomputers, among the fastest in the world, which must be upgraded every two years. In the US, meanwhile, weather observations rely primarily on the health of four geostationary satellites, procured at a cost of $11 billion. (America’s whole National Weather Service budget is only around $1 billion a year.)

Blum leaves open the question: how is an organisation built by nation-states, committed to open data and born of a global view, supposed to work in a world where information lives on private platforms and travels across private networks — a world in which billions of tiny temperature and barometric sensors, “in smartphones, home devices, attached to buildings, buses or airliners,” are aggregated by the likes of Google, IBM or Amazon?

One thing is disconcertingly clear: Blum’s weather machine, which in one sense is a marvel of continuing modernity, is also, truth be told, a dinosaur. It is ripe for disruption, of a sort that the world, grown so reliant on forecasting, could well do without.

The Usefulness of Useless Knowledge

Reading The Usefulness of Useless Knowledge by Abraham Flexner, and Knowledge for Sale: The neoliberal takeover of higher education by Lawrence Busch for New Scientist, 17 March 2017

IN 1930, the US educator Abraham Flexner set up the Institute for Advanced Study, an independent research centre in Princeton, New Jersey, where leading lights as diverse as Albert Einstein and T. S. Eliot could pursue their studies, free from everyday pressures.

For Flexner, the world was richer than the imagination could conceive and wider than ambition could encompass. The universe was full of gifts and this was why pure, “blue sky” research could not help but turn up practical results now and again, of a sort quite impossible to plan for.

So, in his 1939 essay “The usefulness of useless knowledge”, Flexner listed a few of the practical gains that have sprung from what we might, with care, term scholastic noodling. Electromagnetism was his favourite. We might add quantum physics.

Even as his institute opened its doors, the world’s biggest planned economy, the Soviet Union, was conducting a grand and opposite experiment, harnessing all the sciences for their immediate utility and problem-solving ability.

During the cold war, the vast majority of Soviet scientists were reduced to mediocrity, given only sharply defined engineering problems to solve. Flexner’s better-known affiliates, meanwhile, garnered reputations akin to those enjoyed by other mascots of Western intellectual liberty: abstract-expressionist artists and jazz musicians.

At a time when academia is once again under pressure to account for itself, the Princeton University Press reprint of Flexner’s essay is timely. Its preface, however, is another matter. Written by current institute director Robbert Dijkgraaf, it exposes our utterly instrumental times. For example, he employs junk metrics such as “more than half of all economic growth comes from innovation”. What for Flexner was a rather sardonic nod to the bottom line has become for Dijkgraaf the entire argument – as though “pure research” simply meant “long-term investment”, and civic support came not from existential confidence and intellectual curiosity, but from scientists “sharing the latest discoveries and personal stories”. So much for escaping quotidian demands.

We do not know what the tightening of funding for scientific research that has taken place over the past 40 years would have done for Flexner’s own sense of noblesse oblige. But this we can be sure of: utilitarian approaches to higher education are dominant now, to the point of monopoly. The administrative burdens and stultifying oversight structures throttling today’s scholars come not from Soviet-style central planning, but from the application of market principles – an irony that the sociologist Lawrence Busch explores exhaustively in his monograph Knowledge for Sale.

Busch explains how the first neo-liberal thinkers sought to prevent the rise of totalitarian regimes by replacing governance with markets. Those thinkers believed that markets were safer than governments because they were cybernetic and so corrected themselves. Right?

Wrong: Busch provides ghastly disproofs of this neo-liberal vision from within the hall of academe, from bad habits such as a focus on counting citations and publication output, through fraud, to existential crises such as the shift in the ideal of education from a public to a private good. But if our ingenious, post-war market solution to the totalitarian nightmare of the 1940s has itself turned out to be a great vampire squid wrapped around the face of humanity (as journalist Matt Taibbi once described investment bank Goldman Sachs), where have we left to go?

Flexner’s solution requires from us a confidence that is hard to muster right now. We have to remember that the point of study is not to power, enable, de-glitch or otherwise save civilisation. The point of study is to create a civilisation worth saving.

“Some only appear crazy. Others are as mad as a bag of cats”

Stalin’s more eccentric scientists are the subject of this blogpost for Faber & Faber.

Stalin and the Scientists describes what happened when, early in the twentieth century, a handful of impoverished and under-employed graduates, professors and entrepreneurs, collectors and charlatans, bound themselves to a failing government to create a world superpower. Envied and obsessed over by Joseph Stalin — ‘the Great Scientist’ himself — scientists in disciplines from physics to psychology managed to steer his empire through famine, drought, soil exhaustion, war, rampant alcoholism, a huge orphan problem, epidemics and an average life expectancy of thirty years. Hardly any of them are well known outside Russia, yet their work shaped global progress for well over a century.

Cold War propaganda cast Soviet science as an eccentric, gimcrack, often sinister enterprise. And, to my secret delight, not every wild story proved to be a fabrication. Indeed, a heartening amount of the smoke shrouding Soviet scientific achievement can be traced back to intellectual arson attacks of one sort or another.

I’ll leave it to the book to explain why Stalin’s scientists deserve our admiration and respect. This is the internet, so let’s have some fun. Here, in no particular order, are my top five scientific eccentrics. Some only appear crazy; others have had craziness thrust upon them by hostile commentators. Still others were as mad as a bag of cats.

1. Ilya Ivanov
the animal-breeding expert who tried to mate humans with chimpanzees

By the time of the 1917 revolution, Ilya Ivanov was already an international celebrity. His pioneering artificial insemination techniques were transforming world agriculture. However, once he lost his Tsarist patrons, he had to find a research programme that would catch the eye of the new government’s Commissariat of Education. What he came up with was certainly compelling: a proposal to cross-breed humans and chimpanzees.

We now know there are immunological difficulties preventing such a cross, but the basic idea is not at all crazy, and Ivanov got funding from Paris and America to travel to Guinea to further the study.

Practically and ethically the venture was a disaster. Arriving at the primate centre in Kindia, Ivanov discovered that its staff were killing and maiming far more primates than they ever managed to capture. To make matters worse, after a series of gruesome and rapacious attempts to impregnate chimpanzees with human sperm, Ivanov decided it might be easier to turn the experiment on its head and fertilise African women with primate sperm. Unfortunately, he failed to tell the women what he was doing.

Ivanov was got rid of during the Purges of the late 1930s thanks to a denunciation by an ambitious colleague, but his legacy survives. The primate sanctuary he founded in Sukhumi by the Black Sea provided primates for the Soviet space programme. Meanwhile the local tourist industry makes the most of, and indeed maintains, persistent rumours that the local woods are haunted by seven-foot-tall Stalinist ape-men.

2. Alexander Bogdanov
whose Mars-set science fiction laid the groundwork for the Soviet Union’s first blood transfusion service — and who died of blood poisoning

Alexander Alexandrovich Bogdanov, co-founder of the Bolshevik movement, lost interest in politics, even as control came within his grasp, because he wanted more time for his writing.

In his novels Red Star and Engineer Menni, blood exchanges among his Martian protagonists level out their individual and sexual differences and extend their lifespan through the inheritance of acquired characteristics.

These scientific fantasies took an experimental turn in 1921 during a trade junket to London when he happened across Blood Transfusion, a book by Geoffrey Keynes (younger brother of the economist). Two years of private experiments followed, culminating in an appointment with the Communist Party’s general secretary, Joseph Stalin. Bogdanov was quickly installed as head of a new ‘scientific research institute of blood transfusion’.

Blood, Bogdanov claimed, was a universal tissue that unified all other organs, tissues and cells. Transfusions offered the client better sleep, a fresher complexion, a change in eyeglass prescriptions, and greater resistance to fatigue. On 24 March 1928 he conducted a typically Martian experiment, mutually transfusing blood with a male student; he suffered a massive transfusion reaction and died two weeks later at the age of fifty-four.

Bogdanov the scientist never offered up his studies to the review of his peers. In fact he never wrote any actual science at all, just propaganda for the popular press. In this, he resembled no-one so much as the notorious charlatan (and Stalin’s poster-boy) Trofim Lysenko. I reckon it was Bogdanov’s example that made Lysenko politically possible.

3. Trofim Lysenko
Stalin’s poster-boy, who believed plants sacrifice themselves for their strongest neighbour — and was given the job of reforesting European Russia.

Practical, working-class, ambitious and working for the common good, the agrobiologist Trofim Lysenko was the very model of the new Soviet scientist. Rather than studying ‘the hairy legs of flies’, ran one Pravda profile in August 1927, this sober young man ‘went to the root of things,’ solving practical problems by a few calculations ‘on a little old piece of paper’.

As he studied how different varieties of the same crop responded to being planted at different times, he never actually touched any mathematics, relying instead on crude theories ‘proved’ by arbitrary examples.

Lysenko wanted, above all else, to be an original. An otherwise enthusiastic official report warned that he was an ‘extremely egotistical person, deeming himself to be a new Messiah of biological science.’ Unable to understand the new-fangled genetics, he did everything he could to banish it from biology. In its place he championed ‘vernalisation’, a planting technique that failed dismally to increase yields. Undeterred, he went on to theorise about species formation, and advised the government on everything from how to plant oak trees across the entire Soviet Union to how to increase the butterfat content of milk. The practical results of his advice were uniformly disastrous and yet, through a combination of belligerence, working-class credentials, and a phenomenal amount of luck, he remained the poster-boy of Soviet agriculture right up until the fall of Khrushchev in 1964.

Nor is his ghost quite laid to rest. A couple of politically motivated historians are even now attempting to recast Lysenko as a cruelly sidelined pioneer of epigenetics (the study of how the environment regulates gene expression). This is a bitter irony, since Soviet Russia really was the birthplace of epigenetics! And it was Lysenko’s self-serving campaigns that saw every single worker in that field sacked and ruined.

4. Olga Lepeshinskaya
who screened films of rotting eggs in reverse to prove her theories about cell development — and won a Stalin Prize

Olga Lepeshinskaya, a personal friend of Lenin and his wife, was terrifyingly well-connected and not remotely intimidated by power. On a personal level, she was charming. She fiercely opposed anti-semitism, and had dedicated her personal life to the orphan problem, bringing up at least half a dozen children as her own.

As a scientist, however, she was a disaster. She once announced to the Academic Council of the Institute of Morphology that soda baths could rejuvenate the old and preserve the youth of the young. A couple of weeks later Moscow completely sold out of baking soda.

In her old age, Lepeshinskaya became entranced by the mystical concept of the ‘vital substance’, and recruited her extended family to work in her ‘laboratory’, pounding beetroot seeds with a pestle to demonstrate that any part of the seed could germinate. She even claimed to have filmed living cells emerging from non-cellular materials. Lysenko hailed Lepeshinskaya’s discovery as the basis for a new theory of species formation, and in May 1950 Alexander Oparin, head of the Academy of Sciences’ biology department, invited her to receive her Stalin Prize.

It was all a fraud, of course: she had been filming the death and decomposition of cells, then running her film backwards through the projector. Lepeshinskaya made a splendid myth. The subject of poetry. The heroine of countless plays. In school and university textbooks she was hailed as the author of the greatest biological discovery of all time.

5. Joseph Stalin
whose obsession with growing lemons in Siberia became his only hobby

Stalin, typically for his day, believed in the inheritance of acquired characteristics – that a giraffe that has to stretch to reach high leaves will have long-necked children. He assumed that, given the right conditions, living things were malleable, and as the years went by this obsession grew. In 1946 he became especially keen on lemons, encouraging their growth not only in coastal Georgia, where they fared quite well, but also in the Crimea, where winter frosts destroyed them.

Changing the nature of lemons became Stalin’s sole hobby. At his dachas near Moscow and in the south, large greenhouses were erected so that he could enter them directly from the house, day or night. Pruning shrubs and plants was his only physical exercise.

Stalin shared with his fellow Bolsheviks the idea that they had to be philosophers in order to deserve their mandate. He schooled the USSR’s most prominent philosopher, Georgy Aleksandrov, on Hegel’s role in the history of Marxism. He told the composer Dmitry Shostakovich how to change the orchestration for the new national anthem. He commissioned the celebrated war poet Konstantin Simonov to write a play about a famous medical controversy, treated him to an hour of literary criticism, and then rewrote the closing scenes himself. Sergei Eisenstein and his scriptwriter on Ivan the Terrible Part Two were treated to a filmmaking masterclass. And in 1950, while he was negotiating a pact with the People’s Republic of China, and discussing how to invade South Korea with Kim Il Sung, Stalin was also writing a combative article about linguistics, and meeting with economists multiple times to discuss a textbook.

Stalin’s paranoia eventually pushed him into pronouncements that were more and more peculiar. Unable to trust even himself, Stalin came to believe that people were, or ought to be, completely readable from first to last. All it needed was an entirely verbal theory of mind. ‘There is nothing in the human being which cannot be verbalised,’ he asserted in 1949. ‘What a person hides from himself he hides from society. There is nothing in the Soviet society that is not expressed in words. There are no naked thoughts. There exists nothing at all except words.’

For Stalin, in the end, even a person’s innermost world was readable – because if it wasn’t, then it couldn’t possibly exist.

A feast of bad ideas

This Idea Must Die: Scientific theories that are blocking progress, edited by John Brockman (Harper Perennial)

for New Scientist, 10 March 2015

THE physicist Max Planck had a bleak view of scientific progress. “A new scientific truth does not triumph by convincing its opponents…” he wrote, “but rather because its opponents eventually die.”

This is the assumption behind This Idea Must Die, the latest collection of replies to the annual question posed by impresario John Brockman on his stimulating and by now venerable online forum, Edge. The question is: which bits of science do we want to bury? Which ideas hold us back, trip us up or send us off in a futile direction?

Some ideas cited in the book are so annoying that we would be better off without them, even though they are true. Take “brain plasticity”. This was a real thing once upon a time, but the phrase spread promiscuously into so many corners of neuroscience that no one really knows what it means any more.

More than any amount of pontification (and readers wouldn’t believe how many new books agonise over what “science” was, is, or could be), Brockman’s posse capture the essence of modern enquiry. They show where it falls away into confusion (the use of cause-and-effect thinking in evolution), into religiosity (virtually everything to do with consciousness) and cant (for example, measuring nuclear risks with arbitrary yardsticks).

This is a book to argue with – even to throw against the wall at times. Several answers, cogent in themselves, still hit nerves. When Kurt Gray and Richard Dawkins, for instance, stick their knives into categorisation, I was left wondering whether scholastic hand-waving would really be an improvement. And Malthusian ideas about resources inevitably generate more heat than light when harnessed to the very different agendas of Matt Ridley and Andrian Kreye.

On the other hand, there is pleasure in seeing thinkers forced to express themselves in just a few hundred words. I carry no flag for futurist Douglas Rushkoff or psychologist Susan Blackmore, but how good to be wrong-footed. Their contributions are among the strongest, with Rushkoff discussing godlessness and Blackmore on the relationship between brain and consciousness.

Every reader will have a favourite. Mine is palaeontologist Julia Clarke’s plea that people stop asking her where feathered dinosaurs leave off and birds begin. Clarke offers lucid glimpses of the complexities and ambiguities inherent in deciphering the behaviour of long-vanished animals from thin fossil data. The next person to ask about the first bird will probably get a cake fork in their eye.

This Idea Must Die is garrulous and argumentative. I expected no less: Brockman’s formula is tried and tested. Better still, it shows no sign of getting old.

Come journey with me to Zochonis TH A (B5)!

‘Putting the Science in Fiction’ – an Interfaculty Symposium on Science and Entertainment – takes place there on Wednesday 25 April, 9:30am to 5pm.

Zochonis TH A (B5) is, in fact, in Manchester. Well, it’s a bit of Manchester University. Oh, I don’t know, I’ll just turn up early and find some corridor to sit down in and start screaming; someone’s bound to find me and steer me to the right place sooner or later.

Once there, I’ll find myself in good company. Confirmed speakers include Stephen Baxter, Ken MacLeod, Alastair Reynolds, Geoff Ryman (the éminence grise behind this junket), Justina Robson and Matthew Cobb, among many others.

Watch us all “forge new relationships between the scientific community and the arts/entertainment community”. There is no cost for the workshop, but spaces are limited, so you will need to book a place by contacting scienceinfiction.manchester@gmail.com.

And visit http://bit.ly/yxgLGQ

It won’t tell you where Zochonis TH A (B5) is, but at least you’ll know I’m not making it up.

What Soviet science did for us

I’m preparing a series of talks for Pushkin House in London, to tie in with a long project on science under Joseph Stalin. While we’re finalising the programme, these notes will give you an idea what to expect.

Russia’s Other Culture: science and technology in the 20th century.

Early in the twentieth century, a few marginal scientists bound themselves to a bankrupt government to create a world superpower. Russia’s political elites embraced science, patronised it, fetishized it, and even tried to impersonate it. Many Soviet scientists led a charmed life. Others were ruined by their closeness to power. Four illustrated talks reveal how this stormy marriage between science and state has shaped the modern world.

1. The Men Who Fell to Earth: How Russia’s pilots, parachutists and pioneers won the space race.
November 2011.

In the 1950s and 1960s Sergei Korolev and the Soviet space programme laid a path to the stars. Now Russia is our only lifeline to the technologies and machines we have put in orbit. Simon Ings is joined by Doug Millard, Senior Curator of ICT & Space Technology at London’s Science Museum, to trace Russia’s centuries-old obsession with flight. This was the nation that erected skydiving towers in its playgrounds, built planes so large and so strange that the rest of the world thought they were fakes, and outdid Germany and the US in its cinematic portrayal of space. The nation’s soaring imagination continues to astonish the world.

The talk coincides with the 50th anniversary of Yuri Gagarin’s pioneering space flight.

2. Prospectors: Why Russia sits on plenty and never gets rich
January 2012

The old boast ran that Russia governed an empire with more surface area than the visible moon. Still, 40 per cent of it lay under permafrost, and no Romanov before Alexander II so much as set foot in Siberia. Defying nature, the Bolsheviks forcibly industrialised the region, built factories and cities, and operated industries in some of the most forbidding places on the planet. Beginning with the construction of the Trans-Siberian railway, and ending with the planting of the Russian flag on the bottom of the Arctic Ocean, this is a story of visionaries and idealists, traitors, despots, and the occasional fool.

The talk will form part of a week of activity marking the fifth anniversary of Pushkin House’s establishment in Bloomsbury.

3. Red Harvest: What Russia’s famines taught us about the living world.    
March 2012

After the civil war, the Bolsheviks turned to the revolutionary science of genetics for help in securing the Soviet food supply. The young Soviet Union became a world leader in genetics and shared its knowledge with Germany. Then Stalin’s impatience and suspicion destroyed the field and virtually wiped out Russian agriculture. Stalin was right to be suspicious: genetics had promised the world a future of health and longevity, but by the 1940s it was delivering death camps and human vivisection. Genetic advances have made possible our world of plenty – but why did the human cost have to be so high?

4. “General Healthification”: Russia’s unsung sciences of the mind.
May 2012

The way we teach and care for our children owes much to a handful of largely forgotten Russian pioneers. Years after their deaths, the psychoanalyst Sabina Spielrein, the psychologist Lev Vygotsky and the pioneering neuroscientist Alexander Luria have an unseen influence over our everyday thinking. In our factories and offices, too, Soviet psychology plays a role, fitting us to our tasks, ensuring our safety and our health. Our assumptions about health care and the role of the state all owe a huge debt to the Soviet example. But these ideas have a deeper history. Many of them originated in America. The last lecture in this series celebrates the fertile yet largely forgotten intellectual love affair between America and the young Soviet Union.