Just how much does the world follow laws?


How the Zebra Got its Stripes and Other Darwinian Just So Stories by Léo Grasset
The Serengeti Rules: The quest to discover how life works and why it matters by Sean B. Carroll
Lysenko’s Ghost: Epigenetics and Russia by Loren Graham
The Great Derangement: Climate change and the unthinkable by Amitav Ghosh
reviewed for New Scientist, 15 October 2016

JUST how much does the world follow laws? The human mind, it seems, may not be the ideal toolkit with which to craft an answer. To understand the world at all, we have to predict likely events and so we have a lot invested in spotting rules, even when they are not really there.

Such demands have also shaped more specialised parts of culture. The history of the sciences is one of constant struggle between the accumulation of observations and their abstraction into natural laws. The temptation (especially for physicists) is to assume these laws are real: a bedrock underpinning the messy, observable world. Life scientists, on the other hand, can afford no such assumption. Their field is constantly on the move, a plaything of time and historical contingency. If there is a lawfulness to living things, few plants and animals seem to be aware of it.

Consider, for example, the charming “just so” stories in French biologist and YouTuber Léo Grasset’s book of short essays, How the Zebra Got its Stripes. Now and again Grasset finds order and coherence in the natural world. His cost-benefit analysis of how animal communities make decisions, contrasting “autocracy” and “democracy”, is a fine example of lawfulness in action.

But Grasset is also sharply aware of those points where the cause-and-effect logic of scientific description cannot show the whole picture. There are, for instance, four really good ways of explaining how the zebra got its stripes, and the stripes probably arose for all those reasons, along with a couple of dozen others whose mechanisms are lost to evolutionary history.

And Grasset has even more fun describing the occasions when, frankly, nature goes nuts. Take the female hyena, for example, which has to give birth through a “pseudo-penis”. As a result, 15 per cent of mothers die after their first labour and 60 per cent of cubs die at birth. If this were a “just so” story, it would be a decidedly off-colour one.

The tussle between observation and abstraction in biology has a fascinating, fraught and sometimes violent history. In Europe at the birth of the 20th century, biology was still a descriptive science. Life presented, German molecular biologist Gunther Stent observed, “a near infinitude of particulars which have to be sorted out case by case”. Purely descriptive approaches had exhausted their usefulness and new, experimental approaches were developed: genetics, cytology, protozoology, hydrobiology, endocrinology, experimental embryology – even animal psychology. And with the elucidation of underlying biological process came the illusion of control.

In 1917, even as Vladimir Lenin was preparing to seize power in Russia, the botanist Nikolai Vavilov was lecturing to his class at the Saratov Agricultural Institute, outlining the task before them as “the planned and rational utilisation of the plant resources of the terrestrial globe”.

Predicting that the young science of genetics would give the next generation the ability “to sculpt organic forms at will”, Vavilov asserted that “biological synthesis is becoming as much a reality as chemical”.

The consequences of this kind of boosterism are laid bare in Lysenko’s Ghost by the veteran historian of Soviet science Loren Graham. He reminds us what happened when the tentatively defined scientific “laws” of plant physiology were wielded as policy instruments by a desperate and resource-strapped government.

Within the Soviet Union, dogmatic views on agrobiology led to disastrous agricultural reforms, and no amount of modern, politically motivated revisionism (the especial target of Graham’s book) can make those efforts seem more rational, or their aftermath less catastrophic.

In modern times, thankfully, a naive belief in nature’s lawfulness, reflected in lazy and increasingly outmoded expressions such as “the balance of nature”, is giving way to a more nuanced, self-aware, even tragic view of the living world. The Serengeti Rules, Sean B. Carroll’s otherwise triumphant account of how physiology and ecology turned out to share some of the same mathematics, does not shy away from the fact that the “rules” he talks about are really just arguments from analogy.

Some notable conservation triumphs have followed from the discovery that “just as there are molecular rules that regulate the numbers of different kinds of molecules and cells in the body, there are ecological rules that regulate the numbers and kinds of animals and plants in a given place”.

For example, in Gorongosa National Park, Mozambique, in 2000, there were fewer than 1000 elephants, hippos, wildebeest, waterbuck, zebras, eland, buffalo, hartebeest and sable antelopes combined. Today, with the reintroduction of key predators, there are almost 40,000 animals, including 535 elephants and 436 hippos. And several of the populations are increasing by more than 20 per cent a year.

But Carroll is understandably flummoxed when it comes to explaining how those rules might apply to us. “How can we possibly hope that 7 billion people, in more than 190 countries, rich and poor, with so many different political and religious beliefs, might begin to act in ways for the long-term good of everyone?” he asks. How indeed: humans’ capacity for cultural transmission renders every Serengeti rule moot, along with the Serengeti itself – and a “law of nature” that does not include its dominant species is not really a law at all.

Of course, it is not just the sciences that have laws: the humanities and the arts do too. In The Great Derangement, a book that began as four lectures presented at the University of Chicago last year, the novelist Amitav Ghosh considers the laws of his own practice. The vast majority of novels, he explains, are realistic. In other words, the novel arose to reflect the kind of regularised life that gave you time to read novels – a regularity achieved through the availability of reliable, cheap energy: first, coal and steam, and later, oil.

No wonder, then, that “in the literary imagination climate change was somehow akin to extraterrestrials or interplanetary travel”. Ghosh is keenly aware of and impressively well informed about climate change: in 1978, he was nearly killed in an unprecedentedly ferocious tornado that ripped through northern Delhi, leaving 30 dead and 700 injured. Yet he has never been able to work this story into his “realist” fiction. His hands are tied: he is trapped in “the grid of literary forms and conventions that came to shape the narrative imagination in precisely that period when the accumulation of carbon in the atmosphere was rewriting the destiny of the Earth”.

The exciting and frightening thing about Ghosh’s argument is how he traces the novel’s narrow compass back to popular and influential scientific ideas – ideas that championed uniform and gradual processes over cataclysms and catastrophes.

One big complaint about science – that it kills wonder – is the same criticism Ghosh levels at the novel: that it bequeaths us “a world of few surprises, fewer adventures, and no miracles at all”. Lawfulness in biology is rather like realism in fiction: it is a convention so useful that we forget that it is a convention.

But, if anthropogenic climate change and the gathering sixth mass extinction event have taught us anything, it is that the world is wilder than the laws we are used to would predict. Indeed, if the world really were in a novel – or even in a book of popular science – no one would believe it.

Beware the indeterminate momentum of the throbbing whole


Graham Harman (2nd from right) and fellow speculative materialists in 2007

 

In 1942, the Argentine writer Jorge Luis Borges cooked up an entirely fictitious “Chinese” encyclopedia entry for animals. Among its nonsensical subheadings were “Embalmed ones”, “Stray dogs”, “Those that are included in this classification” and “Those that, at a distance, resemble flies”.

Explaining why these categories make no practical sense is a useful and enjoyable intellectual exercise – so much so that in 1966 the French philosopher Michel Foucault wrote an entire book inspired by Borges’ notion. Les mots et les choses (The Order of Things) became one of the defining works of the French philosophical movement called structuralism.

How do we categorise the things we find in the world? In Immaterialism, his short and very sweet introduction to his own brand of philosophy, “object-oriented ontology”, the Cairo-based philosopher Graham Harman identifies two broad strategies. Sometimes we split things into their ingredients. (Since the Enlightenment, this has been the favoured and extremely successful strategy of most sciences.) Sometimes, however, it’s better to work in the opposite direction, defining things by their relations with other things. (This is the favoured method of historians and critics and other thinkers in the humanities.)

Why should scientists care about this second way of thinking? Often they don’t have to. Scientists are specialists. Reductionism – finding out what things are made of – is enough for them.

Naturally, there is no hard and fast rule to be made here, and some disciplines – the life sciences especially – can’t always reduce things to their components.

So there have been attempts to bring this other, “emergentist” way of thinking into the sciences. One of the most ingenious was the “new materialism” of the German entrepreneur (and Karl Marx’s sidekick) Friedrich Engels. One of Engels’s favourite targets was the Linnaean system of biological classification. Rooted in formal logic, this taxonomy divides all living things into species and orders. It offers us a huge snapshot of the living world. It is tremendously useful. It is true. But it has limits. It cannot record how one species may, over time, give rise to some other, quite different species. (Engels had great fun with the duckbilled platypus, asking where that fitted into any rigid scheme of things.) Similarly, there is no “essence” hiding behind a cloud of steam, a puddle of water, or a block of ice. There are only structures, succeeding each other in response to changes in the local conditions. The world is not a ready-made thing: it is a complex interplay of processes, all of which are ebbing and flowing, coming into being and passing away.

So far so good. Applied to science, however, Engels’s schema turns out to be hardly more than a superior species of hand-waving. Indeed, “dialectical materialism” (as it later became known) proved so unwieldy, it took very few years of application before it became a blunt weapon in the hands of Stalinist philosophers, who used it to demotivate, discredit and disbar any scientific colleague whose politics they didn’t like.

Harman has learned the lessons of history well. Though he’s curious to know where his philosophy abuts scientific practice (and especially the study of evolution), he is prepared to accept that specialists know what they are doing: that rigour in a narrow field is a legitimate way of squeezing knowledge out of the world, and that a 126-page A-format paperback is probably not the place to reinvent the wheel.

What really agitates him, fills his pages, and drives him to some cracking one-liners (this is, heavens be praised, a *funny* book about philosophy) is the sheer lack of rigour to be found in his own sphere.

While pillorying scientists for treating objects as superficial compared with their tiniest pieces, philosophers in the humanities have for more than a century been leaping off the opposite cliff, treating objects “as needlessly deep or spooky hypotheses”. By claiming that an object is nothing but its relations or actions, they unknowingly repeat the argument of the ancient Megarians, “who claimed that no one is a house-builder unless they are currently building a house”. Harman is sick and tired of this intellectual fashion, by which “‘becoming’ is blessed as the trump card of innovators, while ‘being’ is cursed as a sad-sack regression to the archaic philosophies of olden times”.

Above all, Harman has had it with peers and colleagues who zoom out and away from every detailed question, until the very world they’re meant to be studying resembles “the indeterminate momentum of the throbbing whole” (and this is not a joke — this is the sincerely meant position statement of another philosopher, a friendly acquaintance of his, Jane Bennett).

So what’s Harman’s solution? Basically, he wants to be able to talk unapologetically about objects. He explores a single example: the history of the Dutch East India Company. Without toppling into the “great men” view of history – according to which a world of inanimate props is pushed about by a few arbitrarily privileged human agents – he is out to show that the VOC was an actual *thing*, a more-or-less stable phenomenon ripe for investigation, and not simply a rag-bag collection of “human practices”.

Does his philosophy describe the Dutch East India Company rigorously enough for his work to qualify as real knowledge? I think so. In fact I think he succeeds to a degree which will surprise, reassure and entertain the scientifically minded.

Be in no doubt: Harman is no turncoat. He does not want the humanities to be “more scientific”. He wants them to be less scientific, but no less rigorous, able to handle, with rigour and versatility, the vast and teeming world of things science cannot handle: “Hillary Clinton, the city of Odessa, Tolkien’s imaginary Rivendell… a severed limb, a mixed herd of zebras and wildebeest, the non-existent 2016 Chicago Summer Olympics, and the constellation of Scorpio”.

Immaterialism
Graham Harman
Polity, £9.99

The tomorrow person


You Belong to the Universe: Buckminster Fuller and the future by Jonathon Keats
reviewed for New Scientist, 11 June 2016.

 

IN 1927 the suicidal manager of a building materials company, Richard Buckminster (“Bucky”) Fuller, stood by the shores of Lake Michigan and decided he might as well live. A stern voice inside him intimated that his life after all had a purpose, “which could be fulfilled only by sharing his mind with the world”.

And share it he did, tirelessly for over half a century, with houses hung from masts, cars with inflatable wings, a brilliant and never-bettered equal-area map of the world, and concepts for massive open-access distance learning, domed cities and a new kind of playful, collaborative politics. The tsunami that Fuller’s wing flap set in motion is even now rolling over us, improving our future through degree shows, galleries, museums and (now and again) in the real world.

Indeed, Fuller’s “comprehensive anticipatory design scientists” are ten-a-penny these days. Until last year, they were being churned out like sausages by the design interactions department at the Royal College of Art, London. Futurological events dominate the agendas of venues across New York, from the Institute for Public Knowledge to the International Center of Photography. “Science Galleries”, too, are popping up like mushrooms after a spring rain, from London to Bangalore.

In You Belong to the Universe, Jonathon Keats, himself a critic, artist and self-styled “experimental philosopher”, looks hard into the mirror to find what of his difficult and sometimes pantaloonish hero may still be traced in the lineaments of your oh-so-modern “design futurist”.

Be in no doubt: Fuller deserves his visionary reputation. He grasped in his bones, as few have since, the dynamism of the universe. At the age of 21, Keats writes, “Bucky determined that the universe had no objects. Geometry described forces.”

A child of the aviation era, he used materials sparingly, focusing entirely on their tensile properties and on the way they stood up to wind and weather. He called this approach “doing more with less”. His light and sturdy geodesic dome became an icon of US ingenuity. He built one wherever his country sought influence, from India to Turkey to Japan.

Chapter by chapter, Keats asks how the future has served Fuller’s ideas on city planning, transport, architecture, education. It’s a risky scheme, because it invites you to set Fuller’s visions up simply to knock them down again with the big stick of hindsight. But Keats is far too canny for that trap. He puts his subject into context, works hard to establish what would and would not be reasonable for him to know and imagine, and explains why the history of built and manufactured things turned out the way it has, sometimes fulfilling, but more often thwarting, Fuller’s vision.

This ought to be a profoundly wrong-headed book, judging one man’s ideas against the entire recent history of Spaceship Earth (another of Fuller’s provocations). But You Belong to the Universe says more about Fuller and his future in a few pages than some whole biographies, and renews one’s interest – if not faith – in all those graduate design shows.

How we went from mere betting to gaming the world

Reviewing The Perfect Bet: How science and maths are taking the luck out of gambling by Adam Kucharski, for The Spectator, 7 May 2016.

If I prang your car, we can swap insurance details. In the past, it would have been necessary for you to kill me. That’s the great thing about money: it makes liabilities payable, and blood feud unnecessary.

Spare a thought, then, for the economist Robin Hanson whose idea it was, in the years following the World Trade Center attacks, to create a market where traders could speculate on political atrocities. You could invest in the likelihood of a biochemical attack, for example, or a coup d’etat, or the assassination of an Arab leader. The more knowledgeable you were, the more profit you would earn — but you would also be showing your hand to the Pentagon.

The US Senate responded with horror to this putative “market in death and destruction”, though if the recent BBC drama The Night Manager has taught us anything at all (beyond the passing fashionability of tomato-red chinos), it is that there is already a global market in death and destruction, and it is not at all well-abstracted. Its currency is lives and livelihoods. Its currency is blood. A little more abstraction, in this grim sphere, would be welcome.

Most books about money stop here, arrested — whether they admit it or not — in the park’n’ride zone of Francis Fukuyama’s 1989 essay “The End of History?” Adam Kucharski — a mathematician who lectures at the London School of Hygiene and Tropical Medicine — keeps his foot on the gas. The point of his book is that abstraction makes speculation not just possible but essential. Gambling isn’t any kind of “underside” to the legitimate economy. It is the economy’s entire basis, and “the line between luck and skill — and between gambling and investing — is rarely as clear as we think.” (204)

When we don’t know everything, we have to speculate to progress. Speculation is by definition an insecure business, so we put a great deal of effort into knowing everything. The hope is that, the more cards we count, and the more attention we pay to the spin of the wheel, the more accurate our bets will become. This is the meat of Kucharski’s book, and occasions tremendous, spirited accounts of observational, mathematical, and computational derring-do among the blackjack and roulette tables of Las Vegas and Monte Carlo. On one level, The Perfect Bet is a serviceable book about professional gambling.

When we come to the chapter on sports betting, however, the thin line between gambling and investment vanishes entirely, and Kucharski carries us into some strange territory indeed.

Lay a bet on a tennis match: “if one bookmaker is offering odds of 2.1 on Nadal and another is offering 2.1 on Djokovic, betting $100 on each player will net you $210 — and cost you $100 — whatever the result. Whoever wins, you walk away with a profit of $10.” (108) You don’t need to know anything about tennis. You don’t even need to know the result of the match.
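Kucharski’s sure bet generalises: an arbitrage exists whenever the bookmakers’ implied probabilities (one over the decimal odds) sum to less than one. A minimal sketch of that arithmetic, using the odds from his tennis example — the function names are mine, not Kucharski’s:

```python
def is_arbitrage(odds):
    """Decimal odds for each mutually exclusive outcome.
    An arbitrage exists when the implied probabilities sum below 1."""
    return sum(1 / o for o in odds) < 1

def equal_profit_stakes(odds, total_stake):
    """Split a total stake so the payout is the same whichever outcome wins."""
    implied = [1 / o for o in odds]
    book = sum(implied)
    return [total_stake * p / book for p in implied]

odds = [2.1, 2.1]              # one bookmaker on Nadal, another on Djokovic
stakes = equal_profit_stakes(odds, 200)
payout = stakes[0] * odds[0]   # same for either outcome, by construction

print(is_arbitrage(odds))      # True: 1/2.1 + 1/2.1 is about 0.952, below 1
print(stakes)                  # [100.0, 100.0]
print(payout - sum(stakes))    # roughly 10.0 profit, whoever wins
```

The closer the implied probabilities sum to 1, the thinner the guaranteed margin, which is why, as Kucharski notes, such bets only pay when made in bulk and at speed.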

Ten dollars is not a great deal of money, so these kinds of bets have to be made in bulk and at great speed to produce a healthy return. Which is where the robots come in: trading algorithms that — contrary to popular myth — are kept simple (rarely running to more than ten lines of code) so that they stay speedy. That simplicity is no small problem when you’re trying to automate the business of gaming the entire world. In 2013 — a decade after the US Senate stumbled across Robin Hanson’s “policy market” idea — the S&P 500 stock index took a brief $136 billion dive when trading algorithms responded instantly to a malicious tweet claiming bombs had gone off in the White House.

The subtitle of Kucharski’s book states that “science and maths are taking the luck out of gambling”, and there’s little here to undercut the gloomy forecast. But Kucharski is also prosecuting a cleverer, more entertaining, and ultimately more disturbing line of argument. He is placing gambling at the heart of the body politic.

Risk reduction is every serious gambler’s avocation. The gambler is not there to take part. The gambler isn’t there to win. The gambler is there to find an edge, spot the tell, game the table, solve the market. The more parts, and the more interactions, the harder this is to do, but while it is true that the world is not simply deterministic, at a human scale, frankly, it might as well be.

In this smartphone-enabled and metadata-enriched world, complete knowledge of human affairs is becoming more or less possible. And imagine it: if we ever do crack our own markets, then the scope for individual action shrinks to a green zero. And we are done.

Is boredom good for us?


Sandi Mann’s The Upside of Downtime and Felt Time: The psychology of how we perceive time by Marc Wittmann reviewed for New Scientist, 13 April 2016.

 

VISITORS to New York’s Museum of Modern Art in 2010 got to meet time, face-to-face. For her show The Artist is Present, Marina Abramovic sat, motionless, for 7.5 hours at a stretch while visitors wandered past her.

Unlike all the other art on show, she hadn’t “dropped out” of time: this was no cold, unbreathing sculpture. Neither was she time’s plaything, as she surely would have been had some task engaged her. Instead, Marc Wittmann, a psychologist based in Freiburg, Germany, reckons that Abramovic became time.

Wittmann’s book Felt Time explains how we experience time, posit it and remember it, all in the same moment. We access the future and the past through the 3-second chink that constitutes our experience of the present. Beyond this interval, metronome beats lose their rhythm and words fall apart in the ear.

As unhurried and efficient as an ophthalmologist arriving at a prescription by placing different lenses before the eye, Wittmann reveals, chapter by chapter, how our view through that 3-second chink is shaped by anxiety, age, boredom, appetite and feeling.

Unfortunately, his approach smacks of the textbook, and his attempt at a “new solution to the mind-body problem” is a mess. However, his literary allusions – from Thomas Mann’s study of habituation in The Magic Mountain to Sten Nadolny’s evocation of the present moment in The Discovery of Slowness – offer real insight. Indeed, they are an education in themselves for anyone with an Amazon “buy” button to hand.

As we read Felt Time, do we gain most by mulling Wittmann’s words, even if some allusions are unfamiliar? Or are we better off chasing down his references on the internet? Which is the more interesting option? Or rather: which is “less boring”?

Sandi Mann’s The Upside of Downtime is also about time, inasmuch as it is about boredom.

Once we delighted in devices that put all knowledge and culture into our pockets. But our means of obtaining stimulation have become so routine that they have themselves become a source of boredom. By removing the tedium of waiting, says psychologist Mann, we have turned ourselves into sensation junkies. It’s hard for us to pay attention to a task when more exciting stimuli are on offer, and being exposed to even subtle distractions can make us feel more bored.

Sadly, Mann’s book demonstrates the point all too well. It is a design horror: a mess of boxed-out paragraphs and bullet-pointed lists. Each is entertaining in itself, yet together they render Mann’s central argument less and less engaging, for exactly the reasons she has identified. Reading her is like watching a magician take a bullet to the head while “performing” Russian roulette.

In the end Mann can’t decide whether boredom is a good or bad thing, while Wittmann’s more organised approach gives him the confidence he needs to walk off a cliff as he tries to use the brain alone to account for consciousness. But despite the flaws, Wittmann is insightful and Mann is engaging, and, praise be, there’s always next time.

 

Eugenic America: how to exclude almost everyone


Imbeciles: The Supreme Court, American eugenics, and the sterilization of Carrie Buck by Adam Cohen (Penguin Press)

Defectives in the Land: Disability and immigration in the age of eugenics by Douglas C. Baynton (University of Chicago Press)

for New Scientist, 22 March 2016

ONE of 19th-century England’s last independent “gentleman scientists”, Francis Galton was the proud inventor of underwater reading glasses, an egg-timer-based speedometer for cyclists, and a self-tipping top hat. He was also an early advocate of eugenics, and his Hereditary Genius was published two years after the first part of Karl Marx’s Das Kapital.

Both books are about the betterment of the human race: Marx supposed environment was everything; Galton assumed the same for heredity. “If a twentieth part of the cost and pains were spent in measures for the improvement of the human race that is spent on the improvement of the breed of horses and cattle,” he wrote, “what a galaxy of genius might we not create! We might introduce prophets and high priests of civilisation into the world, as surely as we… propagate idiots by mating cretins.”

What would such a human breeding programme look like? Would it use education to promote couplings that produced genetically healthy offspring? Or would it discourage or prevent pairings that would otherwise spread disease or dysfunction? And would it work by persuasion or by compulsion?

The study of what was then called degeneracy fell to a New York social reformer, Richard Louis Dugdale. During an 1874 inspection of a jail in New York State, Dugdale learned that six of the prisoners there were related. He traced the Jukes family tree back six generations, and found that some 350 people related to this family by blood or marriage were criminals, prostitutes or destitute.

Dugdale concluded that, like genius, “degeneracy” runs in families, but his response was measured. “The licentious parent makes an example which greatly aids in fixing habits of debauchery in the child. The correction,” he wrote, “is change of the environment… Where the environment changes in youth, the characteristics of heredity may be measurably altered.”

Other reformers were not so circumspect. An Indiana reformatory promptly launched a eugenic sterilisation effort, and in 1907 Indiana enacted the world’s first compulsory sterilisation statute. California followed suit in 1909. Between 1927 and 1979, Virginia forcibly sterilised at least 7450 “unfit” people. One of them was Carrie Buck, a woman labelled feeble-minded and kept ignorant of the details of her own case right up to the point in October 1927 when her fallopian tubes were tied and cauterised using carbolic acid and alcohol.

In Imbeciles, Adam Cohen follows Carrie Buck through the US court system, past the desks of one legal celebrity after another, and not one of them, not William Howard Taft, not Louis Brandeis, not Oliver Wendell Holmes Jr, gave a damn about her.

Cohen anatomises in pitiless detail how inept civil society can be at assimilating scientific ideas. He also does a good job explaining why attempts to manipulate the genetic make-up of whole populations can only fail to improve the genetic health of our species. Eugenics fails because it looks for genetic solutions to what are essentially cultural problems. The anarchist biologist Peter Kropotkin made this point as far back as 1912. Who were unfit, he asked the first international eugenics congress in London: workers or monied idlers? Those who produced degenerates in slums or those who produced degenerates in palaces? Culture casts a huge influence over the way we live our lives, hopelessly complicating our measures of strength, fitness and success.

Readers of Cohen’s book would also do well to watch out for Douglas Baynton’s Defectives in the Land, to be published in June. Focusing on immigrant experiences in New York, Baynton explains how ideas about genetics, disability, race, family life and employment worked together to exclude an extraordinarily diverse range of men and women from the shores of the US.

“Doesn’t this squashy sentimentality of a big minority of our people about human life make you puke?” Holmes once exclaimed. Holmes was a miserable bigot, but he wasn’t wrong to thirst for more rigour in our public discourse. History is not kind to bad ideas.

How the forces inside cells actually behave


Animal Electricity: How we learned that the body and brain are electric machines by Robert B. Campenot (Harvard University Press) for New Scientist, 9 March 2016.

IF YOU stood at arm’s length from someone and each of you had 1 per cent more electrons than protons, the force pushing the two of you apart would be enough to lift a “weight” equal to that of the entire Earth.
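Coulomb’s law bears the claim out to within an order of magnitude. A rough back-of-envelope check, with every input here my own assumption rather than Feynman’s figure: a 70 kg person who is about half protons by mass, an arm’s length of 0.7 metres, and “the weight of the Earth” taken as Earth’s mass times surface gravity.

```python
PROTON_MASS = 1.67e-27   # kg
E_CHARGE = 1.602e-19     # C, elementary charge
COULOMB_K = 8.99e9       # N m^2 / C^2
EARTH_MASS = 5.97e24     # kg

protons = 0.5 * 70 / PROTON_MASS        # protons in one person, roughly
excess = 0.01 * protons * E_CHARGE      # 1 per cent charge imbalance, in coulombs
force = COULOMB_K * excess**2 / 0.7**2  # Coulomb repulsion at arm's length
earth_weight = EARTH_MASS * 9.81        # N

print(f"force: {force:.1e} N")
print(f"ratio to Earth's weight: {force / earth_weight:.2f}")
# The ratio lands within a factor of a few of 1: the same order of
# magnitude as lifting the Earth, just as Feynman said.
```

The crude inputs land the answer within a factor of a few of the Earth’s weight, which is all an estimate like Feynman’s ever promised.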

This startling observation, from Richard Feynman’s Lectures on Physics, so impressed cell biologist Robert Campenot he based quite a peculiar career around it. Not content with the mechanical metaphors of molecular biology, Campenot has studied living tissue as a delicate and complex mechanism that thrives by tweaking tiny imbalances in electrical charge.

If only the book were better prepared. Campenot’s enthusiasm for Feynman has him repeat the anecdote about lifting the world almost word for word, in the preface and introduction. Duplicating material is a surprisingly easy gaffe for a writer, and it is why we have editors. Where were they?

Campenot’s generous account ranges from Galvani’s discovery of animal electricity to the development of thought-controlled prosthetic limbs. He has high regard for popular science. But his is the rather fussy appreciation of the academic outsider who, uncertain of the form’s aesthetic potential, praises it for its utility. “The value of popularising science should never be underestimated because it occasionally attracts the attention of people who go on to make major contributions.” The pantaloonish impression he makes here is not wholly unrepresentative of the book.

Again, one might wish Campenot’s relationship with his editor had been more creative. Popular science writing rarely handles electricity well, let alone ion channels and membrane potentials. So, when it comes to developing suitable metaphors, Campenot is thrown on his own resources. His metaphors are as effective as one could wish for, but they suffer from repetition. One imagines the author wondering if he has done enough to nail his point, but with no one to reassure him.

Faults aside, this is a good book. Its mix of schoolroom electricity and sophisticated cell biology is highly eccentric but this, I think, speaks much in Campenot’s favour. The way organic tissue manipulates electricity, sending signals in broad electrical waves that can extend up to a third of a metre, is a dimension of biology we have taken on trust, domesticating it behind high-order metaphors drawn from computer science. Consequently, we have been unable to visualise how the forces in our cells actually behave. This was bound to turn out an odd endeavour. So be it. The odder, the better, in fact.

Putting the wheel in its place

wheel

The Wheel: Inventions and reinventions by Richard W. Bulliet (Columbia University Press), for New Scientist, 20 January 2016

IN 1870, a year after the first rickshaws appeared in Japan, three inventors separately applied for exclusive rights. Already, there were too many workshops serving the burgeoning market.

We will never know which of them, if any, invented this internationally popular, stackable, hand-drawn passenger cart. Just three years after its invention, the rickshaw had totally displaced the palanquin (a covered litter carried on the shoulders of two bearers) as the preferred mode of passenger transport in Japan.

What made the rickshaw so different from a wagon or an ox-cart and, in the eyes of many Westerners, so cruel, was the idea of it being pulled by a man instead of a farm animal. Pushing wheelchairs and baby carriages posed no problem, but pulling turned a man into a beast. “This quirk of perception,” Bulliet says, “reflects a history of human-animal relations that the Japanese – who ate little red meat, had few large herds of cattle and horses, and seldom used animals to pull vehicles – did not share with Westerners.”

To some questions that seem far more difficult, Bulliet provides extraordinarily precise answers. He proposes an exact origin for the wheel: the wheel-set design, whereby wheels are fixed to rotating axles, was invented for use on mine cars in the copper mines of the Carpathian mountains, perhaps as early as 4000 BC.

Other questions remain intractable. Why did wheeled vehicles not catch on in pre-Columbian America? The peoples of North and South America did not use wheels for transportation before Christopher Columbus arrived. They made wheeled toys, though. Cattle-herding societies from Senegal to Kenya were not taken in by wheels either, though they were happy enough to feature the chariots of visitors in their rock paintings.

Bulliet has a lot of fun teasing generations of anthropologists, archaeologists and historians for whom the wheel has been a symbol of self-evident utility: how could those foreign types not get it? His answer is radical: the wheel is actually not that great an idea. It only really came into its own once John McAdam, a Scot born in 1756, introduced a superior way to build roads. It’s worth remembering that McAdam insisted the best way to manufacture the small, sharp-edged stones he needed was to have workers, including women and children, sit beside the road and break up larger rocks. So much for progress.

The wheel revolution is, to Bulliet’s mind, a recent and largely human-powered one. Bicycles, shopping carts, baby strollers, dollies, gurneys and roll-aboard luggage: none of these was conceived before 1800. At the dawn of Europe’s Renaissance, in the 14th century, four-wheeled vehicles were not in common use anywhere in the world.

Bulliet ends his history with the oddly conventional observation that “invention is seldom a simple matter of who thought of something first”. He could have challenged the modern shibboleth (born in Samuel Butler’s Erewhon and given mature expression in George Dyson’s Darwin Among the Machines) that technology evolves. Add energy to an unbounded system, and complexity is pretty much inevitable. There is nothing inevitable about technology, though; human agency cannot be ignored. Even a technology as ubiquitous as the wheel turns out to be a scrappy hostage to historical contingency.

I may be misrepresenting the author’s argument here. It is hard to tell, because Bulliet approaches the philosophy of technology quite gingerly. He can afford to release the soft pedal. This is a fascinating book, but we need more, Professor Bulliet!

The disaster of the cloud itself

cloud

Tung-Hui Hu’s A Prehistory of the Cloud reviewed for New Scientist

LAST week, to protect my photographs of a much-missed girlfriend, I told all my back-up services to talk to each other. My snaps have since been multiplying like the runaway brooms in Disney’s Fantasia, and I have spent days trying to delete them.

Apart from being an idiot, I got into this fix because my data has been placed at one invisible but crucial remove in the cloud, zipping between energy-hungry servers scattered across the globe at the behest of algorithms I do not understand.

By duplicating our digital media to different servers, we insure against loss. The more complex and interwoven these back-up systems become, though, the more insidious our losses. Sync errors swallow documents whole. In the hands of most of us, JPEGs degrade a tiny bit each time they are saved. And all formats fall out of fashion eventually.

“Thus disaster recovery in the cloud often protects us against the disaster of the cloud itself,” says Tung-Hui Hu, a former network engineer whose A Prehistory of the Cloud poses some hard questions of our digital desires. Why are our commercial data centres equipped with iris and palm recognition systems? Why is Stockholm’s most highly publicised data centre housed in a bunker originally built to defend against nuclear attack?

Hu identifies two impulses: “First, a paranoid desire to pre-empt the enemy by maintaining vigilance in the face of constant threat, and second, a melancholic fantasy of surviving the eventual disaster by entombing data inside highly secured data vaults.”

The realm of the cloud does not countenance loss, but when we touch it, we corrupt it. The word for such a system – a memory that preserves, encrypts and mystifies a lost love-object – is melancholy. Hu’s is a deeply melancholy book and for that reason, a valuable one.

How we see now

For New Scientist, a review of Nicholas Mirzoeff’s book How to See the World

NICHOLAS MIRZOEFF, a media, culture and communication professor at New York University, wants to justify the study of visual culture by describing, accessibly, how strange our visual world has become.

This has been done before. In 1972 artist and writer John Berger made Ways of Seeing, a UK TV series and a book. This was also the year that astronaut Harrison Schmitt took the Blue Marble picture of Earth from Apollo 17, arguably the most reproduced photograph ever.

By contrast, in How to See the World, Mirzoeff’s mascot shot is the selfie taken by astronaut Akihiko Hoshide during his 2012 spacewalk. This time, Earth is reflected in Hoshide’s visor: the planet is physically different and changing fast. Transformations that would have been invisible to humans because they took place so slowly now occur in a single life. “We have to learn to see the Anthropocene,” writes Mirzoeff.

Images are ubiquitous, and we have learned to read them as frames in a giant, self-assembling graphic novel. Visual meaning is found in the connections we make between those images. We used to flock to the cinema for that sort of peculiar dream logic, but now we struggle to awaken. Mirzoeff cites artist Clement Valla writing that “we are already in the Matrix”.

Simple iconography is in retreat. During the 1962 Cuban missile crisis, Soviet missile trailers were visible in photos shown to the media. By 2003, the photos that US general Colin Powell showed of supposed weapons-making kit were lathered in yellow labelling, claiming to show what we could not in fact see.

Tracing the political, social and environmental implications of our visual culture, in words and black-and-white images, is a job of work. Mirzoeff succeeds: this is a dizzying and delightful book.