Pig-philosophy

Reading Science and the Good: The Tragic Quest for the Foundations of Morality
by James Davison Hunter and Paul Nedelisky (Yale University Press) for the Telegraph, 28 October 2019

Objective truth is elusive and often surprisingly useless. For ages, civilisation managed well without it. Then came the sixteenth century, and the Wars of Religion, and the Thirty Years' War: atrocious conflicts that robbed Europe of up to a third of its population.

Something had to change. So began a half-a-millennium-long search for a common moral compass: something to keep us from wringing each other’s necks. The 18th-century French philosopher Condorcet, writing in 1794, expressed the evergreen hope that empiricists, applying themselves to the study of morality, would be able “to make almost as sure progress in these sciences as they had in the natural sciences.”

Today, are we any nearer to understanding objectively how to tell right from wrong?

No. So say James Davison Hunter, a sociologist who in 1991 slipped the term “culture wars” into American political debate, and Paul Nedelisky, a recent philosophy PhD, both from the University of Virginia. For sure, “a modest descriptive science” has grown up to explore our foibles, strengths and flaws, as individuals and in groups. There is, however, no way science can tell us what ought to be done.

Science and the Good is a closely argued, always accessible riposte to those who think scientific study can explain, improve, or even supersede morality. It tells a rollicking good story, too, as it explains what led us to our current state of embarrassed moral nihilism.

“What,” the essayist Michel de Montaigne asked, writing in the late 16th century, “am I to make of a virtue that I saw in credit yesterday, that will be discredited tomorrow, and becomes a crime on the other side of the river?”

Montaigne’s times desperately needed a moral framework that could withstand the almost daily schisms and revisions of European religious life following the Protestant Reformation. Nor was Europe any longer a land to itself. Trade with other continents was bringing Europeans into contact with people who, while eminently businesslike, held to quite unfamiliar beliefs. The question was (and is), how do we live together at peace with our deepest moral differences?

The authors have no simple answer. The reason scientists keep trying to formulate one is the same reason the farmer tried teaching his sheep to fly in the Monty Python sketch: “Because of the enormous commercial possibilities should he succeed.” Imagine conjuring up a moral system that was common, singular and testable: world peace would follow in an instant!

But for every Jeremy Bentham, measuring moral utility against an index of human happiness to inform a “felicific calculus”, there’s a Thomas Carlyle, pointing out the crashing stupidity of the enterprise. (Carlyle called Bentham’s 18th-century utilitarianism “pig-philosophy”, since happiness is the sort of vague, unspecific measure you could just as well apply to animals as to people.)

Hunter and Nedelisky play Carlyle to the current generation of scientific moralists. They range widely in their criticism, and are sympathetic to a fault, but to show what they’re up to, let’s have some fun and pick a scapegoat.

In Moral Tribes (2014), Harvard psychologist Joshua Greene sings Bentham’s praises: “utilitarianism becomes uniquely attractive,” he asserts, “once our moral thinking has been objectively improved by a scientific understanding of morality…”

At worst, this is a statement that eats its own tail. At best, it’s Greene reducing the definition of morality to fit his own specialism, replacing moral goodness with the merely useful. This isn’t nothing, and is at least something which science can discover. But it is not moral.

And if Greene decided tomorrow that we’d all be better off without, say, legs, practical reason, far from faulting him, could only show us how to achieve his goal in the most efficient manner possible. The entire history of the 20th century should serve as a reminder that this kind of thinking — applying rational machinery to a predetermined good — is a joke that palls extremely quickly. Nor are vague liberal gestures towards “social consensus” comforting, or even welcome. As the authors point out, “social consensus gave us apartheid in South Africa, ethnic cleansing in the Balkans, and genocide in Armenia, Darfur, Burma, Rwanda, Cambodia, Somalia, and the Congo.”

Scientists are on safer ground when they attempt to explain how our moral sense may have evolved, arguing that morals aren’t imposed from above or derived from well-reasoned principles, but are values derived from reactions and judgements that improve the odds of group survival. There’s evidence to back this up and much of it is charming. Rats play together endlessly; if the bigger rat wrestles the smaller rat into submission more than three times out of five, the smaller rat trots off in a huff. Hunter and Nedelisky remind us that capuchin monkeys will “down tools” if experimenters offer them a reward smaller than the one already offered to other capuchin monkeys.

What does this really tell us, though, beyond the fact that somewhere, out there, is a lawful corner of necessary reality which we may as well call universal justice, and which complex creatures evolve to navigate?

Perhaps the best scientific contribution to moral understanding comes from studies of the brain itself. Mapping the mechanisms by which we reach moral conclusions is useful for clinicians. But it doesn’t bring us any closer to learning what it is we ought to do.

Sociologists since Edward Westermarck in 1906 have shown how a common (evolved?) human morality might be expressed in diverse practices. But over this is the shadow cast by moral scepticism: the uneasy suspicion that morality may be no more than an emotive vocabulary without content, a series of justificatory fabrications. “Four legs good,” as Snowball had it, “two legs bad.”

But even if it were shown that no-one in the history of the world ever committed a truly selfless act, the fact remains that our mythic life is built, again and again, precisely around an act of self-sacrifice. Pharaonic Egypt had Osiris. Europe and its holdings, Christ. Even Hollywood has Harry Potter. Moral goodness is something we recognise in stories, and something we strive for in life (and if we don’t, we feel bad about ourselves). Philosophers and anthropologists and social scientists have lots of interesting things to say about why this should be so. The life sciences crew would like to say something, also.

But as this generous and thoughtful critique demonstrates, and to quite devastating effect, they just don’t have the words.

Prudery isn’t justice

Reading Objection: Disgust, morality, and the law by Debra Lieberman and Carlton Patrick for New Scientist, 15 September 2018

We want the law to be fair and objective. We also want laws that work in the real world, protecting and reassuring us, and maintaining our social and cultural values.

The moral dilemma is that we can’t have both. This may be because humans are hopelessly irrational and need a rational legal system to keep them in check. But it may also be that rationality has limits; trying to sit in judgement over everything is as cruel and farcical as gathering cats in a sack.

This dilemma is down to disgust, say Debra Lieberman, a psychologist at the University of Miami, and Carlton Patrick, a legal scholar at the University of Central Florida. In Objection, they join forces to consider why we find some acts disgusting without being reprehensible (like nose-picking), while others seem reprehensible without being disgusting (like drunk driving).

Disgust is such a powerful intuitive guide that it has informed our morality and hence our legal system. But it maps badly over a jurisprudence built on notions of harm and culpability.

Worse, terms of disgust are frequently wielded against people we intend to marginalise, making disgust a dangerously fissile element in our moral armoury.

Can science help us manage it? The prognosis is not good. If you were to ask a cultural anthropologist, a psychologist, a neuroscientist, a behavioural economist and a sociologist to explain disgust, you would receive different, often mutually contradictory, opinions.

The authors make their own job much more difficult, however, by endorsing a surreally naive model of the mind – one in which “both ‘emotion’ and ‘cognition’ require circuitry” and it is possible to increase a child’s devotion to family by somehow manipulating this “circuitry”.

From here, the reader is ushered into the lollipop van of evolutionary psychology, where “disgust is best understood as a type of software program instantiated in our neural hardware”, which “evolved originally to guide our ancestors when making decisions about what to eat”.

The idea that disgust is to some degree taught and learned, conditioned by culture, class and contingency, is not something easily explored using the authors’ over-rigid model of the mind. Whenever they lay this model aside, however, they handle ambiguity well.

Their review of the literature on disgust is cogent and fair. They point out that although the decriminalisation of homosexuality and gay marriage argues persuasively for legal rationalism, there are other acts – like the violation of corpses – that we condemn without a strictly rational basis (the corpse isn’t complaining). This plays to the views of bioethicist Leon Kass, who calls disgust “the only voice left that speaks up to defend the central core of our humanity”.

Objection explores an ethical territory that sends legal purists sprawling. The authors emerge from this interzone battered, but essentially unbowed.