Cutequake

Reading Irresistible by Joshua Paul Dale for New Scientist, 15 November 2023

The manhole covers outside Joshua Dale’s front door sport colourful portraits of manga characters. Hello Kitty, “now one of the most powerful licensed characters in the world”, appears on road-construction barriers at the end of his road, alongside various cute cartoon frogs, monkeys, ducks, rabbits and dolphins. Dale lives in Tokyo, epicentre of a “cutequake” that has conquered mass media (the Pokémon craze, begun in 1996, has become arguably the highest-grossing media franchise of all time) and now encroaches, at pace, upon the wider civic realm. The evidence? Well, for a start there are those four-foot-high cutified police-officer mannequins standing outside his local police station…

Do our ideas of and responses to cute have a behavioural or other biological basis? How culturally determined are our definitions of what is and is not cute? Why is the depiction of cute on the rise globally, and why, of all places, did cute originate (as Dale ably demonstrates) in Japan?

Dale makes no bones about his ambition: he wants to found a brand-new discipline, a field of “cute studies”. His efforts are charmingly recorded in this first-person account that tells us a lot (and plenty that is positive) about the workings of modern academia. Dale’s interdisciplinary field will combine studies of domestication and neoteny (the retention of juvenile features in adult animals), embryology, the history of art, the anthropology of advertising and any number of other disparate fields in an effort to explain why we cannot help grinning foolishly at hyper-simplified line drawings of kittens.

Cute appearances are merely heralds of cute behaviour, and it’s this behaviour — friendly, clumsy, open, plastic, inventive, and mischievous — that repays study the most. A species that plays together, adapts together. Play bestows a huge evolutionary advantage on animals that can afford never to grow up.

But there’s the sting: for as long as life is hard and dangerous, animals can’t afford to remain children. Adult bonobos are playful and friendly, but then, bonobos have no natural predators. Their evolutionary cousins the chimpanzees have much tougher lives. You might get a decent game of checkers out of a juvenile chimp, but with the adults it’s an altogether different story.

The first list of cute things (in The Pillow Book), and the first artistic depictions of gambolling puppies and kittens (in the “Scroll of Frolicking Animals”) come from Japan’s Heian period, running from 794 to 1185 – a four-century-long period of peace. So what’s true at an evolutionary scale seems to have a strong analogue in human history, too. In times of peace, cute encourages affiliation.

If I asked you to give me an example of something cute, you’d most likely mention a cub or kitten or other baby animal, but Dale shows that infant care is only the most emotive and powerful social engagement that cute can release. Cute is a social glue of much wider utility. “Cuteness offers another way of relating to the entities around us,” Dale writes; “its power is egalitarian, based on emotion rather than logic and on being friendly rather than authoritarian.”

Is this welcome? I’m not sure. There’s a clear implication here that cute can be readily weaponised — a big-eyed soft-play Trojan Horse, there to emotionally nudge us into heaven knows what groupthink folly.

Nor, upon finishing the book, did I feel entirely comfortable with an aesthetic that, rather than getting us to take young people seriously, would rather reject the whole notion of maturity.

Dale, a cheerful and able raconteur, has written a cracking story here, straddling history, art, and some complex developmental science, and though he doesn’t say so, he’s more than adequately established that this is, after all, the way the world ends: not with a bang but a “D’awww!”

Pig-philosophy

Reading Science and the Good: The Tragic Quest for the Foundations of Morality
by James Davison Hunter and Paul Nedelisky (Yale University Press) for the Telegraph, 28 October 2019

Objective truth is elusive and often surprisingly useless. For ages, civilisation managed well without it. Then came the sixteenth century, and the Wars of Religion, and the Thirty Years War: atrocious conflicts that robbed Europe of up to a third of its population.

Something had to change. So began a half-a-millennium-long search for a common moral compass: something to keep us from wringing each other’s necks. The 18th-century French philosopher Condorcet, writing in 1794, expressed the evergreen hope that empiricists, applying themselves to the study of morality, would be able “to make almost as sure progress in these sciences as they had in the natural sciences.”

Today, are we any nearer to understanding objectively how to tell right from wrong?

No. So say James Davison Hunter, a sociologist who in 1991 slipped the term “culture wars” into American political debate, and Paul Nedelisky, a recent philosophy PhD, both from the University of Virginia. For sure, “a modest descriptive science” has grown up to explore our foibles, strengths and flaws, as individuals and in groups. There is, however, no way science can tell us what ought to be done.

Science and the Good is a closely argued, always accessible riposte to those who think scientific study can explain, improve, or even supersede morality. It tells a rollicking good story, too, as it explains what led us to our current state of embarrassed moral nihilism.

“What,” the essayist Michel de Montaigne asked, writing in the late 16th century, “am I to make of a virtue that I saw in credit yesterday, that will be discredited tomorrow, and becomes a crime on the other side of the river?”

Montaigne’s times desperately needed a moral framework that could withstand the almost daily schisms and revisions of European religious life following the Protestant Reformation. Nor was Europe any longer a land to itself. Trade with other continents was bringing Europeans into contact with people who, while eminently businesslike, held to quite unfamiliar beliefs. The question was (and is), how do we live together at peace with our deepest moral differences?

The authors have no simple answer. The reason scientists keep trying to formulate one is the same reason the farmer tried teaching his sheep to fly in the Monty Python sketch: “Because of the enormous commercial possibilities should he succeed.” Imagine conjuring up a moral system that was common, singular and testable: world peace would follow in an instant!

But for every Jeremy Bentham, measuring moral utility against an index of human happiness to inform a “felicific calculus”, there’s a Thomas Carlyle, pointing out the crashing stupidity of the enterprise. (Carlyle called Bentham’s 18th-century utilitarianism “pig-philosophy”, since happiness is the sort of vague, unspecific measure you could just as well apply to animals as to people.)

Hunter and Nedelisky play Carlyle to the current generation of scientific moralists. They range widely in their criticism, and are sympathetic to a fault, but to show what they’re up to, let’s have some fun and pick a scapegoat.

In Moral Tribes (2014), Harvard psychologist Joshua Greene sings Bentham’s praises: “utilitarianism becomes uniquely attractive,” he asserts, “once our moral thinking has been objectively improved by a scientific understanding of morality…”

At worst, this is a statement that eats its own tail. At best, it’s Greene reducing the definition of morality to fit his own specialism, replacing moral goodness with the merely useful. This isn’t nothing, and is at least something which science can discover. But it is not moral.

And if Greene decided tomorrow that we’d all be better off without, say, legs, practical reason, far from faulting him, could only show us how to achieve his goal in the most efficient manner possible. The entire history of the 20th century should serve as a reminder that this kind of thinking — applying rational machinery to a predetermined good — is a joke that palls extremely quickly. Nor are vague liberal gestures towards “social consensus” comforting, or even welcome. As the authors point out, “social consensus gave us apartheid in South Africa, ethnic cleansing in the Balkans, and genocide in Armenia, Darfur, Burma, Rwanda, Cambodia, Somalia, and the Congo.”

Scientists are on safer ground when they attempt to explain how our moral sense may have evolved, arguing that morals aren’t imposed from above or derived from well-reasoned principles, but are values derived from reactions and judgements that improve the odds of group survival. There’s evidence to back this up and much of it is charming. Rats play together endlessly; if the bigger rat wrestles the smaller rat into submission more than three times out of five, the smaller rat trots off in a huff. Hunter and Nedelisky remind us that capuchin monkeys will “down tools” if experimenters offer them a reward smaller than one they have already offered to other capuchins.

What does this really tell us, though, beyond the fact that somewhere, out there, is a lawful corner of necessary reality which we may as well call universal justice, and which complex creatures evolve to navigate?

Perhaps the best scientific contribution to moral understanding comes from studies of the brain itself. Mapping the mechanisms by which we reach moral conclusions is useful for clinicians. But it doesn’t bring us any closer to learning what it is we ought to do.

Sociologists since Edward Westermarck in 1906 have shown how a common (evolved?) human morality might be expressed in diverse practices. But over this is the shadow cast by moral scepticism: the uneasy suspicion that morality may be no more than an emotive vocabulary without content, a series of justificatory fabrications. “Four legs good,” as Snowball had it, “two legs bad.”

But even if it were shown that no-one in the history of the world ever committed a truly selfless act, the fact remains that our mythic life is built, again and again, precisely around an act of self-sacrifice. Pharaonic Egypt had Osiris. Europe and its holdings, Christ. Even Hollywood has Harry Potter. Moral goodness is something we recognise in stories, and something we strive for in life (and if we don’t, we feel bad about ourselves). Philosophers and anthropologists and social scientists have lots of interesting things to say about why this should be so. The life sciences crew would like to say something, also.

But as this generous and thoughtful critique demonstrates, and to quite devastating effect, they just don’t have the words.

Prudery isn’t justice

Reading Objection: Disgust, morality, and the law by Debra Lieberman and Carlton Patrick for New Scientist, 15 September 2018

We want the law to be fair and objective. We also want laws that work in the real world, protecting and reassuring us, and maintaining our social and cultural values.

The moral dilemma is that we can’t have both. This may be because humans are hopelessly irrational and need a rational legal system to keep them in check. But it may also be that rationality has limits; trying to sit in judgement over everything is as cruel and farcical as gathering cats in a sack.

This dilemma is down to disgust, say Debra Lieberman, a psychologist at the University of Miami, and Carlton Patrick, a legal scholar at the University of Central Florida. In Objection, they join forces to consider why some acts strike us as disgusting without being reprehensible (like nose-picking), while others seem reprehensible without being disgusting (like drunk driving).

Disgust is such a powerful intuitive guide that it has informed our morality and hence our legal system. But it maps badly over a jurisprudence built on notions of harm and culpability.

Worse, terms of disgust are frequently wielded against people we intend to marginalise, making disgust a dangerously fissile element in our moral armoury.

Can science help us manage it? The prognosis is not good. If you were to ask a cultural anthropologist, a psychologist, a neuroscientist, a behavioural economist and a sociologist to explain disgust, you would receive different, often mutually contradictory, opinions.

The authors make their own job much more difficult, however, by endorsing a surreally naive model of the mind – one in which “both ’emotion’ and ‘cognition’ require circuitry” and it is possible to increase a child’s devotion to family by somehow manipulating this “circuitry”.

From here, the reader is ushered into the lollipop van of evolutionary psychology, where “disgust is best understood as a type of software program instantiated in our neural hardware”, which “evolved originally to guide our ancestors when making decisions about what to eat”.

The idea that disgust is to some degree taught and learned, conditioned by culture, class and contingency, is not something easily explored using the authors’ over-rigid model of the mind. Whenever they lay this model aside, however, they handle ambiguity well.

Their review of the literature on disgust is cogent and fair. They point out that although the decriminalisation of homosexuality and gay marriage argues persuasively for legal rationalism, there are other acts – like the violation of corpses – that we condemn without a strictly rational basis (the corpse isn’t complaining). This plays to the views of bioethicist Leon Kass, who calls disgust “the only voice left that speaks up to defend the central core of our humanity”.

Objection explores an ethical territory that sends legal purists sprawling. The authors emerge from this interzone battered, but essentially unbowed.