The Righteous Mind

July 2, 2015

Morality is more than just right and wrong: it's the evolutionary result of a range of competing moral intuitions. The failure to appreciate this is a major cause of political partisanship and deadlock.

The Righteous Mind was my first introduction to the field of moral psychology, the study of human beliefs about right and wrong. Jonathan Haidt begins with some shocking empirical results that show we’re not nearly as rational in our moral opinions as we’d like to think, and follows up with well-argued (though ultimately theoretical) evolutionary explanations for why our moral minds work the way they do. Throughout, he applies the lessons of moral psychology where the stakes are highest—the realm of politics—to help us understand the roots, and the way out, of entrenched partisanship. (Spoiler: If you think it’s the other side’s fault, it’s yours.)

I’d name The Righteous Mind the most important book I read in 2014, and it’s no coincidence that it was also the most challenging to my own views.

If you’re despairing—or just dog-tired—of living in a country where the other half of folks are either stupid or crazy, then take heart: there’s a way out of that world, and it’s realizing that we don’t actually live in that world. Self-righteousness has a way of camouflaging itself by making it seem like those who disagree with you are the self-righteous ones. In truth, the last hundred thousand years of human history made us all that way, but by recognizing how and why, we can begin to move past it.

“Intuitions come first, strategic reasoning second.”

Haidt begins by making the case, supported by his research, that the role of reason in our beliefs about moral issues is precisely the opposite of what we think. Reason is typically employed not to arrive at moral truth, but to convince others of the moral truth we’ve already taken for granted. In this way, our reasoning is less an impartial judge than a public-relations shill for our gut feelings. In the lab, human reason enters the picture only after our intuitions have made up our moral minds for us, and reason’s role is to grasp at the justifications for those intuitions which are most likely to convince other members of our species.

We are, of course, capable of impartial reasoning, but the conditions under which we engage in it are exceedingly rare. Haidt summarizes the research of Phil Tetlock, who studies accountability. Tetlock has found

two very different kinds of careful reasoning. Exploratory thought is an “evenhanded consideration of alternative points of view.” Confirmatory thought is “a one-sided attempt to rationalize a particular point of view.” Accountability increases exploratory thought only when three conditions apply: (1) decision makers learn before forming any opinion that they will be accountable to an audience, (2) the audience’s views are unknown, and (3) they believe the audience is well informed and interested in accuracy. When all three conditions apply, people do their darnedest to figure out the truth, because that’s what the audience wants to hear. But the rest of the time—which is almost all of the time—accountability pressures simply increase confirmatory thought.

That last part is key. It’s not just accountability to others that makes us more honest in our reasoning, but accountability to a well-informed group, some of whom may be inclined to disagree with us. This seems to me like a strong case for political dialogue. However reasonable we like to think we are, unless we’re justifying our opinions to an audience we don’t already know agrees with us, we’re likely just engaging in confirmatory reasoning, which serves many ends but never the truth. What’s more, I suspect this is why we find it so hard to argue constructively with our ideological opposites. It’s not because they’re unreasonable, but because we use the same hollow, “confirmatory” reasoning that we’re used to using with our fellow believers, reasoning that might better be described as a kind of social grooming.

Where Do Our Moral Intuitions Come From?

If our reasoning is typically employed to convince others of our moral intuitions, where do those intuitions come from? To answer this question, Haidt engaged in a series of experiments in which participants were faced with hypothetical moral quandaries and asked to justify what they thought was the right decision. Haidt and his colleagues then examined the justifications and found that they fall into six main categories, axes along which people judge right and wrong, which he dubbed “moral foundations”: care/harm, fairness/cheating, liberty/oppression, loyalty/betrayal, authority/subversion, and sanctity/degradation. Haidt also offers plausible (though fundamentally unverifiable) evolutionary explanations for why humans come pre-loaded with these moral software modules.

We all come pre-wired with these six moral faculties, though one’s personal experience and social group (including political parties) have a significant effect on which faculties get more weight. Haidt’s research mostly bears out the distributions you might expect. Self-identified progressives are most sensitive to care concerns, followed by liberty and fairness. Libertarians put liberty concerns above all others. But what surprised me was that conservatives tend to value all six moral foundations pretty equally.

This is the finding that should be most eye-opening to progressives like me. We’re so often baffled by conservatives not because they’re dumb, or crazy, or racist—but rather, because their moral spectrum is twice as broad as ours. In Haidt’s own words:

Liberals looked for psychological explanations of conservatism, but not liberalism. We supported liberal policies because we saw the world clearly and wanted to help people, but they supported conservative policies out of pure self-interest (lower my taxes!) or thinly veiled racism (stop funding welfare programs for minorities!). We never considered the possibility that there were alternative moral worlds in which reducing harm (by helping victims) and increasing fairness (by pursuing group-based equality) were not the main goals. And if we could not imagine other moralities, then we could not believe that conservatives were as sincere in their moral beliefs as we were in ours.

These additional moral dimensions undermine the progressive sport (with which any Daily Show viewer will be familiar) of pointing out contradictions in conservatives’ positions, for example, that someone can be both pro-life and anti-gun-control. This is a contradiction only when judged along a single moral axis, that of care vs. harm. If progressives see a contradiction here, it’s only because they’re imposing their own, especially narrow moral standard. For a conservative, abortion spans several moral axes: care/harm to be sure, but also fairness/cheating (seeing abortion as a way of avoiding the proper consequences of one’s behavior), sanctity/degradation (the sanctity of human life), and authority/subversion (for those who submit to the authority of the church). Gun control is likewise more morally nuanced than many progressives hold: it touches care/harm concerns, but for someone who knows they’re not going to harm anyone, the larger moral concern is liberty/oppression (seeing progressives as imposing their will on the group). Thus, from the perspective of a full six-foundation morality, there’s no contradiction.

Of course, Haidt isn’t arguing that we should all be conservatives. We may have legitimate reasons to favor some moral foundations over others. In the 21st century, with modern sanitation and refrigeration, it’s easy to see the sanctity/degradation foundation as vestigial, one that shouldn’t compete with other, more relevant moral concerns. But the main point is that morality is a rich canvas, and progressives must learn to see all of its colors as conservatives do if we ever hope to appear relevant to them.

Interestingly, Haidt’s research shows that conservatives understand liberals far better than liberals understand conservatives. In one study, subjects were asked to self-identify politically and to respond to moral questions as if they were from the other side of the aisle.

Who was best able to pretend to be the other? The results were clear and consistent. Moderates and conservatives were most accurate in their predictions, whether they were pretending to be liberals or conservatives. Liberals were the least accurate, especially those who described themselves as “very liberal.” The biggest errors in the whole study came when liberals answered the Care and Fairness questions while pretending to be conservatives. When faced with questions such as “One of the worst things a person could do is hurt a defenseless animal” or “Justice is the most important requirement for a society,” liberals assumed that conservatives would disagree. If you have a moral matrix built primarily on intuitions about care and fairness (as equality), and you listen to the Reagan narrative, what else could you think? Reagan seems completely unconcerned about the welfare of drug addicts, poor people, and gay people. He’s more interested in fighting wars and telling people how to run their sex lives.

I think that’s worth reiterating: conservatives understand liberal concerns quite well; they just see additional concerns worth considering that liberals don’t. I’ll remember this the next time my political intuitions nudge me to see conservatives as the narrow-minded ones…

Moral Capital

So what of the three moral foundations we progressives tend to ignore—loyalty/betrayal, authority/subversion, and sanctity/degradation? It turns out there’s good reason to at least consider them. Haidt introduces the notion of moral capital, the glue that binds societies together. Change too much and too quickly, and people begin to feel less a part of society, less beholden to its rules and expectations. At the extreme, societies devolve into anarchy.

If you are trying to change an organization or a society and you do not consider the effects of your changes on moral capital, you’re asking for trouble. This, I believe, is the fundamental blind spot of the left. It explains why liberal reforms so often backfire, and why communist revolutions usually end up in despotism. It is the reason I believe that liberalism—which has done so much to bring about freedom and equal opportunity—is not sufficient as a governing philosophy. It tends to overreach, change too many things too quickly, and reduce the stock of moral capital inadvertently. Conversely, while conservatives do a better job of preserving moral capital, they often fail to notice certain classes of victims, fail to limit the predations of certain powerful interests, and fail to see the need to change or update institutions as times change.

The importance of the loyalty/betrayal, authority/subversion, and sanctity/degradation moral foundations lies in the way they tend to preserve moral capital:

A more positive way to describe conservatives is to say that their broader moral matrix allows them to detect threats to moral capital that liberals cannot perceive. They do not oppose change of all kinds (such as the Internet), but they fight back ferociously when they believe that change will damage the institutions and traditions that provide our moral exoskeletons (such as the family). Preserving those institutions and traditions is their most sacred value.

Change, even change for the better, has a cost in moral capital. That doesn’t mean that change shouldn’t happen, but it does mean that it can only go so far and so fast without fraying the fabric of society. History shows us what happens when societies spend down their balance of moral capital before it’s had a chance to replenish, and it’s pretty much always ugly.