What is the relationship between emotions and morality?

Daryl Cameron

Think about the last time someone did something immoral: killed, cheated, stole, had sex with someone or something they shouldn’t have. Your immediate response was likely emotional: you probably felt negative and pretty “worked up” about it.  Although it was long assumed that our ability to make moral judgments had very little to do with our emotions, growing evidence now suggests the opposite is true.  Over two decades of research indicates that emotional states and moral judgments are intertwined.  In general, moral violations elicit negative emotions that in turn shape people’s judgments about the severity or importance of a moral infraction.  People who are more emotionally reactive tend to find moral violations more wrong and more deserving of punishment, and clinical populations with diminished emotions (e.g., psychopaths) make moral judgments differently than healthy individuals.  Causing people to feel emotions ranging from disgust and anger to compassion and amusement can change their moral judgments (e.g., whether they think something is wrong) and moral behavior (e.g., whether they punish someone else).  Broadly, these findings support the writings of Scottish philosophers David Hume and Adam Smith: who you blame depends (in part) on how you feel.

Some have suggested that it might matter which emotion you’re currently feeling, insofar as some emotions might be more related to certain types of moral violations than others.  When someone steals a marble rye from an elderly lady, you might feel anger.  When someone has sex with a marble rye, you might feel disgust. In fact, many psychological theories suggest that there are unique pairings between specific emotions and specific types of morality.  These theories carve morality into many different types, such as harm, which involves causing suffering (e.g., killing someone), and purity, which involves bodily/spiritual defilement (e.g., “unnatural” sex).  Harm and purity are considered distinct, like taste buds for sweet and sour.  The question is: are certain “moral taste buds” uniquely responsive to certain emotions, just as certain taste buds are uniquely responsive to certain “basic tastes” such as sweet and sour?

Many have claimed that harm violations are uniquely connected to the emotion of anger, and that purity violations are uniquely connected to the emotion of disgust.  On this view, anger shouldn’t correlate with or influence moral judgments about purity violations, and disgust shouldn’t correlate with or influence moral judgments about harm violations.  In a recently published review with Kristen Lindquist and Kurt Gray, we examined the literature for evidence of these specificity claims.  We did not find support for them: instead, disgust and anger each relate to judgments about both harm and purity violations.

So what drives the relationship between emotions and moral judgment?  It may be that specific emotions don’t matter as much as feeling generally bad or “worked up.”  For instance, in a recent set of studies, people were made to feel one of a variety of emotions—including disgust, fear, sadness, and excitement—and then they judged harm and purity violations.  Compared to a neutral control condition, all of the emotion inductions made people judge moral violations more harshly, regardless of the induced emotion or the type of morality being judged.  What mattered instead was a more basic dimension that underlies specific emotions, called arousal: how activated and worked up you feel.  This finding has interesting implications for everyday moral judgment.  If being aroused and worked up increases moral condemnation, then your judgments may be unusually lenient when activation is low and unusually harsh when it is high, so you may want to reconsider making any important moral decisions right after waking up (when you feel barely activated), after going jogging (when you feel highly activated), or after that third cup of coffee (when you feel really highly activated).

This finding is consistent with a broader perspective on the mind called constructionism: mental states like emotions and moral judgments are built from more basic parts, just like a cake emerges from a recipe that combines flour, sugar, and eggs. For emotions and moral judgments, these ingredients include affect (how good/bad and aroused you feel) and your knowledge and memories.  Constructionism has been applied to explain emotions (for review, see here) and moral judgments (for review, see here).  Disgust and anger feel different, as do purity and harm, which is why it may seem natural for them to pair up with one another.  But research suggests that they don’t, and that basic ingredients like arousal may be equally, if not more, important in understanding emotions, morality, and the relationship between the two.  Returning to the taste bud analogy, distinct “moral taste buds” are not uniquely responsive to certain emotions.  Morality may be a matter of taste, but understanding the relationship between emotions and morality may benefit from constructionism as a new approach.


photo credit: https://www.flickr.com/photos/reisgekki/

Empathy is limited, if you want it to be

Daryl Cameron

This year, nearly 60,000 undocumented, unaccompanied children have crossed the southern border into the United States, creating a humanitarian crisis and fueling intense political debate. Should these children be granted moral and legal rights? Should we have empathy for their suffering?

When faced with crises this large, people often think of Mother Teresa’s line: “If I look at the mass, I will never act.” Large numbers of victims seem overwhelming and hard to think about, and are often treated as a cold statistic. This response seems to reveal a capacity limit on empathy: empathy is stronger for a single, identifiable victim (such as Baby Jessica, trapped in the well) than for large groups of victims (such as the thousands of border children). We appear to be numb to numbers. This deficit is striking because many people think that they would feel more empathy as the numbers increase, and that they have an obligation to do so. Empathy seems to fail when it is needed the most.

Recently, many authors—including psychologist Paul Bloom, philosopher Jesse Prinz, and columnist David Brooks—have argued that because empathy is biased toward identifiable victims, it should not be trusted when making moral, legal, and policy decisions. As put by Bloom, “empathy is narrow; it connects us to particular individuals, real or imagined, but is insensitive to numerical differences and statistical data.” As put by Prinz, “we contribute more to a neighbor in need than to the thousands ravaged by a distant tsunami or the millions who die from starvation or disease… in making policy, we would be better off ignoring empathy.”

But what if the limits of empathy aren’t set in stone? A growing number of studies suggest that seemingly fixed limits on empathy are malleable, and that lapses of empathy reflect motivated choices to avoid it. Empathy itself may not be the problem, as its critics suggest; the real problem may lie in the choices people make to avoid empathy.

First, it is important to define empathy. When psychologists discuss limits of empathy, they typically mean “emotional empathy”: vicarious sharing of others’ experiences (i.e., “feeling with”—if you’re upset, I get upset). Empathy can lead to many responses, including compassion: an other-focused emotion that motivates pro-social behavior (i.e., “feeling for”—if you’re upset, then I want to relieve your suffering). Critics often focus on the limits of empathy while suggesting that compassion is a good thing; however, the bias toward single victims emerges for both empathy and compassion. I have termed this effect “compassion collapse,” but it occurs for empathy as well.

With those definitions in mind, much work reveals that limits in empathy may be motivated, not fixed. My favorite anecdote to explain this idea draws upon the famous commercial by the Society for the Prevention of Cruelty to Animals, starring Sarah McLachlan. The two-minute video (which you can choose to watch here) sets McLachlan’s rendition of “In the Arms of the Angel” to images of abused puppies and kittens. Although I strongly support animal rights, I find it difficult to watch the commercial and sometimes turn it off to avoid being emotionally exhausted by the suffering. It is precisely because I care about their welfare that the emotional cost is so high. Even Sarah McLachlan has said that she changes the channel when her commercial comes on to avoid being overwhelmed by her emotions.

This is an example of motivated emotion regulation: changing the situation to avoid costly empathy. But anecdotes are not data, and many psychology experiments support this idea. Two decades ago, one set of studies showed that people avoid hearing appeals for help that induce high empathy if they anticipate that helping will be costly. More recently, I examined whether motivated emotion regulation could explain compassion collapse. If people predict more emotion as the numbers increase, this may create concerns about financial and emotional costs, leading to empathy avoidance. In one experiment, we had participants read about one or eight child refugees in Darfur. Half of the participants expected to donate money and the other half did not. When participants expected to donate, they tended to show more compassion for one victim than for eight; but when this cost was removed, they felt more compassion for eight victims than for one. Removing the motivation to avoid empathy flipped the compassion collapse. We also found that compassion collapse only emerged for skilled emotion regulators, suggesting that the effect depends on the ability to regulate emotion. Empathy and compassion seem insensitive to large numbers, but this may not reflect a capacity limit: instead, it may reflect strategic choices to regulate emotions.

In his article, Bloom notes that “experiencing others’ pain leads to exhaustion and burnout.” This emotional cost could motivate empathy regulation, and indeed, doctors appear to regulate empathy to avoid such costs. My lab is currently exploring how exhaustion costs lead us to dehumanize others. As put by Iowa Writers’ Workshop alumna Leslie Jamison in her response to Bloom, “if burnout and exhaustion are the dangers of too much empathy, then abstraction is the danger of too little.”

A motivated perspective on empathy is gaining traction in the field. Narcissists and psychopathic offenders tend to show less empathy for others, but instructing them to empathize reduces these deficits. What people think about empathy also matters: people who believe that empathy can be cultivated—as opposed to being fixed—exert more effort to feel empathy. This work parallels findings on self-control: whereas some claim that self-control is a limited-capacity resource, others reveal that self-control limits only emerge for people who believe that self-control is limited, suggesting that motivation has a role.

In short, let’s not blame empathy for the motivated choices that we make to avoid it. In some situations, empathy may only be as limited as we want it to be.