Can Disgust Be Anger for Kids?

Sherri Widen

Imagine you and a 2-year-old child are watching TV.  In the show, a man discovers that his soup contains sheep’s eyeballs.  You think to yourself, “Wow, that guy is really disgusted!”  The child says, “Wow, that guy is really mad!”  You are confident that, in fact, the guy is disgusted.  Does that mean that the child is wrong?  Most people assume that children and adults understand emotions in very similar ways.  But as this example shows, that may not be the case.

Although children begin using emotion words in conversation before the age of 2 and have a wide emotion vocabulary before the age of 5, studies of children’s use of emotion words find that they initially have two broad emotion categories: one for positive emotions and one for negative ones.  For example, 2-year-olds have been asked to say how people with different facial expressions feel.  The 2-year-olds used angry for facial expressions of anger, disgust, and sadness but not for facial expressions of happiness, surprise, or fear.  So, for young children, angry is a much broader category than it is for adults.  Older preschoolers are less likely to use angry for sadness facial expressions, but it is not until children are at least 9 years old that they stop using angry for the disgust facial expression.

How do children go from two broad emotion categories (positive vs. negative) to more specific, adult-like categories?  In answering this question, it is helpful to think of emotions as “scripts,” which include causes, consequences, and so on: for disgust, a person smells something foul (cause), wrinkles his or her nose (facial expression), covers his or her nose (behavior), and tries to get away from the source (consequence).  Which of these parts of the script might help children first understand that their broad negative emotion category is composed of distinct emotions?  From among all the causes, consequences, behaviors, and so on, children need to notice that some things tend to co-occur.  For disgust, causes may provide that initial clue (eating or smelling something awful).  By 3 years, children know both the causes and the words for disgust, but it is not until they are much older that they connect the facial expression to the other parts of the disgust script.  In contrast, for sadness, children have connected the causes, consequences, facial expressions, and labels of the script by 4 years of age.

As children move from the preschool years to middle childhood, they learn about a wider variety of emotions, such as embarrassment, pride, and shame.  Just as younger children initially understand emotions like sadness, anger, and disgust in terms of positive vs. negative emotions, older children initially understand embarrassment, pride, and shame as part of emotion categories that they already have.  Children (4-10 years) were asked to say how people felt when shown facial expressions or told brief stories describing situations that cause these emotions.  Younger children labeled anger, contempt, disgust, and shame as angry, and they labeled embarrassment as sad.  Gradually, children distinguished among the emotions, and the oldest children used the expected label for all emotions (except contempt, which they labeled as angry).

So, when the 2-year-old in the sheep’s-eyeball-soup example we began with said that the man was angry, she was not wrong.  Within her understanding of emotions, the man was experiencing a negative emotion, and her word for negative emotions is angry.  This response represents her current level of emotion understanding, but it is also an opportunity for you to teach her something new – what disgust is.  A variety of school-based interventions work to explicitly teach children about emotions and to increase their emotion vocabulary and social skills.  Children are ready to learn about emotions, and children who participate in these interventions develop stronger social and emotional skills and earn better grades than children who do not.

Photo credit: sourced from Flickr via Creative Commons license https://flic.kr/p/4KDsmY

How feelings guide threat perception

Jolie Wormwood

“I was scared to death. The last thought I had go through my mind when I pulled the trigger, and I’ll never forget this… was that I was too late. I was too late. And because of that, I was gonna get killed. Worse, my (partner) was gonna get killed” –Officer Bron Cruz

“[He] had the most intense aggressive face. The only way I can describe it, it looks like a demon, that’s how angry he looked.” – Officer Darren Wilson


Lately, policy makers, law enforcers, psychologists, and lay people alike have wondered what makes officers shoot and kill young men who are ultimately unarmed. The above quotes, both from police officers who shot and killed unarmed youths in 2014, suggest that emotions play a role in the decision to use deadly force in the field. Of course, it is easy to imagine that having to make a split-second, life-or-death decision would cause a person to experience heightened negative emotions such as anger or fear. Yet in my research, my colleagues and I hypothesized that the reverse might also be true: that heightened emotions can causally impact how we perceive and act in potentially life-threatening situations.

In a set of experiments from 2010, Dave DeSteno and I examined whether the experience of particular emotions influenced threat perception.  We first elicited different emotional states, like anger and happiness, by having participants recall and write about times in their lives when they remembered experiencing a given emotion very strongly. This made participants re-experience those specific emotions. We then had participants complete a simple video game involving the detection of guns. In each trial of the video game, participants were shown a series of background scenes (e.g., a park, a subway station), and then a Caucasian male appeared in the final background scene holding either an everyday object (e.g., a wallet, a soda can) or a handgun. Participants were given less than a second to identify whether each individual was holding a gun or a neutral object.

Results from five separate experiments revealed that participants made to experience anger exhibited biased threat perception: they were more likely to perceive that the target individual was holding a gun as opposed to a non-threatening object. By contrast, participants in a more neutral mood did not exhibit any bias in threat perception. Importantly, this effect was not related to experiencing just any heightened emotional state. Threat perception performance was not influenced by the experience of a highly activated positive emotion (happiness), a weakly activated negative emotion (sadness), or even another highly activated negative emotion (disgust). It appears that the emotion being experienced needs to be applicable to judgments about potential violent or aggressive threats in order for participants to draw upon that feeling as a source of information for the judgment.

We also demonstrated that the effect of anger on threat perception bias was driven by anger’s impact on participants’ expectancies for encountering threats. That is, participants made to feel angry actually expected that they would encounter more guns in the threat perception task than did participants in a more neutral mood, and it was this heightened expectancy for guns that drove the observed differences in perception.
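
One way to picture this expectancy account is as a toy Bayesian observer, in which the prior probability of encountering a gun stands in for expectancy. This is a minimal, purely illustrative sketch (the function and the prior values are hypothetical, not the model or numbers from our experiments): when a split-second glimpse favors neither interpretation, an observer who expects more guns ends up judging “gun” more often.

```python
# Toy Bayesian observer: illustrative only, not the model from the studies.
# A stronger prior expectation of guns shifts ambiguous glimpses toward "gun".

def p_gun(likelihood_ratio: float, prior_gun: float) -> float:
    """Posterior probability that the object is a gun.

    likelihood_ratio: P(glimpse | gun) / P(glimpse | harmless object);
                      1.0 means the brief glimpse is completely ambiguous.
    prior_gun: the observer's expectation that a gun will appear (hypothetical).
    """
    prior_odds = prior_gun / (1.0 - prior_gun)
    posterior_odds = likelihood_ratio * prior_odds
    return posterior_odds / (1.0 + posterior_odds)

ambiguous_glimpse = 1.0  # evidence that favors neither interpretation

print(f"neutral observer: P(gun) = {p_gun(ambiguous_glimpse, prior_gun=0.25):.2f}")  # 0.25
print(f"angry observer:   P(gun) = {p_gun(ambiguous_glimpse, prior_gun=0.50):.2f}")  # 0.50
```

On this toy account, correcting the prior (for example, by telling participants how many guns they should actually expect) pulls the judgment back down, which parallels the intervention described next.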

So how can this research be applied to the real world? Our findings suggest that emotions play a critical role as police officers and other law enforcers weigh life-or-death decisions in the field. Fortunately, our findings also suggest a viable means of intervention. We found that we could eliminate the effects of anger on biased perception by telling angry participants how many guns they should actually expect to see in the experiment. When participants knew that the average target wasn’t likely to be carrying a gun, they were less likely to let their anger bias their perception.  This suggests that providing law enforcement personnel with better information about the actual likelihood of threats in their area may help them make snap decisions accurately, increasing efficacy while reducing collateral harm.


Photo credit: https://www.flickr.com/photos/bnorthern/115132826/

Not everyone sees emotion like you do

Maria Gendron

If you have ever traveled abroad without speaking the language, or encountered someone doing so, you are probably familiar with the difficulty of communicating. In most of these instances, people will still try to get meaning across (and it can sure get weird). Unhelpfully, people will sometimes raise their voice or speak very slowly.  Perhaps more helpfully, people will sometimes attempt non-verbal communication. This might be as simple as a gesture (thumbs-up!), a facial expression (a smile), or even a vocalization (laughter). But how much of the original intent actually comes across? People well versed in the ins and outs of “cultural competency” might be aware that overt gestures don’t always carry the same meaning across cultures. For example, a thumbs-up is a positive sign in the United States and parts of Europe, but it is an offensive sign in other parts of the world, such as Iran and countries in West Africa. But might it be the case that our facial expressions (frowns, scowls, etc.) and vocalizations (sighs, shrieks, etc.) also don’t translate?

The question of how much non-verbal behaviors (such as facial expressions or vocalizations) communicate across cultures has been a topic of scientific interest for many decades. Some researchers have proposed that the nuance of heartache or a deep-seated fear can be communicated non-verbally, regardless of whether the two people share a culture or language. This is a universality view of emotion expression and perception, and it is based on classic research conducted in the 1960s and 1970s that is taught widely in psychology and beyond. But newer research suggests that Western emotion categories such as anger and fear might not be as biologically basic as was previously thought. This research prompted my colleagues Lisa Feldman Barrett, Debi Roberson, and Marieta van der Vyver and me to take another look at this question.

The challenge with testing the universality of emotions is that most people in the world are connected now. We are exposed to media from around the globe, and there has been a particular proliferation of Western media outward. To be extremely meticulous in testing questions of universality, a small subset of researchers have opted to conduct testing in relatively “remote” societies. So my colleagues and I did just that. We traveled to a remote area of northwestern Namibia, near the border with Angola, to test people from the Himba ethnic group. The Himba are semi-nomadic pastoralists who live largely outside the political and economic systems of Namibia. Key for us was that most people from the Himba culture are not exposed to other groups.

What we wanted to know was simple: Do people from the Himba ethnic group perceive the same emotional message from Western facial expressions and vocalizations that people tested in the United States do? What we found is that much of the emotional message is lost across cultures, but not all. Himba perceivers understood whether a Western facial expression or vocalization was signaling a pleasant (happy) or unpleasant (anger, disgust, fear, sadness) state. We also found that Himba perceivers could often understand whether someone was worked up or not (what we call “arousal” in the science of emotion). Himba perceivers tended to see facial expressions in their own unique ways, however. In a task where Himba perceivers were asked to sort pictures of facial expressions into piles by feeling, they tended to group together faces that were “behaving” the same way. For example, all the people who appeared to be looking at something were grouped together. Sometimes these people were making a Western expression of fear, but sometimes they were making a different Western expression. When Himba participants were asked to identify the meaning of voices, we found a similar pattern: the Himba perceived that vocal expressions meant that people were calling out for help, or playing. This research revealed that not only do non-verbal “signals” lack a preserved universal meaning, but the way people make sense of non-verbals also varies culturally.  Whereas Himba perceivers focus on behavior (they think someone is “looking at something”), perceivers from the United States focus on internal emotions (they think someone feels “happy” or “fearful”).

So what are the potential consequences of cultural differences in emotion perception? These findings certainly suggest that we should think twice about “importing” Western non-verbal behaviors into other cultural contexts and assuming they will “work”. While this may seem like a trivial concern for those of us who don’t often interact with people from a broad array of cultures, let me remind you why it matters. To do that, I’ll leave you with this. The psychologist Triandis speculated that the start of the Gulf War was likely based on a cultural misunderstanding of emotion, succinctly described below by Carnevale & Choi:

“In January 1991, James Baker, then the United States Secretary of State, met with Tariq Aziz, the foreign minister of Iraq. They met in an effort to reach an agreement that would prevent a war. Also present in the room was the half-brother of Saddam Hussein, whose role included frequent calls to Hussein with updates on the talks. Baker stated, in his standard calm manner, that the US would attack if Iraq did not move out of Kuwait. Hussein’s half-brother heard these words and reported that ‘the Americans will not attack. They are weak. They are calm. They are not angry. They are only talking.’ Six days later Iraq saw Desert Storm and the loss of about 175,000 of their citizens.”

How we perceive emotions in others could be the difference between peace and war.

Photo credit: Dr. Maria Gendron

Would an emotion by another name look the same?

Kristen Lindquist

In the blink of an eye, people see emotions unfold on others’ faces, and this allows them to successfully navigate the social world. For instance, when we see a scowl begin to unfold on a colleague’s face, we instantly understand the depth of his rage. A brief upturn of a friend’s lips transforms her face into the picture of happiness. Detecting a stranger’s widened eyes and gaping mouth alerts us that something in the environment is not quite right. Indeed, most of us can see these emotions in the others around us with the greatest of ease, as if we are reading words on a page. The clear utility and ease of perceiving facial expressions of emotion have led many prominent researchers to conclude that information on the face is itself sufficient to automatically trigger a perception of “anger,” “happiness,” or “fear.” Yet growing research calls into question the idea that emotion perception proceeds in this simplistic and automatic manner.

My colleagues Lisa Feldman Barrett, Maria Gendron, and I have been wondering for some time if emotion perception is perhaps not quite as simple as it seems. We’ve hypothesized that people actually learn to read emotions in other people over time, and that this process in part requires knowledge about different emotion concepts. The idea is, without knowing the word “anger,” you could never see a scowling person as angry. In a paper recently published in the journal Emotion, my co-authors and I tested this hypothesis in a rare group of patients who have a neurodegenerative disease called semantic dementia. Semantic dementia is caused when cells die in areas of the brain that are critical to language. As brain cells die, patients progressively become unable to understand the meaning of words and unable to use words to categorize the world around them. We wondered if patients with this disorder would still be able to perceive specific emotions on faces, or whether their failure to use and understand the meaning of words would prevent them from understanding the specific meaning of emotional facial expressions.

To test this hypothesis, we gave three patients with semantic dementia a number of pictures of facial expressions and asked them to sort those facial expressions into as many piles as they thought necessary. Notably, the task itself didn’t require words: patients weren’t required to match faces to words, to state words out loud, or to write down words to label the faces. Instead, patients could just freely sort the images into piles based on similarities in their appearances. The pictures included posed facial expressions of individuals who were scowling (angry), frowning (sad), wrinkling their noses (disgusted), widening their eyes (fearful), smiling (happy), or holding relaxed, neutral faces. We know that when healthy young adults perform a task like this, they produce roughly six piles for the six facial expressions in the set. Yet because semantic dementia typically impacts individuals who are 50+ years old, we first asked how a group of healthy older individuals performed on the facial expression picture sort task. Much like the younger adults, the older adults created six or more piles to represent the six categories of facial expressions in the set of pictures. By contrast, when the patients with semantic dementia performed the sort task, they didn’t see the faces as instances of specific emotions. Instead, they sorted faces into piles of positive, negative, and neutral expressions. As a testament to this fact, one patient attempted to label his piles (early on in the disorder, patients can still use some words, but they increasingly lose the ability to do so over the course of their disease). This patient referred to his piles as people who felt “happy,” “rough,” and “nothing.” These were among the very few emotion words that the patient could still use, and he correspondingly sorted the faces into piles that reflected these words. These findings suggest that without access to emotion words such as “anger,” “disgust,” “fear,” etc., individuals can only perceive the most basic meaning in an emotional face: whether the person is expressing something good, bad, or neutral.

These findings are consistent with some of our older research showing that temporarily impairing healthy young individuals’ access to the meaning of an emotion word impairs their ability to perceive emotion on faces. More broadly, our recent findings have implications for how scientists understand the nature of emotion perception. Rather than treating emotion perception as a simplistic and automatic process that all individuals have the capacity to perform, our findings underscore the importance of language in emotion perception. They suggest that people with impaired language abilities, such as autistic individuals, might have problems not only with verbal communication but also with non-verbal communication. Counter-intuitively, then, an emotion by any other name might not look the same.

Photo credit: https://www.flickr.com/photos/erikbenson/