In the blink of an eye, people see emotions unfold on others’ faces, and this allows them to successfully navigate the social world. For instance, when we see a scowl begin to form on a colleague’s face, we instantly understand the depth of his rage. A brief upturn of a friend’s lips transforms her face into the picture of happiness. Detecting a stranger’s widened eyes and gaping mouth alerts us that something in the environment is not quite right. Indeed, most of us can see these emotions in the people around us with the greatest of ease, as if we are reading words on a page. The clear utility and apparent ease of perceiving facial expressions of emotion has led many prominent researchers to conclude that information on the face is itself sufficient to automatically trigger a perception of “anger,” “happiness,” or “fear.” Yet growing research calls into question the idea that emotion perception proceeds in this simple and automatic manner.
My colleagues Lisa Feldman Barrett, Maria Gendron, and I have been wondering for some time whether emotion perception is perhaps not quite as simple as it seems. We’ve hypothesized that people actually learn to read emotions in other people over time, and that this process in part requires knowledge about different emotion concepts. The idea is that without knowing the word “anger,” you could never see a scowling person as angry. In a paper recently published in the journal Emotion, my co-authors and I tested this hypothesis in a rare group of patients who have a neurodegenerative disease called semantic dementia. Semantic dementia is caused when cells die in areas of the brain that are critical to language. As brain cells die, patients progressively become unable to understand the meaning of words and unable to use words to categorize the world around them. We wondered whether patients with this disorder would still be able to perceive specific emotions on faces, or whether their inability to use and understand the meaning of words would prevent them from understanding the specific meaning of emotional facial expressions.
To test this hypothesis, we gave three patients with semantic dementia a number of pictures of facial expressions and asked them to sort those expressions into as many piles as they thought necessary. Notably, the task itself didn’t require words—patients weren’t asked to match faces to words, say words out loud, or write down words to label the faces. Instead, patients could freely sort the images into piles based on similarities in their appearance. The pictures showed individuals posing facial expressions: scowling (angry), frowning (sad), wrinkling their noses (disgusted), widening their eyes (fearful), smiling (happy), and relaxing their facial muscles (neutral). We know that when healthy young adults perform a task like this, they produce roughly six piles for the six facial expressions in the set. Yet because semantic dementia typically affects individuals who are 50 or older, we first asked how a group of healthy older individuals performed on the facial expression sorting task. Much like the younger adults, the older adults created six or more piles to represent the six categories of facial expressions in the set of pictures.

By contrast, when the patients with semantic dementia performed the sorting task, they didn’t see the faces as instances of specific emotions. Instead, they sorted faces into piles of positive, negative, and neutral expressions. As a testament to this, one patient attempted to label his piles (early in the disorder, patients can still use some words, but they increasingly lose this ability over the course of the disease). This patient referred to his piles as people who felt “happy,” “rough,” and “nothing.” These were among the very few emotion words the patient could still use, and he correspondingly sorted the faces into piles that reflected these words.
These findings suggest that without access to emotion words such as “anger,” “disgust,” “fear,” etc., individuals can only perceive the most basic of meaning in an emotional face—that is, whether the person is expressing something good, bad, or neutral.
These findings are consistent with our earlier research showing that temporarily impairing healthy young individuals’ access to the meaning of an emotion word impairs their ability to perceive emotion on faces. More broadly, our recent findings have implications for how scientists understand the nature of emotion perception. Rather than treating emotion perception as a simple, automatic process that all individuals are equipped to perform, our findings underscore the importance of language in emotion perception. They suggest that people with impaired language abilities, such as autistic individuals, might have problems not only with verbal communication but also with non-verbal communication. Counter-intuitively, an emotion by any other name might not look the same.
photo credit: https://www.flickr.com/photos/erikbenson/