Would an emotion by another name look the same?

Kristen Lindquist

In the blink of an eye, people see emotions unfold on others’ faces, and this allows them to successfully navigate the social world. For instance, when we see a scowl begin to unfold on a colleague’s face, we instantly understand the depth of his rage. A brief upturn of a friend’s lips transforms her face into the picture of happiness. Detecting a stranger’s widened eyes and gaping mouth alerts us that something in the environment is not quite right. Indeed, most of us can see these emotions in the others around us with the greatest of ease, as if we are reading words on a page. The clear utility and ease of perceiving facial expressions of emotion have led many prominent researchers to conclude that information on the face is itself sufficient to automatically trigger a perception of “anger,” “happiness,” or “fear.” Yet growing research calls into question the idea that emotion perception proceeds in this simplistic and automatic manner.

My colleagues Lisa Feldman Barrett, Maria Gendron and I have been wondering for some time if emotion perception is perhaps not quite as simple as it seems. We’ve hypothesized that people actually learn to read emotions in other people over time, and that this process in part requires knowledge about different emotion concepts. The idea is that, without knowing the word “anger,” you could never see a scowling person as angry. In a paper recently published in the journal Emotion, my co-authors and I tested this hypothesis in a rare group of patients who have a neurodegenerative disease called semantic dementia. Semantic dementia is caused when cells in areas of the brain that are critical to language die. As brain cells die, patients progressively become unable to understand the meaning of words and unable to use words to categorize the world around them. We wondered if patients with this disorder would still be able to perceive specific emotions on faces, or whether their failure to use and understand the meaning of words would prevent them from understanding the specific meaning of emotional facial expressions.

To test this hypothesis, we gave three patients with semantic dementia a number of pictures of facial expressions and asked them to sort those facial expressions into as many piles as they thought necessary. Notably, the task itself didn’t require words—patients weren’t required to match faces to words, state words out loud, or write down words to label the faces. Instead, patients could freely sort the images into piles based on similarities in their appearance. Pictures included posed facial expressions of individuals who were scowling (angry), frowning (sad), wrinkling their noses (disgusted), widening their eyes (fearful), smiling (happy), or relaxing their facial muscles (neutral). We know that when healthy young adults perform a task like this, they produce roughly six piles for the six facial expressions in the set. Yet because semantic dementia typically affects individuals who are 50 or older, we first asked how a group of healthy older individuals performed on the facial expression sorting task. Much like the younger adults, older adults created six or more piles to represent the six categories of facial expressions in the set of pictures. By contrast, when the patients with semantic dementia performed the sorting task, they didn’t see the faces as instances of specific emotions. Instead, they sorted faces into piles of positive, negative, and neutral expressions. As a testament to this fact, one patient attempted to label his piles (early in the disorder, patients can still use some words, but they increasingly lose this ability over the course of the disease). This patient referred to his piles as people who felt “happy,” “rough” and “nothing.” These were among the very few emotion words that the patient could still use, and he correspondingly sorted faces into piles that reflected those words. These findings suggest that without access to emotion words such as “anger,” “disgust,” and “fear,” individuals can perceive only the most basic meaning in an emotional face—that is, whether the person is expressing something good, bad, or neutral.

These findings are consistent with some of our earlier research showing that temporarily impairing healthy young adults’ access to the meaning of an emotion word impairs their ability to perceive emotion on faces. More broadly, our recent findings have implications for how scientists understand the nature of emotion perception. Rather than treating emotion perception as a simplistic and automatic process that all individuals are equipped to perform, our findings underscore the importance of language in emotion perception. They also suggest that people with impaired language abilities, such as autistic individuals, might have problems not only with verbal communication but also with non-verbal communication. Counter-intuitively, then, an emotion by any other name might not look the same.

photo credit: https://www.flickr.com/photos/erikbenson/

 

Early Explorations of the Final Frontier: The Human Brain

Eliza Bliss-Moreau

Yesterday, the National Institutes of Health (NIH) announced the awarding of $46 million as part of the new BRAIN Initiative (Brain Research through Advancing Innovative Neurotechnologies).  The BRAIN Initiative is a multi-agency funding program aimed at developing the technologies necessary to map the functions of the brain. We’ve learned a lot about the brain over the last century, but there’s so much more to learn that many neuroscientists consider the brain “the final frontier”.  Dr. Francis Collins, Director of the NIH, has likened the BRAIN Initiative to President Kennedy’s race to the moon. In a world of discovery where we use methods like optogenetics, positron emission tomography, magnetic resonance imaging, DREADDs (designer receptors exclusively activated by designer drugs), and electrocorticography (which we’ll discuss in future posts) to understand how areas and circuits of the brain work and how they contribute to emotion, it’s easy to forget how far the neuroscience of emotion has come in the last half-century or so.  And, of course, it’s also easy to forget how far we have to go.

The goal of today’s post is to take a *very* brief walk back down memory lane to remember whence we’ve come (circa early 20th century) and the lessons from the pioneers of the neuroscience of emotion that we should carry with us as we continue to explore the brain.

Before we could look into the human brain using neuroimaging, our knowledge of how the human brain functioned came largely from people with disease or injury. In some cases, people developed tumors that required surgery.  When a tumor was removed, a given brain area was disturbed, and changes in the person’s behavior following surgery could therefore be logically linked to the damage. Sometimes people had strokes or aneurysms that damaged the brain.  Sometimes people had injuries that damaged the brain (e.g., Phineas Gage).  In yet other cases, surgery was performed on the brain to alleviate epilepsy or psychological illness. One important point to keep in mind is that regardless of the cause of damage, studies of these sorts were not studies of the healthy, normal human brain; they were studies of the diseased or injured brain.  It wasn’t until neuroimaging and recording techniques arrived on the scene as methodological tools that we were able to evaluate the healthy human brain.

Damage that occurred because of a tumor, a stroke, or an injury most often crossed multiple anatomical areas somewhat randomly.  This made it challenging to conclude which psychological functions were generated by which brain areas. But damage that occurred to alleviate illness was typically targeted, or what neuroscientists call “focal”. Studying people with this sort of damage gave the pioneers of emotion neuroscience some of the first glimpses into the role of particular brain areas in the generation of emotions. By combining observations from the clinic with animal studies in which comparable brain damage was induced or in which regions of the brain were electrically stimulated (which will be discussed in a future post), the neuroscience of emotion was propelled forward.

These early emotion neuroscientists—men like Cannon, Papez, and MacLean—used their fairly rudimentary tools (by modern standards) to reveal some insights about emotion that still ring true today:

- Emotions don’t live in particular areas of the brain but rather come to be via distributed circuitry throughout the whole brain.
- The expression, experience, and perception of emotion are made possible by slightly different circuitry.
- Emotion comes to be in part via activity in the peripheral nervous system—that is, we “feel” emotions in our bodies.
- Certain brain areas are “hubs” of activity—central areas, much like bus terminals in a big city, where lots of signals arrive and are subsequently transmitted to other areas for further processing.

For examples of classic early papers, see here, here, here, and here.

Over the years, these important messages, which stand the test of time (and modern methods), were often lost in attempts to localize particular emotions to particular neural regions.  [The most pervasive of the localization hypotheses is that the amygdala is the locus of fear.  The hypothesis is so pervasive, and the evidence to support it so lacking, that we’ll take on that idea in another full post.] Localization attempts focused on mapping discrete emotions to discrete neural structures, often relying on poor operationalization of emotion-related variables.

An early scientist might label a phenomenon “rage” without ever defining what rage actually was (by answering questions like: Are we talking about the perception of rage? The expression of rage? The experience of rage? How do we tell the difference between rage and anger? If this phenomenon is being observed in animals, how do we know that it maps onto the human experience of rage?) or indicating a specific way to measure it. Another important, often overlooked point is that changes to emotion observed in these human patients were typically described in diffuse, nonspecific terms—for example, a patient’s psychiatric symptoms might have “improved” following surgery.  Improvement in anxiety or depression symptomatology was then taken as evidence that particular brain areas removed during psychosurgery were involved in emotion.  Advancing emotion neuroscience requires carefully defining emotion terms and characterizing emotion phenomena in ways that can be systematically measured.

Despite these caveats, the lessons from early studies of the emotional brain are powerful and not to be ignored as we enter the era of the BRAIN Initiative—careful experimentation and measurement can yield impressive gains in knowledge, even with fairly rudimentary tools.

 Photo credit:  https://flic.kr/p/off1YF