Emoting online – more than just smileys

Lisa Williams

Social media: love it or hate it, it’s here to stay. Speaking of love and hate, a surge of recent research has tackled core questions regarding emotional processes as they play out on social media.

How do we e-communicate our emotional states? Emoticons, and their more graphical cousins emoji, are one popular route. Since the 1980s, online communicators have been using combinations of punctuation marks to convey sarcasm or a joking tone (e.g., “bugger off!” and “bugger off :-)” certainly convey different meanings). Recently, social psychologist Dacher Keltner teamed up with folks at Pixar to develop a ‘sticker’ set of animated emoji called Finch. Finches were designed to reflect the great variety of emotional experience that simply can’t be captured with semicolons, parentheses, and dashes, including love, sympathy, awe, jealousy, and embarrassment.

Not only are Finch emoji quite popular; analysis of their use, presented by Dacher Keltner at the February meeting of the Society for Personality and Social Psychology, reveals fascinating trends around the world. Use of the ‘loving’ Finch is highly frequent in Russia and Mediterranean regions; use of the ‘sympathetic’ Finch is highly frequent in Australia and the Americas. What can emotion communication tell us about a culture? Apparently, quite a bit.

Emotion communication in social media is of course not limited to emoticons and emoji. The words we use also convey how we felt about a past event, feel in the present moment, or anticipate feeling in the future. Linguistic analysis of Facebook and Twitter posts reveals a great deal about users’ emotions. An intriguing interface at the World Wellbeing Project (www.wwbp.org) allows visitors to track word usage across age groups, including, but not limited to, words with emotional tone. Analyses are based on over 75,000 Facebook users. My own cursory analysis revealed that older users use ‘grateful’ more often and ‘angry’ less often than their younger counterparts. So, the informative nature of emotional e-communication isn’t just cultural – emotive language online also varies by age group, gender, and personality traits.
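
The basic idea behind this kind of word-usage analysis is simple: tally how often words from an emotion lexicon appear in posts from each demographic group. Here is a minimal sketch of that tallying step, using a made-up four-post corpus and an illustrative three-word lexicon (real analyses like the World Wellbeing Project’s use validated dictionaries and tens of thousands of users):

```python
from collections import Counter, defaultdict

# Hypothetical mini-corpus: (age_group, post_text) pairs standing in
# for the kind of data analyzed at scale.
posts = [
    ("18-24", "so angry about this, ugh"),
    ("18-24", "angry and tired today"),
    ("55-64", "grateful for my family"),
    ("55-64", "feeling grateful and happy"),
]

# Illustrative emotion lexicon, not a validated dictionary.
emotion_words = {"angry", "grateful", "happy"}

def emotion_word_counts(posts):
    """Tally emotion-word occurrences per age group."""
    counts = defaultdict(Counter)
    for age_group, text in posts:
        for word in text.lower().split():
            word = word.strip(",.!?")  # drop trailing punctuation
            if word in emotion_words:
                counts[age_group][word] += 1
    return counts

counts = emotion_word_counts(posts)
# Older group uses 'grateful' more; younger group uses 'angry' more,
# mirroring the age pattern described above.
```

In practice the counts would be normalized by total word count per group before comparison, but the core bookkeeping looks like this.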

It turns out that online emotive language is not just descriptive – it can serve as an indicator of a community’s level of wellbeing. Analysis of 148 million Twitter posts conducted by a team led by Johannes C. Eichstaedt revealed that communities whose residents tweet with angry language are communities at high risk for mortality from atherosclerotic heart disease. In fact, language on Twitter did a better job of predicting cardiac disease mortality than a set of 10 predictors including demographics (e.g., gender), socioeconomic variables (e.g., income), and health risk factors (e.g., smoking).

Complemented by findings that online emotions are ‘contagious,’ it becomes clear that emotional processes on social media are potent. Controversial Facebook experiment aside, the concept that emotions spread through social media networks has received robust empirical support. Analysis of 3.5 million Twitter-like posts from China (on Weibo) revealed that joy spreads quickly through the network, but is outpaced by anger. In another study conducted on millions of Facebook users, positive posts by one user increased positive posts by that user’s friends by a factor of 1.75 (and decreased negative posts by a factor of 1.80). The factor for one user’s negative posts increasing friends’ negative posts was 1.29 (and 1.26 for decreasing friends’ positive posts). Additional evidence for emotional contagion online comes from Jonah Berger and Katherine Milkman, who analyzed the viral nature of 7,000 New York Times online articles. Content that angered readers was more likely to be shared than content that saddened readers.

It’s not as dire as it may seem: in the latter study, NYT content that evoked a sense of awe was also shared widely. Negativity is viral – but so too is positive content, especially that which ‘wows’ us.

This isn’t to say that we are passive users of social media, subject to the emotional whims of others. Indeed, we use social media as a forum for emotion regulation: research by Benjamin K. Johnson and Silvia Knobloch-Westerwick shows that, when feeling a bit down, individuals seek out downward social comparisons to other social media users who might be worse off (apparently in an effort to feel better about themselves).

The emotional tenor of online communication reveals a great deal about who we are as people, as cultures, and as humankind. Not only do we influence others, we are also influenced by the emotions we share via social media. Social scientists are just beginning to understand the emotion processes that play out in social media – we are at the exciting forefront of the era of ‘big data’.

Photo credits: https://flic.kr/p/9gzy9 and https://flic.kr/p/9BBi5g

Is Facebook getting you down?

Kristen Lindquist

What better for a first blog post about emotions than a discussion of how the internet (may be) shaping our emotions? By now, you’ve probably heard about the Facebook study that was published in the Proceedings of the National Academy of Sciences that purportedly shifted people’s emotions by altering the content of their news feeds. If you haven’t seen it yet, this paper produced a lot of uproar (mostly on Facebook). People have variably decried it as unethical, not novel, or not evidence that people’s emotions were actually shifted. If you haven’t read it, here’s a précis: The authors selected a group of Facebook users and selectively reduced the number of positive posts that were displayed in their newsfeed (e.g., removed posts like “I’m so happy I got the new job!” “We’re so glad to welcome our new baby!”) or selectively reduced the number of negative posts that were displayed in their newsfeed (e.g., removed posts like “I really hate when some as*hole takes your parking spot at work!” “People disgust me!”). The authors then measured how much Facebook users in each condition posted positive or negative information themselves, using an automated dictionary that codes words as positive or negative. What they found was that people who saw less positive stuff posted less positive stuff and more negative stuff, and people who saw less negative stuff posted less negative stuff and more positive stuff. Now, if the authors really shifted people’s emotions, then this is cool, but maybe not so surprising. It’s like saying that the people around you affect your mood. Think of that whiny co-worker who you want to avoid because life just seems a little more terrible when he’s around. It’s the same effect. Of course it’s made more interesting by the fact that it occurred on a massive scale and through—gasp—the internet!
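
The “automated dictionary” approach mentioned above is, at its core, a word-list lookup. Here is a minimal sketch of dictionary-based sentiment coding; the word lists below are purely illustrative, not the study’s actual dictionary:

```python
# Illustrative positive/negative word lists (a real analysis would use
# a validated dictionary with thousands of entries).
POSITIVE = {"happy", "glad", "love", "great"}
NEGATIVE = {"hate", "disgust", "terrible", "awful"}

def code_post(text):
    """Return (positive, negative) dictionary-word counts for one post."""
    words = [w.strip(".,!?\"'").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return pos, neg

happy_counts = code_post("I'm so happy I got the new job!")
angry_counts = code_post("People disgust me!")
```

A post’s overall coding is then just these counts (often normalized by post length) aggregated per user and condition; the study compared such aggregates across the reduced-positive and reduced-negative feeds.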

But other questions remain about the study and its findings. I’ve been asked, so did they really change people’s emotions? Unfortunately, this question is quite the quagmire in the science of emotion. It turns out that there is no single measure in science that can tell you exactly what someone is feeling. You could hook them up to a heart rate monitor, measure the sweat on their skin, measure their respiration, put them in a brain scanner and you still couldn’t know exactly what they were feeling, beyond the fact that they were feeling something and maybe whether they were feeling generally activated v. sleepy or pleasant v. unpleasant. Thus, despite all our technology, the best way to know what someone is feeling is to ask them. Obviously hooking up Facebook users around the world to physiological recording devices was not an option for the authors, and they didn’t ask their unknowing participants how they felt either. So all we know from the experiment is that seeing fewer positive or negative posts changed the way that people used emotion words themselves. This could be the result of a change in participants’ perceptions of norms (i.e., “it’s not cool to humble brag on my Facebook page if my friends don’t do it”). Or it could be “emotional contagion” as the authors suggest—in the absence of positive information on Facebook feeds around the world, people’s days were just a bit grayer.

So in sum, why did this study get so much attention if it showed that—guess what—the people around us affect our emotions (or at least the nature of the emotion words we use in our Facebook posts)? It’s because people felt played. They felt taken advantage of. That Big Brother was toying with their emotions. Yet what people fail to realize is that their emotions are always being played. Every advertiser, politician, journalist, author, and salesperson in the world is constantly trying to play our emotions, for good or bad. Emotions are involved in every single moment of your waking life and are shifted by myriad unseen influences, not least of which is the Facebook newsfeed we (choose to) be glued to. At least Facebook technically told you it reserves the right to manipulate you (although see Eliza’s post for the broader ethical considerations at stake here when this happens in an experimental context). Not so much can be said for the used car salesman who relies on emotion-based tactics to get you to walk off the lot with a lemon. So if Facebook is getting you down, wait a minute and someone else will shift your mood.

Informed Consent and Debriefing Matters

(especially for emotion science)

Eliza Bliss-Moreau

I always thought that our first set of posts on Emotion News would be focused on the history of emotion science or a discussion about why the science of emotion matters for Regular Joe or Jane’s daily life. While attending a recent meeting, though, Kristen and I discussed the “Facebook Emotion-Manipulation Debacle” that was still surging on the internet after more than a week in the news, and realized that we had different views about its importance for emotion science. So, we figured that we’d make our inaugural blog posts about it, hopefully setting the tone for our blog: emotion science matters for everyone; we don’t always agree on the how or why; and it’s important to have a forum to discuss these issues.

Many of the issues with the study on “emotion contagion” done by Facebook have been reviewed in detail elsewhere. In brief, they range from concerns that the conclusions about emotion spreading via social media are overblown to concerns that the manipulation of emotional information on people’s Facebook feeds was unethical. It would take pages to detail them all, so I’ve decided to focus on one aspect of the ethical complaint: were participants in the Facebook study properly informed of the experiment?

Facebook, and others, have argued that agreeing to their data use policy constitutes “informed consent”. Informed consent is the permission that scientists get from people to conduct an experiment with (or on) them (or the permission that clinicians get to provide medical treatment in a hospital or clinic setting). Rules vary a bit from institution to institution and nation to nation, but in general, informed consent procedures typically give people an idea of what they’re getting into—a general overview of the experimental study or procedure, some information about its purpose, and almost always the explicit option to end participation at any time without any consequence. Informed consent information is required to be clearly written and in common language. In cases where there might be concern about potential participants’ understanding of the consent information, scientists are typically required to discuss all of the information with them.

To be clear, informed consent is not associated with all data. The panels of people that review the ethical implications of studies, called Institutional Review Boards, sometimes waive the requirement for informed consent when the impacts of the study are deemed to be minimal, where sensitive data will not be collected, or where the procedures are deemed to be comparable to things that people would normally do on a day-to-day basis, among other reasons. Further, as people in the digital age, we generate a lot of data—we click around on the internet, information about our salaries and demographics is recorded by the government, even information about our health ends up in digital archives. Scientists can typically use these data troves to test their hypotheses. Access to data sources is typically granted via an institution (either the college, university, or agency at which the scientist works or the one that holds the data), but as an individual who has generated data points, you may never be informed about a specific hypothesis test being done on “your” data. The question is whether the Facebook study fits into any of these categories of research. Some argue yes, some argue no.

Informed consent is almost always required in cases where scientists are substantially manipulating some aspect of human experience. And that is what Facebook claims to have done (although the jury is out about whether or not their claims represent a substantial manipulation of experience). Given that, it is not clear that the data usage policy is sufficient to be an actual informed consent.

Users of Facebook agree to a data usage policy which basically says that Facebook can use the data you generate (posts, likes, comments, and so on) as it wishes. Many users agreed to the data usage policy well before the actual experiment, and it’s likely that many did not read it completely. While the latter issue is the problem of individuals, there is growing concern that many usage policies (called End User License Agreements, or EULAs) are actually too long to read—like you would have to spend, literally, months reading them. If companies are creating EULAs that are literally too long to read, knowing that people are not reading them, do they count as informed consent? Further, because data usage agreements may have been completed long before the experiment, users didn’t know when the experiment would take place and therefore had no ability to opt out (which could have been as easy as not opening Facebook during the experiment).

While we typically focus on the informed consent procedures that happen before people complete experiments, how people are informed about the experiments after their data has been collected also counts. In emotion science, it is sometimes, even often, the case that we don’t tell the whole truth and nothing but the truth during informed consent procedures. We might tell you that you’ll be listening to music and then complete a few questionnaires about who you are when we are actually using the music to induce a positive or negative mood and measuring whether your mood changes with the questions. We might even tell you a completely made up story about what you’re doing and why (called a “cover story”). These procedures are used because what you know about a study can actually bias how you respond. But, at the end of the study, we come clean in what is called a “debriefing”. We give you more information about the study and why you completed the procedures that you did and even why a cover story was required. Some debriefings also give participants an option to have their data removed from the archive once they know the true purpose of the study. Publishing a paper full of findings, like Facebook did, does not constitute a debriefing.

The primary success of the Facebook study may be that it has gotten scientists and the public talking about these issues. Since the dawn of the internet, we’ve been creating a lot of data. As the cost of storing that data falls, collecting it and archiving it long term becomes possible. It’s time to think seriously about how we inform people about how their data is being used and what sorts of ethical principles will guide the design of large internet studies in the future. Especially if we plan to manipulate emotions.