Is Facebook getting you down?

Kristen Lindquist

What better topic for a first blog post about emotions than a discussion of how the internet may be shaping them? By now, you’ve probably heard about the Facebook study, published in the Proceedings of the National Academy of Sciences, that purportedly shifted people’s emotions by altering the content of their news feeds. The paper produced a lot of uproar (mostly on Facebook). People have variably decried it as unethical, as not novel, or as not actually evidence that people’s emotions were shifted.

If you haven’t read it, here’s a précis: The authors selected a group of Facebook users and selectively reduced either the number of positive posts displayed in their news feeds (e.g., removed posts like “I’m so happy I got the new job!” “We’re so glad to welcome our new baby!”) or the number of negative posts displayed in their news feeds (e.g., removed posts like “I really hate when some as*hole takes your parking spot at work!” “People disgust me!”). The authors then measured how much positive or negative information users in each condition posted themselves, using an automated dictionary that codes words as positive or negative. What they found was that people who saw less positive stuff posted less positive stuff and more negative stuff, and people who saw less negative stuff posted less negative stuff and more positive stuff. Now, if the authors really did shift people’s emotions, this is cool, but maybe not so surprising. It’s like saying that the people around you affect your mood. Think of that whiny co-worker you want to avoid because life just seems a little more terrible when he’s around. It’s the same effect. Of course it’s made more interesting by the fact that it occurred on a massive scale and through—gasp—the internet!
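
For the technically curious, this kind of “automated dictionary” coding is simple to sketch. The published study used the LIWC word-count software; the minimal Python illustration below captures the basic logic only, and the tiny word lists and the code_post helper are my own toy stand-ins, not the actual tool or its lexicon. A post counts as more positive the larger the share of its words that appear on a “positive” list, and likewise for negative.

```python
import re

# Toy word lists for illustration only; real lexicons like LIWC's are far larger.
POSITIVE_WORDS = {"happy", "glad", "love", "great", "welcome"}
NEGATIVE_WORDS = {"hate", "disgust", "terrible", "awful", "annoyed"}

def code_post(text):
    """Return the share of words in a post that are positive or negative."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0, 0.0
    positive = sum(word in POSITIVE_WORDS for word in words)
    negative = sum(word in NEGATIVE_WORDS for word in words)
    return positive / len(words), negative / len(words)

print(code_post("I'm so happy I got the new job!"))  # (0.125, 0.0)
print(code_post("People disgust me!"))               # (0.0, 0.333...)
```

Note that a scheme like this only counts words; it knows nothing about sarcasm, negation (“not happy”), or what anyone actually feels, which is exactly the limitation discussed next.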

But other questions remain about the study and its findings. I’ve been asked: did they really change people’s emotions? Unfortunately, this question is quite the quagmire in the science of emotion. It turns out that there is no single measure in science that can tell you exactly what someone is feeling. You could hook them up to a heart rate monitor, measure the sweat on their skin, measure their respiration, and put them in a brain scanner, and you still couldn’t know exactly what they were feeling, beyond the fact that they were feeling something and maybe whether they were feeling generally activated vs. sleepy or pleasant vs. unpleasant. Thus, despite all our technology, the best way to know what someone is feeling is to ask them. Obviously, hooking up Facebook users around the world to physiological recording devices was not an option for the authors, and they didn’t ask their unknowing participants how they felt, either. So all we know from the experiment is that seeing fewer positive or negative posts changed the way that people used emotion words themselves. This could be the result of a change in participants’ perceptions of norms (e.g., “it’s not cool to humblebrag on my Facebook page if my friends don’t do it”). Or it could be “emotional contagion,” as the authors suggest—in the absence of positive information on Facebook feeds around the world, people’s days were just a bit grayer.

So, in sum, why did this study get so much attention if it showed that—guess what—the people around us affect our emotions (or at least the emotion words we use in our Facebook posts)? It’s because people felt played. They felt taken advantage of, as if Big Brother were toying with their emotions. Yet what people fail to realize is that their emotions are always being played. Every advertiser, politician, journalist, author, and salesperson in the world is constantly trying to play on our emotions, for good or ill. Emotions are involved in every single moment of your waking life and are shifted by myriad unseen influences, not least of which is the Facebook news feed we (choose to) stay glued to. At least Facebook technically told you it reserves the right to manipulate you (although see Eliza’s post for the broader ethical considerations at stake when this happens in an experimental context). The same can’t be said for the used car salesman who relies on emotion-based tactics to get you to walk off the lot with a lemon. So if Facebook is getting you down, wait a minute and someone else will shift your mood.

Informed Consent and Debriefing Matter

(especially for emotion science)

Eliza Bliss-Moreau

I always thought that our first set of posts on Emotion News would focus on the history of emotion science, or on why the science of emotion matters for a regular Joe or Jane’s daily life. While attending a recent meeting, though, Kristen and I discussed the Facebook “emotion manipulation” debacle, which was still surging on the internet after more than a week in the news, and realized that we had different views about its importance for emotion science. So we figured we’d make our inaugural blog posts about it, hopefully setting the tone for our blog: emotion science matters for everyone; we don’t always agree on the how or why; and it’s important to have a forum to discuss these issues.

Many of the issues with the “emotional contagion” study done by Facebook have been reviewed in detail elsewhere. In brief, they range from concerns that the conclusions about emotion spreading via social media are overblown to concerns that the manipulation of emotional information in people’s Facebook feeds was unethical. It would take pages to detail them all, so I’ve decided to focus on one aspect of the ethical complaint: were participants in the Facebook study properly informed about the experiment?

Facebook, and others, have argued that agreeing to its data use policy constitutes “informed consent.” Informed consent is the permission that scientists get from people to conduct an experiment with (or on) them (or the permission that clinicians get to provide medical treatment in a hospital or clinic setting). Rules vary a bit from institution to institution and nation to nation, but in general, informed consent procedures give people an idea of what they’re getting into: a general overview of the experimental study or procedure, some information about its purpose, and almost always the explicit option to end participation at any time without any consequence. Informed consent information is required to be clearly written in plain language. In cases where there might be concern about potential participants’ understanding of the consent information, scientists are typically required to discuss all of the information with them.

To be clear, informed consent is not required for all uses of data. The panels of people that review the ethical implications of studies, called Institutional Review Boards, sometimes waive the requirement for informed consent when the impact of the study is deemed minimal, when sensitive data will not be collected, or when the procedures are comparable to things people would normally do on a day-to-day basis, among other reasons. Further, as people in the digital age, we generate a lot of data—we click around on the internet, information about our salaries and demographics is recorded by the government, and even information about our health ends up in digital archives. Scientists can typically use these data troves to test their hypotheses. Access to data sources is typically granted via an institution (either the college, university, or agency at which the scientist works, or the one that holds the data), but as an individual who has generated data points, you may never be informed about a specific hypothesis being tested on “your” data. The question is whether the Facebook study fits into any of these categories of research. Some argue yes, some argue no.

Informed consent is almost always required in cases where scientists substantially manipulate some aspect of human experience. And that is what Facebook claims to have done (although the jury is still out on whether its manipulation of experience was in fact substantial). Given that, it is not clear that the data use policy is sufficient to constitute actual informed consent.

Users of Facebook agree to a data use policy that basically says Facebook can use the data you generate (posts, likes, comments, and so on) as it wishes. Many users agreed to the data use policy well before the actual experiment, and it’s likely that many did not read it completely. While the latter issue is the responsibility of individuals, there is growing concern that many usage policies (called End User License Agreements, or EULAs) are simply too long to read—reading all of the ones we agree to would literally take months. If companies are creating EULAs that are too long to read, knowing that people are not reading them, do they count as informed consent? Further, because data use agreements may have been completed long before the experiment, users didn’t know when the experiment would take place and therefore had no ability to opt out (which could have been as easy as not opening Facebook during the experiment).

While we typically focus on the informed consent procedures that happen before people complete experiments, how people are informed about experiments after their data have been collected also counts. In emotion science, it is sometimes, even often, the case that we don’t tell the whole truth and nothing but the truth during informed consent procedures. We might tell you that you’ll be listening to music and then completing a few questionnaires about who you are, when we are actually using the music to induce a positive or negative mood and using the questionnaires to measure whether your mood changes. We might even tell you a completely made-up story about what you’re doing and why (called a “cover story”). These procedures are used because what you know about a study can actually bias how you respond. But at the end of the study, we come clean in what is called a “debriefing.” We give you more information about the study, why you completed the procedures that you did, and even why a cover story was required. Some debriefings also give participants the option to have their data removed from the archive once they know the true purpose of the study. Publishing a paper full of findings, as Facebook did, does not constitute a debriefing.

The primary success of the Facebook study may be that it has gotten scientists and the public talking about these issues. Since the dawn of the internet, we’ve been creating a lot of data, and as the cost of storage falls, collecting and archiving those data over the long term has become possible. It’s time to think seriously about how we inform people about how their data are being used, and about what sorts of ethical principles will guide the design of large internet studies in the future, especially if we plan to manipulate emotions.