
I always thought that our first set of posts on Emotion News would focus on the history of emotion science, or on why the science of emotion matters for the average Joe or Jane’s daily life. At a recent meeting, though, Kristen and I discussed the Facebook “emotion manipulation” debacle, which was still surging across the internet after more than a week in the news, and realized that we had different views about its importance for emotion science. So we figured we’d make our inaugural blog posts about it, hopefully setting the tone for our blog: emotion science matters for everyone; we don’t always agree on the how or why; and it’s important to have a forum to discuss these issues.
Many of the issues with Facebook’s study on “emotional contagion” have been reviewed in detail elsewhere. In brief, they range from concerns that the conclusions about emotion spreading via social media are overblown to concerns that manipulating the emotional information in people’s Facebook feeds was unethical. It would take pages to detail them all, so I’ve decided to focus on one aspect of the ethical complaint: were participants in the Facebook study properly informed about the experiment?
Facebook, and others, have argued that agreeing to its data use policy constitutes “informed consent”. Informed consent is the permission that scientists get from people to conduct an experiment with (or on) them (or the permission that clinicians get to provide medical treatment in a hospital or clinic setting). The rules vary a bit from institution to institution and from nation to nation, but informed consent procedures typically give people an idea of what they’re getting into: a general overview of the study or procedure, some information about its purpose, and, almost always, the explicit option to end participation at any time without consequence. Informed consent information is required to be clearly written in plain language. In cases where there is concern that potential participants may not understand the consent information, scientists are typically required to discuss all of it with them.
To be clear, informed consent is not required for all data. The panels that review the ethical implications of studies, called Institutional Review Boards, sometimes waive the requirement for informed consent when the impact of a study is deemed minimal, when sensitive data will not be collected, or when the procedures are comparable to things people would normally do on a day-to-day basis, among other reasons. Further, as people living in the digital age, we generate a lot of data: we click around on the internet, information about our salaries and demographics is recorded by the government, and even information about our health ends up in digital archives. Scientists can typically use these data troves to test their hypotheses. Access to a data source is typically granted via an institution (either the college, university, or agency at which the scientist works, or the one that holds the data), but as an individual who has generated data points, you may never be informed about a specific hypothesis test being run on “your” data. The question is whether the Facebook study fits into any of these categories of research. Some argue yes, some argue no.
Informed consent is almost always required in cases where scientists are substantially manipulating some aspect of human experience. And that is what Facebook claims to have done (although the jury is still out on whether the manipulation was in fact substantial). Given that, it is not clear that the data use policy is sufficient to constitute actual informed consent.
Users of Facebook agree to a data use policy that basically says Facebook can use the data they generate (posts, likes, comments, and so on) as it wishes. Many users agreed to that policy well before the actual experiment, and it’s likely that many did not read it completely. While the latter is arguably the individual’s problem, there is growing concern that many usage policies (called End User License Agreements, or EULAs) are simply too long to read: keeping up with every agreement you encounter would take, literally, months. If companies are creating EULAs that are too long to read, knowing that people are not reading them, do those agreements count as informed consent? Further, because the data use agreement may have been completed long before the experiment, users had no way of knowing when the experiment would take place and therefore no ability to opt out (which could have been as easy as not opening Facebook during the experiment).
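A rough back-of-the-envelope calculation makes the point; the numbers here are illustrative assumptions, not measurements. Suppose a typical EULA or privacy policy runs about 10,000 words, an average adult reads about 250 words per minute, and a person encounters one new or updated agreement per day across the websites, apps, and services they use:

365 agreements × 10,000 words ÷ 250 words per minute ≈ 14,600 minutes ≈ 243 hours ≈ 30 eight-hour workdays

That’s more than a month of full-time reading per year, just to know what you’re “consenting” to.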
While we typically focus on the informed consent procedures that happen before people complete experiments, how people are informed about experiments after their data have been collected also counts. In emotion science, it is sometimes, even often, the case that we don’t tell the whole truth and nothing but the truth during informed consent. We might tell you that you’ll be listening to music and then completing a few questionnaires about who you are, when we are actually using the music to induce a positive or negative mood and using the questionnaires to measure whether your mood changed. We might even tell you a completely made-up story about what you’re doing and why (called a “cover story”). These procedures are used because what you know about a study can actually bias how you respond. But at the end of the study we come clean, in what is called a “debriefing”. We give you more information about the study, why you completed the procedures you did, and even why a cover story was required. Some debriefings also give participants the option to have their data removed from the archive now that they know the true purpose of the study. Publishing a paper full of findings, as Facebook did, does not constitute a debriefing.
The primary success of the Facebook study may be that it has gotten scientists and the public talking about these issues. Since the dawn of the internet, we’ve been creating a lot of data, and as the cost of storage falls, collecting and archiving that data over the long term becomes possible. It’s time to think seriously about how we inform people about the ways their data are being used, and about what ethical principles will guide the design of large internet studies in the future. Especially if we plan to manipulate emotions.