Big Data, Commercial Research, and the Protection of Subjects

by Elisa A. Hurley, PhD, Executive Director

Much has been written in the past few months, pro and con, about the results of the Facebook emotional contagion study published in June in the Proceedings of the National Academy of Sciences. The study manipulated the News Feeds of nearly 700,000 unknowing Facebook users for a week in January 2012 by adjusting Facebook’s existing ranking algorithm to over-select for either more positive or more negative language in posts. At the end of the week, the results showed that these users tended to follow the emotional trend of their manipulated feed, using more positive or more negative language in their own posts depending on their study grouping. The study also found that lowering the emotional content of posts overall led users with affected News Feeds to post fewer words in their own status updates.
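For readers curious about what such a manipulation looks like mechanically, the sketch below is a purely illustrative Python example, not Facebook's actual ranking code. The function and condition names (`sentiment_score`, `rank_feed`, `more_positive`, `more_negative`) are hypothetical; the sketch only shows the general idea of skewing a feed so that posts of one emotional valence are over-represented.

```python
import random

def sentiment_score(text):
    """Toy stand-in for a word-count sentiment measure.
    Positive return value = more positive words; negative = more negative words."""
    positive = {"happy", "great", "love", "good"}
    negative = {"sad", "awful", "hate", "bad"}
    words = text.lower().split()
    return sum(w in positive for w in words) - sum(w in negative for w in words)

def rank_feed(posts, condition, omit_probability=0.5):
    """Illustrative re-ranking: in the 'more_positive' condition, some posts with
    negative language are omitted so the remaining feed skews positive, and vice
    versa. This mirrors the study's design only in spirit."""
    kept = []
    for post in posts:
        score = sentiment_score(post)
        if condition == "more_positive" and score < 0 and random.random() < omit_probability:
            continue  # drop some negative posts
        if condition == "more_negative" and score > 0 and random.random() < omit_probability:
            continue  # drop some positive posts
        kept.append(post)
    return kept

# Example: a user assigned to the 'more_positive' arm tends to see a more positive feed.
feed = ["Such a great day!", "I hate Mondays.", "Lunch was good.", "Awful traffic again."]
print(rank_feed(feed, condition="more_positive"))
```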

The public reaction to the revelation of the study in June was swift, loud, and dramatic. I myself was surprised by the uproar and still am not sure what to make of it.

Those who have written about the study in scholarly and popular media have voiced differing opinions about whether adequate informed consent for the study was provided via Facebook’s Terms of Service, as well as whether informed consent was even needed. Further debate has centered on whether the study required IRB review. And still other commentary has zeroed in on the merits of the research itself. As James Grimmelmann, a law professor at the University of Maryland, said (quoted in The Atlantic, June 2014):

[The Facebook study] failed [to meet certain minimum standards]…for a particularly unappealing research goal: We wanted to see if we could make you feel bad without you noticing.
