
Big Data researchers call for IRB review, based on shaky premises

Jacob Metcalf of the Data & Society Research Institute and Kate Crawford of Microsoft Research, MIT Center for Civic Media, and New York University Information Law Institute (I think those are three different things) want to subject Big Data research to IRB review, at least in universities. Their argument rests on shaky premises.

[Jacob Metcalf and Kate Crawford, “Where Are Human Subjects in Big Data Research? The Emerging Ethics Divide,” Big Data & Society 3, no. 1 (January–June 2016): 1–14, doi:10.1177/2053951716650211.]

Assumptions about assumptions

Metcalf and Crawford understand that the current Common Rule does not require IRB review of research using existing, publicly available datasets. Claiming to be “historicizing extant research ethics norms and regulations” and drawing lessons “from the history and implementation of human-subjects research protections,” they proceed to invent a history of the relevant provisions.

They write,

US research regulations (both the current rules and proposed revisions) exempt projects that make use of already existing, publicly available datasets on the assumption that they pose only minimal risks to the human subjects they document. (1)

And

The Common Rule assumes that data which is already publicly available cannot cause any further harm to an individual. (3)

And

The criteria for human-subjects protections depend on an unstated assumption that we argue is fundamentally problematic: that the risk to research subjects depends on what kind of data is obtained and how it is obtained, not what is done with the data after it is obtained. This assumption is based on the idea that data which is public poses no new risks for human subjects, and this claim is threaded throughout the NPRM.
