Web Roundup: Data, Safety, and Bias by Lily Shapiro

Many people probably saw the news that Facebook allegedly privileges left-leaning stories in its trending news section, a story broken by Gizmodo at the beginning of this month. The BBC builds on this report to explore how what we see online (and the various ways in which it gets tailored more and more specifically to us) affects our behavior. “[I]t is worth remembering that the designers of the technology we use have different goals to our own – and that, whether our intercessor is an algorithm or an editor, navigating it successfully means losing the pretense that there’s any escape from human bias.”

In a long article in The Verge, “The Secret Rules of the Internet,” authors Catherine Buni and Soraya Chemaly detail the hidden world of content moderation on the Internet. They outline a history of moderation and the varying levels of seriousness with which different content hosts approach it (Pinterest versus Reddit versus Facebook, etc.), raising questions along the way about the role of moderation in politics (when does the newsworthiness of a video depicting violence outweigh general guidelines prohibiting violence? what role do moderators play, wittingly or not, in social and political movements?), about free speech and the legal implications and histories of moderation, about the unpaid and unrecognized labor that users themselves do to moderate content, and about the undervaluing and off-shoring of the grueling, mentally taxing work of moderation.

The Guardian has an article about the secrecy surrounding research on online harassment and bullying, in which the author argues that Victorian social movements for food safety can teach us something about how to make the Internet a safer place for everyone, while acknowledging that “[t]he underlying causes of online harassment can’t be solved by detecting and banning a few toxic commenters.”

In fact, a number of recent articles have worked to illuminate the point that data itself is, of course, not unbiased, and that decisions made on the basis of big data frequently end up further entrenching historical and systemic inequalities.

The views, opinions, and positions expressed by these authors and blogs are theirs and do not necessarily represent those of the Bioethics Research Library and Kennedy Institute of Ethics or Georgetown University.