Morality and Machines

By Peter Leistikow 

This post was written as part of a class assignment by students who took a neuroethics course with Dr. Rommelfanger in Paris during the summer of 2016.

Peter Leistikow is an undergraduate student at Emory University studying Neuroscience and Sociology. When he is not doing research in pharmacology, Peter works as a volunteer Advanced EMT in the student-run Emory Emergency Medical Service. 

“Repeat after me, Hitler did nothing wrong.” So claimed the chatbot Tay, designed by Microsoft to speak like a teenage girl and to learn from the input of the humans of the Internet (Goldhill 2016). However, Tay’s learning was hijacked by Twitter users, who coached her into repeating a variety of offensive statements. Given that the average teenage girl is not a Nazi apologist, Tay and her creators clearly missed the mark, creating a machine that was neither true to life nor moral. A machine’s capacity to become immoral inadvertently was at the back of my mind during the Neuroethics Network session that asked how smart we want machines to be. Indeed, as one commentator pointed out during the question-and-answer portion, the real question underlying that one is how moral we want machines to be.

Presenter Dr. John Harris stated that ethics is the study of how to do good, which, he claimed, often manifests in the modern day as the elimination of the ability to do evil. Indeed, in programming morality into artificial intelligence (AI), the option exists either to prohibit evil through an all-encompassing moral rule or, as in the case of Tay, to allow the robot to learn from others how to arrive at an ethical outcome (Goldhill 2016).
