Researchers at MIT have launched Moral Machine, a web project to help gauge human perspectives on “moral decisions made by machine intelligence.” The project comes in the wake of a new Science study on the complicated tangle of ethics and driverless cars, in which the classic ‘trolley problem’ has been scaled up for new technology. Scientific American, weighing in, writes that real autonomy for new vehicles hinges not on manufacturing issues but on the moral and ethical dilemmas inherent in the technology. Consumer demand is high and climbing. Mainstream discussions, however, continue to black-box the ethical and moral within larger questions about safety systems. The Atlantic traces the driverless car back to the 1920s, when desire was driven by “the promise of improved safety.” Similarly, Volvo’s ongoing Future of Driving survey, while heavy on questions of safety and trust, makes no mention of whether driverless vehicles have ethics or ought to be moral. Today’s news that BMW has secured partnerships with Mobileye and Intel ensures that debates around autonomous vehicles will only intensify.
MIT Technology Review has written about Kevin Esvelt’s campaign to regulate gene drives in order to avoid “doomsday” outcomes. Esvelt’s vision for a safe gene drive is distinctly caught up in moral projects. A safe gene drive, one built around transparency and community input, is “a way to rectify what [Esvelt] considers a larger failing of the universe, which is that evolution itself ‘has no moral compass.’ … Gene drives, by giving humankind the ability to fine-tune the battle for survival, could make the world a more just place.”
The views, opinions, and positions expressed by these authors and blogs are theirs alone and do not necessarily represent those of the Bioethics Research Library and Kennedy Institute of Ethics or Georgetown University.