
August 24, 2017
The idea of autonomous cars has always raised a big question: in the event of a serious crash that involves life-and-death decisions, what should the vehicle do? Cars can certainly be programmed to act as humans intend, but there isn't necessarily a clear course of action for every situation.
That, however, hasn’t stopped German regulators from taking a stance on the issue. Reuters reports that autonomous-car software must be “programmed to avoid injury or death of people at all cost.”
Image: By Grendelkhan – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=47467048
MIT Technology Review
Tags: ai, autonomous, bioethics, computer, killing, neuroethics, trolley problem, utilitarianism, vehicle
The views, opinions and positions expressed by these authors and blogs are theirs and do not necessarily represent those of the Bioethics Research Library and Kennedy Institute of Ethics or Georgetown University.