Bioethics News

Could self-driving cars with no human intervention raise ethical difficulties?

According to the New York Times (15 May 2015), Google is experimenting with a fleet of 100 electric self-driving cars designed to operate with complete autonomy (they have now traveled 1 million miles). The user travels as a passenger and cannot even act as a co-pilot; the only things they can control are the “START” button and a red button called “E-STOP”, which stops the car in extreme cases. There are no other controls – no steering wheel, brake pedal or accelerator. The car is directed through a smartphone application which, similar to an SMS, receives an instruction with the destination chosen by the user. The author of the article states that the idea of self-driving cars is far from being a reality, but that its purpose is to address an as yet unresolved issue: “the car itself might need to be programmed with a basic code of ethics, which presents some interesting dilemmas”.

By way of example, the author cites an article in WIRED magazine (29 June 2015), in which the writer, Jason Millar, raises some of the ethical issues presented by self-driving cars, given that these robot cars could be the cars of the future (owing to their ecological characteristics) and because, it seems, they could avoid accidents caused by carelessness or human error.

A possible ethical dilemma for self-driving cars

Millar presents an example of a possible ethical dilemma, which he calls the “Tunnel Problem”, proposing the following: “You are travelling along a single-lane mountain road in an autonomous car that is fast approaching a narrow tunnel.

The views, opinions and positions expressed by these authors and blogs are theirs and do not necessarily represent those of the Bioethics Research Library and Kennedy Institute of Ethics or Georgetown University.