Driverless or autonomous cars will almost certainly be commonplace quite soon. Imagine you are sitting in such a car, approaching a tunnel on a single-lane mountain road. A child wanders into the middle of the road, blocking the entrance to the tunnel. How should such cars be programmed to react? Two options are: to keep going and kill the child; or to swerve aside into the tunnel wall and kill the driver.
The tunnel problem was invented by the philosopher Jason Millar. The question, of course, is not what the ‘user’ of the car should do. Nor is it any good suggesting an override function: there may be cases where there isn’t time to react. Millar’s own suggestion is based on an analogy with medical ethics. Those who purchase driverless cars should be permitted to choose their own ‘ethics package’. That suggestion itself rests on his view that there is no ‘right answer’ about what to do in the tunnel case, and that building a particular response into the car would ‘alienate’ users from their moral convictions.
Now Millar is quite clear that he doesn’t mean that anything goes here: he says it would be absurd to allow someone to use a program that swerves only to avoid males. But this raises a question for him: why are people’s own moral commitments relevant only within a certain range? A more parsimonious and elegant view, and I suspect a popular one, would be that there is a right answer in the tunnel case, but we don’t know what it is.
The views, opinions and positions expressed by these authors and blogs are theirs and do not necessarily represent those of the Bioethics Research Library and Kennedy Institute of Ethics or Georgetown University.