Written by Darlei Dall’Agnol
I recently attended the course Drones, Robots and the Ethics of Armed Conflict in the 21st Century at the Department for Continuing Education, Oxford University, which, by the way, is offering a wide range of interesting courses for 2015-16 (https://www.conted.ox.ac.uk/). Philosopher Alexander Leveringhaus, a Research Fellow at the Oxford Institute for Ethics, Law and Armed Conflict, spoke on “What, if anything, is wrong with Killer Robots?” Ex-military speaker Wil Wilson, a former RAF Regiment Officer now working as a consultant in Defence and Intelligence, was originally announced to speak on “Why should autonomous military machines act ethically?” but later changed his title, which I will comment on soon. The atmosphere of the course was very friendly and the discussions illuminating. In this post, I will simply reconstruct the main ideas presented by the main speakers and offer my impressions on this important issue at the end.
In his presentation “What, if anything, is wrong with Killer Robots?” (for those interested, a forthcoming paper), Alexander Leveringhaus began by defining a robot as an artificial device, an embodied artificial intelligence, that can sense its environment and purposefully act on or in that environment. Although this may not be a clear-cut definition, most military robots, for example the Dragonrunner robot, the Alpha Dog, the Predator drone (MQ-1), Taranis, Iron Dome, and the Sentry robot, fit it. He then argued that the main objection to using autonomous weapons in an armed conflict is not that there is an accountability gap between the programming and operating of these tools and what they will effectively do; the real issue, instead, is whether the imposition of such risk can be justified.
The views, opinions and positions expressed by these authors and blogs are theirs and do not necessarily represent those of the Bioethics Research Library and Kennedy Institute of Ethics or Georgetown University.