What the Present Debate About Autonomous Weapons is Getting Wrong

Author: Michael Robillard

Many people are deeply worried about the prospect of autonomous weapons systems (AWS). Many of these worries are merely contingent, having to do with issues like unchecked proliferation or potential state abuse. Several philosophers, however, have advanced a stronger claim, arguing that there is, in principle, something morally wrong with the use of AWS independent of these more pragmatic concerns. Some have argued, explicitly or tacitly, that the use of AWS is inherently morally problematic in virtue of a so-called ‘responsibility gap’ that their use necessarily entails.

We can summarise this thesis as follows:

  1. In order to wage war ethically, we must be able to justly hold someone morally responsible for the harms caused in war.
  2. Neither the programmers of an AWS nor its military implementers could justly be held morally responsible for the battlefield harms caused by AWS.
  3. We could not, as a matter of conceptual possibility, hold an AWS itself morally responsible for its actions, including its actions that cause harms in war.
  4. Hence, a morally problematic ‘gap’ in moral responsibility is created, thereby making it impermissible to wage war through the use of AWS.

This thesis is mistaken, for the simple reason that either the AWS is an agent in the morally relevant sense or it is not.

If it isn’t, then one of two things follows. Either premise 2 is false, and moral responsibility falls on the persons within the causal chain, to the extent that they knew or should have known the harm to which they were contributing and the degree to which they could have done otherwise; or premise 2 is true but vacuous, because the harm was the result of a genuine accident, for which no one is blameworthy. If, on the other hand, the AWS is an agent in the morally relevant sense, then premise 3 is false, since the AWS itself could justly be held morally responsible for the harms it causes. On either horn of the dilemma, no morally problematic responsibility gap arises.
