Killer Robots: How could a ban be verified?

[Continuing coverage of the UN’s 2015 conference on killer robots. See all posts in this series here.]

Here’s my latest dispatch from the second major diplomatic conference on Lethal Autonomous Weapons Systems, or “killer robots” as the less pretentious know them. (A UN employee, for whom important-sounding meetings are daily background noise, approached me in the cafeteria to ask where she could get a “Stop Killer Robots” bumper sticker like the one I had on my computer, and said she’d have paid no attention to the goings-on if that phrase hadn’t caught her eye.) The conference continued yesterday with what those who make a living out of attending such proceedings like to describe as “the hard work.”

Wishful Thinking on Strategy

Expert presentations in the morning session centered on the reasons why militaries are interested in autonomous systems in general and autonomous weapons systems in particular. As Heather Roff of the International Committee for Robot Arms Control (ICRAC) put it, this is not just a matter of assisting or replacing personnel and reducing their exposure to danger and stress; militaries are also pursuing these systems as a matter of “strategic, operational, and tactical advantage.”

Roff traced the origin of the current generation of “precision-guided” weapons to the doctrine of “AirLand Battle” developed by the United States in the 1970s, responding then to perceived Soviet conventional superiority on the European “central front” of the Cold War. Similarly, Roff connected the U.S. thrust toward autonomous weapons today with the doctrine of “AirSea Battle,” responding to the perceived “Anti-Access/Area Denial” capabilities of China (and others).

The views, opinions, and positions expressed by these authors and blogs are theirs and do not necessarily represent those of the Bioethics Research Library and Kennedy Institute of Ethics or Georgetown University.