Before I start blogging the kickoff of this week’s United Nations meeting on killer robots, a little background is called for, both about the issue and my views on it.
I have worked on this issue in different capacities for many years now. (In fact, I proposed a ban on autonomous weapons as early as 1988, and again in 2002 and 2004.) In the present context, the first thing I want to say is about the Obama administration’s 2012 policy directive on Autonomy in Weapon Systems. It was not so much a decision made by the military as a decision made for the military after long internal resistance and at least a decade of debate within the U.S. Department of Defense. You may have heard that the directive imposed a moratorium on killer robots. It did not. Rather, as I explained in 2013 in the Bulletin of the Atomic Scientists, it “establishes a framework for managing legal, ethical, and technical concerns, and signals to developers and vendors that the Pentagon is serious about autonomous weapons.” As a Defense Department spokesman told me directly, the directive “is not a moratorium on anything.” It’s a full-speed-ahead policy.
[Figure: “What counts as ‘semi-autonomous’?” Top: Artist’s conception of Lockheed Martin’s planned Long Range Anti-Ship Missile in flight. Bottom: The Obama administration would define the original T-800 Terminator as …]
The story of how so many people misinterpreted or were misled by the directive is complicated, and I won’t get into details right now, but basically the policy was rather cleverly constructed by strong proponents of autonomous weapons to deflect concerns about actual emerging (and some existing) weaponry by suggesting that the real issue is futuristic machines that independently “select and engage” targets of their own choosing.
The views, opinions, and positions expressed by these authors and blogs are theirs alone and do not necessarily represent those of the Bioethics Research Library and Kennedy Institute of Ethics or Georgetown University.