
Killer Robots, Human Responsibility, and a Reason to Hope

[Continuing coverage of the UN’s 2015 conference on killer robots. To see all posts in this series, click here.]

Things go wrong, with technology and with people. Case in point: this year, I arrived in Geneva on time after a three-leg flight, but last year's trip was a surreal adventure. United's hopelessly overworked agents failed to tell me, as I waited for the flight, that my first destination airport was closed, then lied about the unavailability of alternative flights, all while struggling with a dysfunctional computer system — followed by a plane change due to mechanical problems, and then another missed connection.

So yes, things go wrong, with technology and with people, and even more so with vast systems of people enmeshed with machines and operating close to some margin determined by the unstable equilibria of markets, military budgets, and deterrence. Sometimes, one man loses his mind and scores lose their lives; other times, one keeps his sanity and the world is saved from a peril it hardly knew.

On September 26, 1983, the Soviet infrared satellite surveillance system indicated an American missile launch; the computers had gone through all their "28 or 29 security levels," and it fell to Soviet air-defense lieutenant colonel Stanislav Petrov to decide that it had to be a false alarm, given the small number of missiles the system was indicating. This incident occurred just three weeks after another military-technical-human screw-up had led to the destruction of Korean Air Lines flight 007 by a Soviet fighter, during one of the most tense years of the Cold War, and at a time when the Kremlin seriously feared an American first strike.
