[Continuing coverage of the UN’s 2015 conference on killer robots. See all posts in this series here.]
I mentioned in my first post in this series that last year's meeting on Lethal Autonomous Weapons Systems was extraordinary for the UN body conducting it, in that delegations actually showed up, made statements, and paid attention. What was lacking, though, were high-quality, on-topic expert presentations — other than those of my colleagues in the Campaign to Stop Killer Robots, of course. If Monday's session on "technical issues" is any indication, that sad story will not be repeated this year.
[Embedded video: "Aggressive Maneuvers for Autonomous Quadrotor Flight"]
Berkeley computer science professor Stuart Russell, coauthor (with Peter Norvig of Google) of the leading textbook on artificial intelligence, scared the assembled diplomats out of their tailored pants with his account of where we are in the development of technology that could enable the creation of autonomous weapons. (You can see Professor Russell's slides here.) Thanks to "deep learning" algorithms, the new wave of what used to be called artificial neural networks, "We have achieved human-level performance in face and object recognition with a thousand categories, and super-human performance in aircraft flight control." Of course, human beings can recognize far more than a thousand categories of objects plus faces, but the kicker is that with thousand-frame-per-second cameras, computers can do this with cycle times "in the millisecond range."
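To give a feel for the cycle-time claim, here is a minimal sketch (mine, not Russell's, and nothing like a real deep network): a tiny feed-forward classifier over 1,000 categories in pure Python, with its single forward pass timed. The layer sizes and random weights are arbitrary illustrative assumptions; the point is only that even naive, unoptimized inference completes in milliseconds, so optimized recognition systems running per camera frame are entirely plausible.

```python
# Illustrative only: a tiny two-layer classifier over 1000 categories,
# timed to show millisecond-range inference even in naive pure Python.
import random
import time

random.seed(0)

IN, HIDDEN, CLASSES = 64, 32, 1000  # arbitrary sizes for illustration

# Random weights stand in for a trained model.
W1 = [[random.gauss(0, 0.1) for _ in range(IN)] for _ in range(HIDDEN)]
W2 = [[random.gauss(0, 0.1) for _ in range(HIDDEN)] for _ in range(CLASSES)]

def forward(x):
    """One inference pass: linear -> ReLU -> linear -> argmax."""
    h = [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in W1]
    scores = [sum(w * hi for w, hi in zip(row, h)) for row in W2]
    return max(range(CLASSES), key=scores.__getitem__)

x = [random.random() for _ in range(IN)]
t0 = time.perf_counter()
label = forward(x)
elapsed_ms = (time.perf_counter() - t0) * 1000
print(f"predicted class {label} in {elapsed_ms:.2f} ms")
```

A real recognition pipeline would use a GPU-accelerated convolutional network, but the scaling argument is the same: per-frame decision latency far below human reaction time.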
[Pull quote: "embarrassingly slow, inaccurate, and ineffective"]
After showing a brief clip of Vijay Kumar's dancing quadrotor micro-drones engaged in cooperative construction activities, entirely scheduled by autonomous AI algorithms, Russell discussed what this implied for assassination robots.