
Ethics of Artificial Intelligence

The May 28, 2015 issue of Nature was largely devoted to artificial intelligence.  Of note to readers of this blog was a short set of comments from four contributors on the ethics of AI.  While none of the writers argued that “robots are (or will be) people, too,” so to speak, all agreed that robots will be increasingly complex machines that pose issues such as:

  • Coexistence–Creating a need for procedures and understandings for living alongside robots, which will “complement human beings, not supplant them.”  Robots will likely always have “significant limitations” relative to humans and “will need to learn when to ask for help.”  Gee.
  • Justice–Ensuring that the benefits of robotics and AI are broadly available, not reserved for a wealthy elite.  For example, sophisticated pattern-recognition systems may come to make medical diagnoses better than doctors can.  But their inner workings will be complex, opaque to clinicians, and far less intuitive than, say, iterative Bayesian inference (a toy contrast appears after the quotation below).  The resulting black box will make the outputs harder to communicate and, indeed, harder to accept.
  • Communication in forming policy—science fiction and AI enthusiasm, it is argued, greatly overstate what can actually be done.  The public does not understand the technical issues, and the research community is either uninterested in educating the public or despairs of the value of trying.  Researchers should be more engaged.
  • Lethal Autonomous Weapons Systems (LAWS)—a huge near-term concern.  While many countries at least say they have sworn off developing such things, building one would require only combining several capabilities that already exist.  It is all too likely that a LAWS would kill indiscriminately and be (over)used in policing, not just in warfare.  LAWS could search and destroy but could not follow the Geneva Conventions.  In the view of Stuart Russell of the University of California, Berkeley:

The capabilities of autonomous weapons will be limited more by the laws of physics — for example, by constraints on range, speed and payload — than by any deficiencies in the AI systems that control them.
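
To make the black-box contrast above concrete: the following is not from the Nature commentary, just a minimal Python sketch, with hypothetical prevalence, sensitivity, and specificity values, of the kind of single-step Bayesian update whose every intermediate quantity a clinician can inspect and explain.

    # A single Bayesian update for a diagnostic test: the kind of
    # transparent, stepwise reasoning clinicians can inspect and explain.
    # All numbers below are hypothetical illustrations, not clinical values.

    def posterior_probability(prior, sensitivity, specificity):
        """P(disease | positive test) via Bayes' theorem."""
        true_positives = prior * sensitivity               # P(D) * P(+|D)
        false_positives = (1 - prior) * (1 - specificity)  # P(~D) * P(+|~D)
        return true_positives / (true_positives + false_positives)

    # Hypothetical disease with 1% prevalence, tested with a
    # 90%-sensitive, 95%-specific assay.
    p = posterior_probability(prior=0.01, sensitivity=0.90, specificity=0.95)
    print(f"P(disease | positive) = {p:.3f}")  # prints ~0.154

A deep pattern-recognition system, by contrast, returns a diagnosis without exposing any such intermediate quantities, which is exactly why its outputs are harder to communicate and to accept.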

The views, opinions and positions expressed by these authors and blogs are theirs and do not necessarily represent those of the Bioethics Research Library and Kennedy Institute of Ethics or Georgetown University.