May 19, 2016
(MIT Technology Review) – The possibility that a malevolent artificial intelligence might pose a serious threat to humankind has become a hotly debated issue. Various high-profile individuals, from the physicist Stephen Hawking to the tech entrepreneur Elon Musk, have warned of the danger, which is why the field of artificial intelligence safety is emerging as an important discipline. Computer scientists have begun to analyze the unintended consequences of poorly designed AI systems: systems created with faulty ethical frameworks, or ones that do not share human values.
The views, opinions and positions expressed by these authors and blogs are theirs and do not necessarily represent those of the Bioethics Research Library and Kennedy Institute of Ethics or Georgetown University.