The Future of Life Institute launched an open letter last week, calling for “research on how to make AI [Artificial Intelligence] systems robust and beneficial.” This follows warnings from a bevy of experts, including physicist Stephen Hawking (who spoke out last May and December) and technology entrepreneur Elon Musk, who warned in October:
I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. … I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.
Coming from a high-tech entrepreneur like Musk, dire language like this deserves, and has received, attention (not all of it supportive). Musk not only signed the open letter but immediately donated $10 million to the Future of Life Institute. The Institute was founded in March 2014 as a volunteer-run organization with some very high-profile advisors: two Nobel prizewinners (Saul Perlmutter and Frank Wilczek), some rich entrepreneurs (including Musk), a couple of celebrities (Morgan Freeman and Alan Alda) and a number of top-notch academics (including Hawking, George Church, Stuart Russell, Nick Bostrom and Francesca Rossi).
The letter has attracted thousands of signatories. Over 5,000 are listed on the website, including many notable AI researchers and other academics. There are over 50 from Google, 20 connected with Oxford University, 15 with Harvard, 15 with Berkeley, 13 with Stanford — you get the picture — and several associated with Singularity University (but not Ray Kurzweil, popularizer of the notion that “the singularity” — the moment when AI surpasses human intelligence — is near, and now a director of engineering at Google).
The views, opinions and positions expressed by these authors and blogs are theirs and do not necessarily represent those of the Bioethics Research Library and Kennedy Institute of Ethics or Georgetown University.