Bioethics News

What the experts think of malign superintelligence

Skynet is not keen on those pesky humans

Will you live long enough to see, and perhaps become a slave to, super-intelligent artificial intelligence? Oxford philosophy professor Nick Bostrom has often made headlines with predictions that you might.

“Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb,” he writes. “We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound.”

His book Superintelligence: Paths, Dangers, Strategies was a New York Times bestseller last year, endorsed by prominent figures like Tesla boss Elon Musk and Bill Gates.

But what do experts in artificial intelligence think of the philosopher’s predictions?

In MIT Technology Review, Oren Etzioni, a professor of computer science at the University of Washington, surveyed members of the American Association for Artificial Intelligence. They were sceptical: about 25% thought that superintelligence would never happen, and 92% in all thought that it was beyond the foreseeable horizon. Some of their comments were not flattering:

“Way, way, way more than 25 years. Centuries most likely. But not never.”

“We’re competing with millions of years’ evolution of the human brain. We can write single-purpose programs that can compete with humans, and sometimes excel, but the world is not neatly compartmentalized into single-problem questions.”

“Nick Bostrom is a professional scare monger. His Institute’s role is to find existential threats to humanity. He sees them everywhere. I am tempted to refer to him as the ‘Donald Trump’ of AI.”

This article is published by Michael Cook and BioEdge under a Creative Commons licence.

The views, opinions and positions expressed by these authors and blogs are theirs and do not necessarily represent that of the Bioethics Research Library and Kennedy Institute of Ethics or Georgetown University.