Bioethics News

Brace yourself, humanity!

With rapid developments in artificial intelligence technology, academics and industry leaders are warning of the existential threat posed by autonomous AI systems. 

Cosmologist Stephen Hawking, Tesla CEO Elon Musk and Microsoft co-founder Bill Gates have all recently cautioned against developing AI entities that have ‘interests that conflict with that of homo sapiens’.

At the 2015 Zeitgeist conference in London last week, Hawking warned that “Computers will overtake humans with AI at some point within the next 100 years.” “When that happens,” he said, “we need to make sure the computers have goals aligned with ours.”

“Our future is a race between the growing power of technology and the wisdom with which we use it”, he added.

Elon Musk fears that the development of artificial intelligence may be the biggest existential threat humanity faces. And in a Reddit Q&A session earlier this year, Bill Gates said he was “in the camp that is concerned about super intelligence”.

For these thinkers, it is crucial that there be adequate regulatory oversight of artificial intelligence companies. Hawking has called for increased transparency from AI development firms so that we can ensure AI never grows beyond humanity’s control.

Yet many tech leaders remain sceptical about the possibility of developing autonomous AI machines.

A lead article in The Economist this month examined the topic of AI, with the author arguing that “even if the prospect of what Mr Hawking calls ‘full’ AI is still distant, it is prudent for societies to plan for how to cope.”

That edition of The Economist also featured a useful article summarising the state of the art in AI research.

The views, opinions and positions expressed by these authors and blogs are theirs and do not necessarily represent those of the Bioethics Research Library and Kennedy Institute of Ethics or Georgetown University.