By Shray Ambe
This post was written as part of a class assignment by students who took a neuroethics course with Dr. Rommelfanger in Paris in the summer of 2016.
My name is Shray Ambe, and I am a rising senior at Emory University. I am a Neuroscience and Behavioral Biology major pursuing a career in medicine. Outside the classroom, I help organize the booth for Emory's Center for the Study of Human Health at the Atlanta Science Festival Expo each year, and I volunteer at the Emory Autism Center and in the Radiology Department at Emory University Hospital.
At the 2016 Neuroethics Network in Paris, France, bioethicist and philosopher John Harris gave a lecture titled “How Smart Do We Want Machines to Be?” Discussing the potential impacts of artificial intelligence (AI), Harris stated, “it doesn’t matter how smart they are; obviously the smarter the better.” But is smarter AI really “obviously” better?
Renowned American inventor Ray Kurzweil has described the rise of AI as the beginning of a “beautiful new era” in which machines will have the insight and patience to solve outstanding problems in nanotechnology and spaceflight, improve the human condition, and allow us to upload our consciousness into an immortal digital form, thus spreading intelligence throughout the cosmos. Kurzweil’s views extol the virtues of such technology and its seemingly endless potential to enhance the human race. However, they also raise concerns that such technology could not only be detrimental to the human condition but also put humanity’s very existence at risk.
Nick Bostrom, a philosopher at the University of Oxford and author of Superintelligence, lays out his concerns about increasingly capable AI in that book.
The views, opinions, and positions expressed by these authors and blogs are theirs alone and do not necessarily represent those of the Bioethics Research Library, the Kennedy Institute of Ethics, or Georgetown University.