Our Futurisms colleague Charlie Rubin had a smart, short piece over on the Huffington Post a couple weeks ago called “We Need To Do More Than Just Point to Ethical Questions About Artificial Intelligence.” Responding to the recent (and much ballyhooed) “open letter” about artificial intelligence published by the Future of Life Institute, Professor Rubin writes:
One might think that such vagueness is just the result of a desire to draft a letter that a large number of people might be willing to sign on to. Yet in fact, the combination of gesturing towards what are usually called “important ethical issues,” while steadfastly putting off serious discussion of them, is pretty typical in our technology debates. We do not live in a time that gives much real thought to ethics, despite the many challenges you might think would call for it. We are hamstrung by a certain pervasive moral relativism, a sense that when you get right down to it, our “values” are purely subjective and, as such, really beyond any kind of rational discourse. Like “religion,” they are better left un-discussed in polite company….
No one doubts that the world is changing and changing rapidly. Organizations that want to work towards making change happen for the better will need to do much more than point piously at “important ethical questions.”
This is an excellent point. Raising ethical questions without attempting to answer them is a common rhetorical tactic. I can’t count how many bioethics talks I’ve heard over the years that do just that — gesture at the questions and then move on.
The views, opinions, and positions expressed by these authors and blogs are theirs and do not necessarily represent those of the Bioethics Research Library and Kennedy Institute of Ethics or Georgetown University.