This blog is about medical ethics, I understand that. But sometimes it’s useful to stand back and examine the assumptions behind what constitutes “ethics” and “moral behavior.” Most of us would, I believe, consider moral and ethical decisions the domain of people rather than machines, but as computers and the software that runs on them become ever more powerful, some developments are worth keeping an eye on: autonomous machines that are expected to make moral and ethical decisions.
$7.5 million is a drop in the very large bucket that could make such a scenario possible, yet the very existence of that bucket should wake us up and prompt us to think through, and sort out in our own heads, what is “right” and “moral” and “ethical.”
There have been many complex arguments spanning positions from one extreme, “of course computers will be sentient (and soon!),” to the other, “machines can never achieve the awareness that humans have,” and every position in between.
If you have already thought about what you mean by these words, excellent. If not, now is a good time to wrap your head around this nontrivial topic. Can an autonomous robot make moral decisions? Thoughtful public policy will depend on how we answer.
The views, opinions, and positions expressed by these authors and blogs are theirs and do not necessarily represent those of the Bioethics Research Library and Kennedy Institute of Ethics or Georgetown University.