I wonder how far Ari and [Edward] Feser would be willing to concede that the AI project might get someday, notwithstanding the faulty theoretical arguments sometimes made on its behalf…. Set aside questions of consciousness and internal states; how good will these machines get at mimicking consciousness, intelligence, humanness?
Allow me to come at this question by looking instead at the big-picture view you explicitly asked me to avoid — and forgive me, readers, for approaching this rather informally. What follows is in some sense a brief update on my thinking on questions I first explored in my long 2009 essay on AI.
The big question can be put this way: Can the mind be replicated, at least to a degree that will satisfy any reasonable person that we have mastered the principles that make it work and can control them? A comparison AI proponents often bring up is that we’ve recreated flying without replicating the bird — and in the process figured out how to do it much faster than birds do. This point is useful for focusing AI discussions on the practical. But unlike many of those who make this comparison, I think most educated folk would recognize that the large majority of what makes the mind the mind has yet to be mastered and magnified in the way that flying has, even if many of its defining functions have been.