HBO’s new show Westworld has been getting a lot of attention. As the AV Club pointed out, it was HBO’s highest-rated premiere since ‘the good True Detective’ (i.e., since season one). The first episode involved a robot with human-like intelligence going through a truly horrible day to cater to the whims of actual humans, and then having her memory erased so she could do it again and again.
Among other (surely more interesting) properties, the show functions as an extended philosophical thought experiment. Thought experiments probe our imagination and our intuitions to reveal what we think about important philosophical issues, and how we think about them. One’s reactions to Westworld are philosophically illuminating.
Consider this question: is phenomenal consciousness morally valuable? Don’t worry too much about what it means to say something is morally valuable. Does an entity’s possession of consciousness motivate you to treat it in certain ways – to care about it and whether things are going well for it, to worry about how it is doing, to avoid hurting it? One way to explore this question is to imagine how you would feel about and treat the entity if you discovered it lacked consciousness. If you found out it was just a robot, would your care for the thing begin to dissolve? It probably would, and a show like Westworld plays with this reaction. The common reaction of horror to the revelation that some poorly treated machine is actually conscious is predicated on the deeply held belief that consciousness is morally valuable.
The views, opinions and positions expressed by these authors and blogs are theirs and do not necessarily represent those of the Bioethics Research Library and Kennedy Institute of Ethics or Georgetown University.