Bioethics Blogs

Hide your face?

A start-up claims it can identify whether a face belongs to a high-IQ person, a good poker player, a terrorist, or a pedophile. Faception uses machine learning to generate classifiers that signal whether a face belongs in one category or not. Essentially, facial appearance is used to predict personality traits, types, or behaviors. The company claims to have already sold technology to a homeland security agency to help identify terrorists. This does not surprise me at all: governments are willing to buy remarkably bad snake oil. But even if the technology did work, it would be ethically problematic.

Face interpretation

The most obvious ethical problem is that faces remain the same whether or not you have committed a crime. Presumably actually becoming a terrorist does not suddenly warp the face: the detector ought to have gone off as soon as the person reached adulthood. A face also does not tell what side somebody is on: people who would be terrorists in one country might be law enforcement in another. I suspect that what Faception actually does is look at facial expression, building on the post-9/11 attempts at detecting “malintent” – a generally underwhelming domain that seems to boil down to looking for nervous, stressed people.

That leads to a much more serious problem: false positives. Because actual terrorists are an extremely small fraction of any screened population, even a detector that catches real terrorists 100% of the time while triggering on only 1% of innocent people will flag an enormous number of innocent people – and nearly everyone it flags will be innocent. This is the classic base-rate fallacy.
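A quick back-of-the-envelope calculation makes the point concrete. The population size and number of terrorists below are hypothetical illustrative numbers, not figures from Faception or any agency; only the 100% detection rate and 1% false-positive rate come from the scenario above.

```python
# Base-rate illustration with hypothetical numbers: a "perfect" detector
# that flags every real terrorist but also 1% of innocent people.

population = 1_000_000      # people screened (assumed for illustration)
terrorists = 100            # actual terrorists among them (assumed)
sensitivity = 1.0           # detector catches 100% of real terrorists
false_positive_rate = 0.01  # flags 1% of innocent people

true_positives = sensitivity * terrorists
false_positives = false_positive_rate * (population - terrorists)

# Positive predictive value: chance a flagged person is actually a terrorist
ppv = true_positives / (true_positives + false_positives)

print(f"Innocent people flagged: {false_positives:.0f}")   # 9999
print(f"Chance a flagged person is a terrorist: {ppv:.2%}")  # 0.99%
```

With these assumed numbers, roughly 10,000 innocent people are flagged for every 100 real terrorists, so a positive result is about 99% likely to be wrong.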

The views, opinions and positions expressed by these authors and blogs are theirs and do not necessarily represent that of the Bioethics Research Library and Kennedy Institute of Ethics or Georgetown University.