I can read minds, and so can you. Unfortunately, we are both pretty bad at it. To be fair, I am not talking about magical, mythical, science fiction capabilities. Instead, I am referring to the low-tech, primordial practice of observing emotional expressions on faces. Most people are very proficient at detecting cardinal, apparent emotions (e.g., disgust, fear, and happiness), but we struggle when it comes to more subtle or covert expressions.
In the 1970s, psychologist Paul Ekman popularized the term “microexpressions” to describe these elusive displays of emotion. Over the past 40 years, his research team has developed training protocols that use annotated headshots to highlight subtle variations in the eyes and mouth. These training programs can be purchased online and allegedly improve everything from lie-detection skills to empathy. But even if the protocols were genuinely effective in controlled laboratory settings, their real-world utility seems limited: we do not observe emotional expressions through isolated snapshots. So despite Ekman’s efforts, our mind-reading deficiencies persist.
Enter machine learning. Over the past month, a slew of articles has discussed machine-learning techniques that can detect human emotions. Unlike Ekman’s system, these approaches do not require individuals to extract minor facial movements from highly complex environments. Instead, they train computers to recognize externally visible facial signals that we often fail to perceive on our own. From cameras in billboards that discern viewers’ expressions, to machines that reveal subtle differences between healthy and depressed patients’ smiles, to algorithms that can decipher concealed emotions, technology is starting to connect the dots for us.
The views, opinions, and positions expressed by these authors and blogs are theirs alone and do not necessarily represent those of the Bioethics Research Library, the Kennedy Institute of Ethics, or Georgetown University.