At first glance, this analysis seems to suggest that outdated views that criminals can be identified by physical attributes are not entirely wrong. However, it may not be the full story. It is interesting that two of the most relevant features relate to the lips, our most expressive facial features. ID photos such as those used in the study are required to have a neutral facial expression, but it could be that the AI found hidden emotions in those photos. These cues may be so subtle that humans would struggle to notice them.
It is difficult to resist the temptation to look at the sample photos displayed in the paper, which is yet to be peer reviewed. Indeed, a careful look reveals a slight smile in the photos of noncriminals – see for yourself. But only a few sample photos are available, so we cannot generalise this conclusion to the whole database.
The power of affective computing
This would not be the first time that a computer was able to recognise human emotions. The field of so-called “affective computing” has been around for several years. It is argued that, if we are to live and interact comfortably with robots, these machines should be able to understand and appropriately react to human emotions. There is much work in the area, and the possibilities are vast.
For example, researchers have used facial analysis to spot struggling students in computer tutoring sessions. The AI was trained to recognise different levels of engagement and frustration, so that the system could know when the students were finding the work too easy or too difficult. This technology could be useful to improve the learning experience in online platforms.
A company called BeyondVerbal has used AI to detect emotions from the sound of our voice. Its software analyses voice modulation and looks for specific patterns in the way people talk. The company claims to identify emotions correctly with 80% accuracy. In the future, this type of technology might, for instance, help autistic individuals to identify emotions.
Sony is even trying to develop a robot able to form emotional bonds with people. There is not much information about how they intend to achieve that, or what exactly the robot will do. However, they mention that they seek to “integrate hardware and services to provide emotionally compelling experiences”.
An emotionally intelligent AI has several potential benefits, be it to give someone a companion or to help us perform certain tasks – ranging from criminal interrogation to talking therapy.
But there are also ethical problems and risks involved. Is it right to let a patient with dementia rely on an AI companion and believe it has an emotional life when it doesn’t? And can you convict a person based on an AI that classifies them as guilty? Clearly not. Instead, once a system like this is further improved and fully evaluated, a less harmful and potentially helpful use might be to trigger further checks on individuals considered “suspicious” by the AI.
So what should we expect from AI going forward? Subjective topics such as emotions and sentiment are still difficult for AI to learn, partly because the AI may not have access to enough good data to analyse them objectively. For instance, could AI ever understand sarcasm? A given sentence may be sarcastic when spoken in one context but not in another.
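The sarcasm problem can be made concrete with a toy sketch. The function names and word lists below are purely illustrative, not any real sarcasm detector: the point is simply that a classifier that sees only the sentence cannot distinguish a genuinely positive remark from a sarcastic one, whereas one that also sees the surrounding context at least has a chance.

```python
# Toy illustration (hypothetical, not a real sentiment system):
# the same sentence flips meaning depending on context, so a
# classifier given only the text cannot resolve the ambiguity.

def literal_sentiment(text: str) -> str:
    """Naive word-level sentiment: any positive word => 'positive'."""
    positive = {"great", "love", "wonderful"}
    words = {w.strip(".,!?").lower() for w in text.split()}
    return "positive" if words & positive else "neutral"

def contextual_sentiment(text: str, context: str) -> str:
    """With context, apparently positive wording can read as sarcastic."""
    base = literal_sentiment(text)
    negative_context = {"delay", "cancelled", "broken", "crash"}
    ctx_words = {w.strip(".,!?").lower() for w in context.split()}
    if base == "positive" and ctx_words & negative_context:
        return "sarcastic"  # positive wording in a negative situation
    return base

sentence = "Great, just what I needed!"
print(literal_sentiment(sentence))                                  # positive
print(contextual_sentiment(sentence, "My flight was cancelled."))   # sarcastic
print(contextual_sentiment(sentence, "I won the lottery!"))         # positive
```

Real systems would of course learn these associations from data rather than hand-written word lists, but the underlying difficulty is the same: without enough contextual signal in the training data, the distinction is simply not there for the AI to learn.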
Yet the amount of data and processing power continues to grow. So, with a few exceptions, AI may well be able to match humans in recognising different types of emotions in the next few decades. But whether an AI could ever experience emotions is a controversial subject. Even if they could, there may certainly be emotions they could never experience – making it difficult to ever truly understand them.