White faces created by artificial intelligence (AI) now appear more “realistic” than actual human faces, according to researchers from the Australian National University (ANU).
Although the participants in the new study found AI-generated white human faces more realistic than actual faces, the same was not true for images of people of color. The reason, according to Dr Amy Dawel, the paper's senior author, is that AI algorithms are trained disproportionately on white faces.
“If white AI faces are consistently perceived as more realistic, this technology could have serious implications for people of color by ultimately reinforcing racial biases online,” Dawel said in a statement.
“This problem is already apparent in current AI technologies that are being used to create professional-looking headshots. When used for people of color, the AI is altering their skin and eye color to those of white people.”
The rapid development of AI capabilities is starting to outpace our ability to appreciate their power. As this research demonstrates, people don’t always realize when they are being fooled by AI "hyper-realism".
“Concerningly, people who thought that the AI faces were real most often were paradoxically the most confident their judgements were correct,” Elizabeth Miller, study co-author and PhD candidate at ANU, added.
“This means people who are mistaking AI imposters for real people don’t know they are being tricked.”
Interestingly, the team believes it has an explanation for why people are fooled so easily. There are still physical differences between AI-generated and actual human faces, but people interpret them incorrectly. For instance, white AI-generated faces are often more proportionate than real ones, yet viewers read this as a sign of their “humanness”, Dawel explained.
“However, we can’t rely on these physical cues for long. AI technology is advancing so quickly that the differences between AI and human faces will probably disappear soon.”
It is clear that such developments could make it easier for misinformation to be spread online. Action, the team argues, is needed to limit the future proliferation of misleading information and potential identity theft that comes with AI images.
“AI technology can’t become sectioned off so only tech companies know what’s going on behind the scenes. There needs to be greater transparency around AI so researchers and civil society can identify issues before they become a major problem,” Dr Dawel said.
The team argues that the public needs greater awareness of the potential misuse of AI technologies in order to reduce the risks. Since individuals can no longer reliably differentiate between real and AI-generated faces, society needs tools that can accurately identify an AI imposter.
“Educating people about the perceived realism of AI faces could help make the public appropriately skeptical about the images they’re seeing online,” Dawel concluded.
The study is published in the journal Psychological Science.