Deepfake technology is growing more realistic and sophisticated all the time. It can be used for harmless fun, but the harm deepfakes could do in the wrong hands is very real, which is why tools to detect these fake faces are essential. Luckily, a new paper published on the preprint server arXiv showcases a method that can apparently detect AI-generated faces by gazing deep into their eyes.
Deepfakes generated by a generative adversarial network (GAN) – two neural networks working against each other to produce a realistic image, one creating, the other evaluating – generally show faces in a portrait setting, with the eyes looking straight at the camera. The authors of the paper think this may be down to the real images the GAN is trained on: with real faces photographed this way, the two eyes reflect the same lighting environment.
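To make that adversarial setup concrete, here is a minimal sketch in PyTorch: a toy generator learns to produce images that a toy discriminator cannot tell apart from real ones. The architectures, sizes, and random "real" data are placeholders for illustration, not the networks behind any actual deepfake model.

```python
import torch
import torch.nn as nn

# Toy generator: maps random noise to a flattened 64x64 grayscale "image".
generator = nn.Sequential(
    nn.Linear(100, 256), nn.ReLU(),
    nn.Linear(256, 64 * 64), nn.Tanh(),
)

# Toy discriminator: scores how "real" a flattened image looks.
discriminator = nn.Sequential(
    nn.Linear(64 * 64, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_images = torch.rand(32, 64 * 64)  # stand-in for a batch of real photos

for step in range(100):
    # 1) The discriminator learns to tell real images from generated ones.
    noise = torch.randn(32, 100)
    fake_images = generator(noise).detach()
    d_loss = (loss_fn(discriminator(real_images), torch.ones(32, 1))
              + loss_fn(discriminator(fake_images), torch.zeros(32, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) The generator learns to fool the discriminator into scoring
    #    its output as real.
    noise = torch.randn(32, 100)
    g_loss = loss_fn(discriminator(generator(noise)), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```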
“The cornea is almost like a perfect semisphere and is very reflective,” explained the paper’s lead author, Professor Siwei Lyu, in a statement. “The two eyes should have very similar reflective patterns because they’re seeing the same thing. It’s something that we typically don’t notice when we look at a face.”
However, the researchers noticed “striking” differences between the two eyes in faces generated by AI. Tested on fake faces from thispersondoesnotexist.com and real ones from the Flickr-Faces-HQ dataset, their method maps out the face, zooms in on the eyes, isolates each eyeball, and then extracts the light reflected in each cornea. The two reflections are then compared for differences in shape and light intensity.
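The paper’s actual pipeline segments the cornea using facial landmarks; as a rough sketch of the same idea, the snippet below substitutes OpenCV’s stock Haar eye detector and a simple brightness threshold to pull candidate specular highlights out of each eye. The function name and the threshold value are illustrative assumptions, not the authors’ implementation.

```python
import cv2
import numpy as np

def extract_eye_highlights(image_path):
    """Locate the two eyes and return a binary mask of the brightest
    pixels (candidate specular highlights) in each eye region.

    Simplified stand-in: the paper segments the cornea with facial
    landmarks; here a stock Haar cascade finds rough eye boxes instead.
    """
    img = cv2.imread(image_path)
    if img is None:
        raise FileNotFoundError(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")
    eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(eyes) < 2:
        return None  # need both eyes to compare their reflections

    # Keep the two largest detections, ordered left-to-right.
    eyes = sorted(eyes, key=lambda box: box[2] * box[3], reverse=True)[:2]
    eyes = sorted(eyes, key=lambda box: box[0])

    masks = []
    for (x, y, w, h) in eyes:
        patch = gray[y:y + h, x:x + w]
        patch = cv2.resize(patch, (64, 64))  # normalize size for comparison
        # Specular highlights show up as near-white pixels in the eye patch.
        _, mask = cv2.threshold(patch, 220, 255, cv2.THRESH_BINARY)
        masks.append(mask)
    return masks
```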

“Our experiments show that there is a clear separation between the distribution of the similarity scores of the real and GAN synthesized faces, which can be used as a quantitative feature to differentiate them,” write the authors in the paper. On their sample of portrait photos, the tool was 94 percent effective at telling fake and real faces apart.
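Building on the sketch above, a similarity score between the two highlight masks can be computed as their intersection over union (IoU), an overlap measure in the spirit of the paper’s scoring; the 0.5 cutoff and the file name below are illustrative placeholders, not the authors’ calibrated values.

```python
def highlight_similarity(mask_left, mask_right):
    """IoU of the two binary highlight masks: near 1.0 when both eyes
    reflect the same light sources, lower when they disagree."""
    left = mask_left > 0
    right = mask_right > 0
    union = np.logical_or(left, right).sum()
    if union == 0:
        return 1.0  # no highlights in either eye: nothing inconsistent
    return np.logical_and(left, right).sum() / union

masks = extract_eye_highlights("portrait.jpg")  # placeholder image path
if masks is not None:
    score = highlight_similarity(*masks)
    # Illustrative cutoff only; the paper derives its threshold from the
    # score distributions of known real and GAN-generated faces.
    print("likely GAN-generated" if score < 0.5 else "likely real")
```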
They think the differences in eye reflections could be down to the lack of physical and physiological constraints in GAN models, as well as the generated images essentially being an amalgamation of many different photos. However, the method did produce false positives on photos not taken in a portrait setting, or where a light source was very close to the eyes. The authors also stress that deepfakes could be further manipulated to evade detection, with consistent eye reflections edited in.
This type of tool for discerning AI-generated faces could help weed out fake accounts used for trolling and spreading misinformation. “As the GAN-synthesized faces have passed the 'uncanny valley' and are challenging to distinguish from images of real human faces, they quickly become a new form of online disinformation. In particular, GAN-synthesized faces have been used as profile images for fake social media accounts to lure or deceive unaware users,” write the authors.
“There’s also the potential political impact,” elaborates Professor Lyu. “The fake video showing politicians saying something or doing something that they’re not supposed to do. That’s bad.”