AI Creates Creepy Faces Of Fake People Using Photos Of Celebrities

Fake faces made from real celebrities. NVIDIA via YouTube

In today’s episode of “Jesus Christ, the future terrifies me,” computer scientists have developed an artificial intelligence (AI) network that's able to collect photographs of celebrities and then churn out a bunch of high-resolution images of convincing but fake human faces. Just like the best AI inventions always seem to be, it’s both creepy and frighteningly impressive.

The system is a new project from NVIDIA, a US tech company that develops graphics processing units for the gaming industry. It's a generative adversarial network (GAN): a system built from two neural networks that compete with each other, loosely mirroring the way a biological brain gathers and learns information. Much like two humans learning a skill, the networks effectively play against each other, watching each other's moves, learning from the other's technique, and refining their own art.
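To make the "two competing networks" idea concrete, here is a deliberately tiny sketch of adversarial training, not NVIDIA's system: a one-parameter-pair "generator" tries to produce numbers that look like they came from the real data distribution, while a logistic-regression "discriminator" tries to tell real from fake. All names and the toy setup are illustrative assumptions.

```python
import numpy as np

# Toy 1-D GAN sketch (illustrative only, not NVIDIA's architecture).
# Real data: samples from a Gaussian centred at 4.
# Generator: g(z) = a*z + c on noise z ~ N(0, 1).
# Discriminator: d(x) = sigmoid(w*x + b), trained to output 1 on real, 0 on fake.

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

a, c = 1.0, 0.0    # generator parameters
w, b = 0.1, 0.0    # discriminator parameters
lr = 0.05
real_mean = 4.0

for step in range(2000):
    # --- Discriminator update: push D(real) -> 1 and D(fake) -> 0 ---
    x = rng.normal(real_mean, 1.0)   # one real sample
    z = rng.normal()
    g = a * z + c                    # one fake sample
    s_real = sigmoid(w * x + b)
    s_fake = sigmoid(w * g + b)
    # gradients of -log D(x) - log(1 - D(g)) with respect to w and b
    w -= lr * (-(1 - s_real) * x + s_fake * g)
    b -= lr * (-(1 - s_real) + s_fake)

    # --- Generator update: push D(fake) -> 1 (non-saturating loss) ---
    z = rng.normal()
    g = a * z + c
    s_fake = sigmoid(w * g + b)
    dL_dg = -(1 - s_fake) * w        # gradient of -log D(g) w.r.t. the fake sample
    a -= lr * dL_dg * z
    c -= lr * dL_dg

# Draw from the trained generator
samples = a * rng.normal(size=1000) + c
```

Each round, the discriminator gets better at spotting fakes and the generator gets better at fooling it, which is the adversarial game the article describes, just with images and deep networks instead of single numbers.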

Yann LeCun, director of Facebook AI Research, who was not involved in the study, has described GANs as “the most interesting idea in the last 10 years in ML, in my opinion.”

In their study, published on the preprint server arXiv, the authors explain: “We describe a new training methodology for generative adversarial networks (GAN). The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses."
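The progressive-growing idea the authors describe can be sketched numerically: training starts at a tiny resolution (the paper uses 4x4), and each new, higher-resolution layer is blended in gradually with a weight alpha that ramps from 0 to 1, so the network is never shocked by a sudden jump in detail. The code below is a simplified illustration of that schedule, not NVIDIA's actual implementation; the function names and the nearest-neighbour upsampling choice are assumptions.

```python
import numpy as np

def upsample(img):
    """Nearest-neighbour 2x upsampling of an HxW image."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def fade_in(low_res_img, new_layer_img, alpha):
    """Blend the upsampled old output with the new layer's output.

    alpha = 0: only the old, upsampled low-res image is used.
    alpha = 1: the new high-res layer has fully taken over.
    """
    return (1 - alpha) * upsample(low_res_img) + alpha * new_layer_img

# Resolution schedule: 4, 8, 16, ..., 1024 (doubling each stage)
resolutions = [4 * 2**k for k in range(9)]

rng = np.random.default_rng(0)
img = rng.random((4, 4))                       # training starts at 4x4
for res in resolutions[1:]:
    new_output = rng.random((res, res))        # stand-in for the new layer's output
    for alpha in (0.0, 0.5, 1.0):              # alpha ramps up over many training steps
        blended = fade_in(img, new_output, alpha)
    img = blended                              # alpha reached 1: new layer fully active
```

Growing both networks this way is what the authors credit for faster, more stable training and for reaching high-resolution faces at all.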

“This," they said, "both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality.” The images are still not perfect, but until recently GAN-generated images would have been far easier to tell apart from real photographs.

Since NVIDIA is a tech company interested in graphics technology, you can imagine this system being applied to computer games. For example, it could mean never seeing the same background character in a game ever again.

However, in the era of “fake news” and the ever-blurring line between the real and the fake, the technology could be put to stranger, or more sinister, uses. In July, researchers at the University of Washington showed how to create fake video clips of Barack Obama using only audio from his other speeches. Paired together, these technologies could generate a very convincing video of a person who has never actually existed. Which is, of course, terrifying.

You can see the results for yourself in the video below. Don’t have nightmares.
