This AI Can Create Stunning Photorealistic Images. Until You Take A Closer Look


Dr. Alfredo Carpineti

Senior Staff Writer & Space Correspondent

Oct 5 2018, 17:18 UTC

The images were created by a computer – and yeah, there's something a little off about them. Andrew Brock et al.

Recent advances in the world of artificial intelligence (AI) have shown that we can create very sophisticated algorithms that can understand and learn. One of these approaches uses generative image models, in which an AI is fed a huge number of images and is then tasked with producing new images of its own.

This would be like someone asking you to describe a dog after you've seen lots of dogs. It is easy for us, but not for machines. A team has now trained a Generative Adversarial Network (GAN) on the largest scale yet attempted, and come up with some incredibly realistic images of animals and other subjects, but closer inspection reveals that there's something off with these photos. Their results can be read on the pre-print server arXiv.


The GAN model turns the training process into a game between two AIs. The first tries to create images based on a specific set. If the category is "dog", it will study a certain number of pictures of dogs and then come up with its own version. The second AI has to guess whether the images are real or fabricated by the other software. The goal is for the discriminator algorithm to eventually be unable to tell the difference between the real and the artificial images.
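The two-player loop can be sketched in miniature. The toy below is not the researchers' code – it is a hand-rolled, hypothetical illustration in which a tiny generator (two parameters) and a logistic-regression discriminator play the adversarial game on one-dimensional "data", alternating gradient steps just as a full image GAN does:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from a normal distribution centred at 4.
def real_batch(n):
    return rng.normal(4.0, 0.5, n)

# Generator: maps noise z to samples via g(z) = a*z + b.
a, b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(w*x + c), the probability that x is real.
w, c = 0.0, 0.0

lr, batch = 0.05, 64
for _ in range(2000):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    x_real = real_batch(batch)
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    grad_w = np.mean(-(1 - d_real) * x_real) + np.mean(d_fake * x_fake)
    grad_c = np.mean(-(1 - d_real)) + np.mean(d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator step: push D(fake) toward 1, i.e. fool the discriminator.
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    upstream = -(1 - d_fake) * w  # dL_G/dx_fake for L_G = -log D(fake)
    a -= lr * np.mean(upstream * z)
    b -= lr * np.mean(upstream)

fake_mean = float(np.mean(a * rng.normal(0.0, 1.0, 1000) + b))
print(f"mean of generated samples: {fake_mean:.2f} (real data mean is 4.0)")
```

After training, the generator's output drifts toward the real data's statistics – the 1D analogue of a GAN learning what dogs look like – while the discriminator is left guessing at around 50:50.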

Can you guess which image is the one created by AI? (It's the top left. Not bad, eh?) Andrew Brock et al

This project uses the same GAN approach – but on steroids. Researchers usually feed about 64 images per subject into the AI, but in this case the team gave it 2,000. It's no surprise that it is nicknamed BigGAN. And the training was very successful, with the algorithm able to create its own images based on the material provided, as you can see above.

And the images are good. They are photorealistic, and a passing glance would not reveal anything peculiar. But the devil, and the limitation of AIs, is in the details. These networks still can't create flawless images. They need to quickly work out what the essence of the data is, and that requires simplifications.

Some of the images have dream-like features, some are almost Lynchian (and some are nightmare fuel, sorry). But look closer, can you see what’s wrong exactly?

There are definitely too many legs going on here...everywhere. Andrew Brock et al

However, when it gets it right, it's amazing. This work truly shows how much progress has been made in this field. Algorithms are beginning to learn what the things they see actually are.

[H/T: New Scientist]