Artificial intelligence (AI) has achieved a great deal over the last few years, from identifying learning difficulties in children to detecting breast cancer risk and early signs of Alzheimer’s. It has even been involved in designing some pretty disconcerting motivational posters.
Now, a team of Harvard scientists is using AI to peek inside the monkey brain. In a new study, recently published in the journal Cell, the researchers hooked a monkey called Ringo up to an artificially intelligent algorithm to better understand the workings of his neurons and find out which images they like "best".
Here are the results.
Yes, they may look like surrealist portraits with a Francis Bacon-esque aesthetic, or monsters out of a particularly grisly Doctor Who episode. But with some creative interpretation, several images can be identified as real-life people and animals. There is Anthony, a monkey who wears a distinctive red collar and lives in a cage opposite Ringo. And Diane, a caretaker who dons blue scrubs and a white face mask when she feeds Ringo and the other animals.
So, how exactly does this all work?
It hinges on an AI algorithm called XDREAM. The algorithm creates sets of images designed to stimulate a particular neuron in the monkey's brain and learns from previous attempts to produce images that are ever more stimulating.
"When given this tool, cells began to increase their firing rate beyond levels we have seen before, even with normal images pre-selected to elicit the highest firing rates," first author Carlos Ponce explained in a statement.
"What started to emerge during each experiment were pictures that were reminiscent of shapes in the world but were not actual objects in the world. We were seeing something that was more like the language cells use with each other."
XDREAM was trained on over a million images from a database called ImageNet. Then, in each experiment, a set of 40 gray, formless images was shown to alert macaques. As the monkeys watched, the algorithm identified the pictures that seemed to stimulate the neurons the most and enhanced them according to the neurons' responses, creating a new generation of images.
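The study's actual pipeline pairs a pretrained generative network with live neural recordings, but the closed loop itself can be sketched as a simple evolutionary search. Everything in the toy Python sketch below is a hypothetical stand-in: `neuron_response` substitutes a made-up fitness formula for the recorded firing rate (in the real experiment that number comes from a macaque neuron, not an equation), and the "images" are just flat vectors of pixel intensities.

```python
import random

POP_SIZE = 40        # the study showed sets of 40 images per generation
GENERATIONS = 250    # enhancement was repeated up to 250 times
IMG_LEN = 64         # toy "image": a flat vector of pixel intensities in [0, 1]

# Hypothetical stand-in for a neuron's preferred stimulus.
TARGET = [1.0 if i % 2 == 0 else 0.0 for i in range(IMG_LEN)]

def neuron_response(img):
    # Placeholder "firing rate": higher when the image is closer to TARGET.
    # In the real experiment this value is recorded from the monkey's brain.
    return -sum((p - t) ** 2 for p, t in zip(img, TARGET))

def mutate(img, rate=0.1):
    # Randomly perturb a fraction of pixels to create a new candidate image.
    return [min(1.0, max(0.0, p + random.gauss(0, 0.2)))
            if random.random() < rate else p
            for p in img]

def evolve():
    random.seed(0)
    # Start from gray, formless images, as in the experiment.
    population = [[0.5 + random.uniform(-0.05, 0.05) for _ in range(IMG_LEN)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Rank the current generation by how strongly it "fires" the neuron...
        scored = sorted(population, key=neuron_response, reverse=True)
        survivors = scored[:POP_SIZE // 4]
        # ...then build the next generation from mutated copies of the best.
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(POP_SIZE - len(survivors))]
    return max(population, key=neuron_response)

best = evolve()
```

After 250 generations, the best image has drifted from uniform gray toward the (here, invented) pattern the scoring function prefers, which mirrors how the real images individualized over successive generations.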
Pictures depicting the head and torso of monkeys appeared to generate the best responses, whereas those of inanimate or rectilinear objects seemed to be the least stimulating.
The process of enhancement was repeated up to 250 times, producing 250 generations of pictures. To start with, the images were gray and indistinct, but over time they began to individualize. One gray, formless image metamorphosed into a face – or a pale oval containing two black dots above a black line – and a red splodge that resembled a collar. It could be Anthony. Then again, it might not.
As Ed Yong, writing for The Atlantic, points out, there’s a risk that this functions as a Rorschach-type test: the scientists see Anthony because they want to see Anthony.
A second algorithm was used to check that these images were more face-like than the others, and it seemed to confirm that the neurons driving these enhancements are indeed those that respond best to faces.
Still, while there were some exciting results (the "portraits" of Anthony and Diane being two), the vast majority of pictures – enhanced or otherwise – did not depict faces. Instead, they tended to be abstract designs of color and shape. This, the scientists say, might just show how complicated the primate visual cortex is.
"We are seeing that the brain is analyzing the visual scene, and driven by experience, extracting information that is important to the individual over time," Ponce added. "The brain is adapting to its environment and encoding ecologically significant information in unpredictable ways."