We’ve already learned today that Google is helping to make nuclear fusion a reality, but now there’s more news from the company – they’re teaching artificial intelligence (AI) to predict the future.
To clarify, we’re not talking about Minority Report here. Instead, Google’s DeepMind division – which conducts AI research – is giving its AI algorithms an imagination, so they can predict how a situation might play out. The research is described in two papers.
“When placing a glass on the edge of a table, for example, we will likely pause to consider how stable it is and whether it might fall,” DeepMind said in a blog post.
“If our algorithms are to develop equally sophisticated behaviours, they too must have the capability to ‘imagine’ and reason about the future.”
DeepMind made the news recently after developing AlphaGo, an AI system that defeated some of the world’s best players at the ancient Chinese board game Go.
The company cited AlphaGo as a good example of an AI “agent” that can plan for the future quite well. But it operated only within the confined rules of Go, so DeepMind wants to apply the lessons learned to real-world problems too.
That has led to the development of I2As (imagination-augmented agents), which use a neural network to extract information that might be useful for future decisions. These agents can vary the number of imagined possibilities they explore for a particular task and learn different strategies for constructing plans.
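The core intuition – simulate candidate futures with an internal model before committing to an action – can be illustrated with a toy sketch. This is purely an illustrative assumption-laden simplification, not DeepMind’s actual I2A architecture: the model here is hand-written rather than learned, and all names (`toy_model`, `choose_action`, `imagined_rollout`) are invented for this example.

```python
def imagined_rollout(model, state, action, depth):
    """Imagine `depth` steps into the future from `state`,
    taking `action` first, and return the total imagined reward."""
    total_reward = 0
    for _ in range(depth):
        state, reward = model(state, action)
        total_reward += reward
        action = 0  # after the first step, follow a fixed "imagination policy"
    return total_reward

def choose_action(model, state, actions, depth=3):
    """Pick the action whose imagined future looks best."""
    return max(actions, key=lambda a: imagined_rollout(model, state, a, depth))

# A trivial one-dimensional world: the state is a position, an action
# shifts it, and reward grows as the agent nears position 10.
def toy_model(state, action):
    new_state = state + action
    return new_state, -abs(10 - new_state)

best = choose_action(toy_model, state=0, actions=[-1, 0, 1])
# `best` is 1: stepping toward position 10 yields the best imagined future.
```

In the real I2A, both the environment model and the policy that rolls out imagined trajectories are learned neural networks, and the imagined rollouts are fed back into the agent’s decision rather than simply summed as here.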
To test this out, DeepMind let the agents loose on a puzzle game called Sokoban and a spaceship navigation game. Both require forward planning, with levels procedurally generated so that the agents could not simply rely on trial and error. In fact, they could only try each level once.
[Video: an I2A playing Sokoban]
“For both tasks, the imagination-augmented agents outperform the imagination-less baselines considerably,” said DeepMind. “[T]hey learn with less experience and are able to deal with the imperfections in modelling the environment.”
DeepMind said the next step would be to scale up the idea to other problems, and to design agents that can use imagination to plan for the future in a variety of other scenarios.
Okay Elon, you can start to get worried. Probably.