Google is pretty good at designing artificial intelligence. Its DeepMind division has built neural networks that can “dream” and that learn when betrayal pays off, and its AlphaGo system is better than any living human at the immensely complex game of Go.
As impressive as this is, Google is determined to show the world it’s not a one-trick pony. At Google’s I/O 2017 conference last week, CEO Sundar Pichai made some striking comments about AutoML, a project in which one neural network generates, layer by layer, the architecture of another, “learning” which designs perform best.
Normally, each of these layers – essentially one segment of an AI’s whole – has to be crafted by human engineers, and that takes time. Google had the bright idea of getting an existing AI to generate those layers itself, and as it turns out, it does so faster and more effectively than its human technicians ever could.
Google’s AI has become its own creator.
An accompanying blog post by the researchers working on the project compares the new AI to a child, with the original controller network playing the role of parent.
“A controller neural net can propose a ‘child’ model architecture, which can then be trained and evaluated for quality on a particular task,” they write. Whatever the task, the controller AI monitors it throughout and uses the feedback to improve the “child”.
“We repeat this process thousands of times – generating new architectures, testing them, and giving that feedback to the controller to learn from.”
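The loop the researchers describe – propose a child, evaluate it, feed the result back to the controller – can be sketched in miniature. The code below is a toy stand-in, not Google’s actual system: the controller is a simple preference table rather than a recurrent neural net, and “training the child” is replaced by a made-up scoring function with an assumed best architecture.

```python
# Toy sketch of the controller/child loop described above -- NOT Google's
# actual AutoML code. The controller here is a preference table, and child
# training is simulated by a stand-in score function.
import random

random.seed(0)

LAYER_CHOICES = [16, 32, 64, 128]  # hypothetical layer widths to pick from
DEPTH = 3                          # each child network is 3 layers deep

# Controller state: one preference weight per layer width, per position.
prefs = [{c: 1.0 for c in LAYER_CHOICES} for _ in range(DEPTH)]

def propose_child():
    """Controller samples a 'child' architecture from its preferences."""
    arch = []
    for pos in prefs:
        total = sum(pos.values())
        r, acc = random.uniform(0, total), 0.0
        for choice, weight in pos.items():
            acc += weight
            if r <= acc:
                arch.append(choice)
                break
    return arch

def evaluate(arch):
    """Stand-in for training the child and measuring its quality.
    We pretend the ideal architecture is [64, 64, 32]."""
    target = [64, 64, 32]
    return sum(a == t for a, t in zip(arch, target)) / len(target)

best, best_score = None, -1.0
for step in range(200):  # "thousands of times" in the real system
    arch = propose_child()
    score = evaluate(arch)
    # Feedback: reinforce the choices that produced a good child.
    for pos, choice in zip(prefs, arch):
        pos[choice] *= (1.0 + score)
    if score > best_score:
        best, best_score = arch, score

print(best, best_score)
```

Over many iterations the controller’s preferences drift toward the choices that scored well, so later children tend to improve – the same generate-test-reinforce cycle the quote describes, just with all the hard parts faked.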
Google's AutoML system, explained. Google Developers via YouTube
The AutoML procedure has so far been applied to image recognition and language modeling. Left to work on its own, the system has produced models that are on par with state-of-the-art designs from the world’s foremost machine learning experts.
Curiously, although the human-designed and machine-designed models share plenty of structural similarities, there are some key differences: small subroutines have cropped up in the machine’s creations that serve no obvious purpose, at least to human coders.
The team goes on to suggest that this self-coding ability could, counterintuitively, be placed in the hands of the layperson.
By starting off with a “raw” AI – one able to construct code itself – anyone could input a few basic commands, as if placing an “order” for a bespoke AI program. Then, with the click of a button, the AI could evolve all by itself, eventually turning into the software the customer desires.
The future, it seems, is inexorably rushing towards us, and AI will play an enormous part in our lives sooner than we probably thought.