Elon Musk recently tweeted that we may be more at risk from artificial intelligence (AI) than from the current nuclear threat posed by North Korea.
Clearly concerned by the rapid advancement of the technology, Musk took to Twitter to voice his fears and deliver a warning to developers and would-be regulators of the industry.
"If you're not concerned about AI safety, you should be. Vastly more risk than North Korea," he wrote.
This comes at a time of escalating tension between North Korea and the USA, in which the threat of nuclear weapons grows more real by the day. Despite this, Musk believes our greatest threat currently lies in the virtually unregulated AI industry, and he is now calling for that to change.
He followed up his previous tweet by stating: "Everything (cars, planes, food, drugs, etc) that's a danger to the public is regulated. AI should be too."
Admittedly, AI technology is developing rapidly. OpenAI, a company led by Musk, recently reached a huge milestone in AI progress: an AI bot consistently beat professional human opponents in 1v1 scenarios in the popular video game Dota. This is particularly impressive because the bot learned how to win solely by repeatedly playing against itself over a period of three months. With minimal human coaching, it learned complex mechanics, exploited its opponents' mistakes, and adapted to unpredictable play styles – achievements far more advanced than those shown in the board games AI had previously been tested on.
This occurred on August 12 at the Grand Finals of The International 7, the largest annual Dota tournament, which offered a staggering prize pool of nearly $25 million and drew an audience of nearly 4.7 million viewers.
Whether this face-off was staged for entertainment or to showcase the power and pace of progress in current AI technology, we're not sure, but the bot certainly made an impression.
Whilst the warning appears worrying, don't start running to find a cave to live in just yet: this isn't the first time Musk has predicted danger from AI. In 2014, he tweeted a similar warning, suggesting AI could prove more dangerous than nukes.
Clearly he hasn't changed his mind since then, and he is still fighting to get across the importance of the issue.
Perhaps with this demonstration, and the rate at which AI is growing in intelligence, it really is time to begin regulating the industry and for developers to collaborate on understanding AI better. After all, if it really is more dangerous than nukes, we're going to need all the help we can get.