AI Could Become Bigger Threat Than Nuclear Weapons, Warns Ex-Google CEO

"We’re not ready for the negotiations we need," Schmidt argues.


Tom Hale

Senior Journalist

Tom is a writer in London with a Master's degree in Journalism whose editorial work covers anything from health and the environment to technology and archaeology.

Autonomous AI weapons are starting to emerge on battlefields, but the world has not agreed how to regulate them. Image credit: Design Projects/

Former Google CEO Eric Schmidt has put forward a bold warning that the unmitigated advance of artificial intelligence (AI) could snowball into a global conflict of mutually assured destruction, not too dissimilar from the nuclear arms race that exploded during the Cold War. 

Speaking at the Aspen Security Forum on July 22, Schmidt argued that the global powers are simply not ready to negotiate over how to manage the growing advance of AI. In turn, this could further weaken trust between jostling superpowers such as the US and China, and could even slip into conflict.


Citing the power of information technology, he went on to say that geopolitical powers like China and the US need to start having a conversation about AI regulation and establish treaties on the technology. 

However, he said it’s currently unclear how such a negotiation would unfold, since both diplomatic parties would need to bring in individuals with deep technical knowledge of AI and its potential future. 

“Let’s say we want to have a chat with China on some kind of treaty around AI surprises. Very reasonable. How would we do it? Who in the US government would work with us? And it’s even worse on the Chinese side. Who do we call? […] We’re not ready for the negotiations we need," Schmidt argues.

With geopolitical paranoia mounting across the world, he suggested the situation is similar to the nuclear arms race of the Cold War – but with worryingly less oversight. 


“In the 50s and 60s, we eventually worked out a world where there was a ‘no surprise’ rule about nuclear tests and eventually they were banned. When somebody launches a missile, for testing or whatever, they notify everyone. Everyone then uses their missile defense system to watch to train the systems,” he explained.

“It’s an example of a balance of trust or lack of trust: it’s a ‘no surprises’ rule. I’m very concerned that the US view of China as corrupt or Communist or whatever, and the Chinese view of America as failing will allow people to say ‘Oh my god, they’re up to something,’ and then begin some kind of conundrum. Begin some kind of thing where, because you’re arming or getting ready, you then trigger the other side,” said Schmidt. 

It’s unclear what exactly Schmidt is alluding to here, but his premise is that many global powers are in the dark about what their potential adversaries are up to, at least when it comes to AI. 

If one power, rightly or wrongly, started to suspect its rival was developing autonomous AI weapons, it’s easy to see how tensions could escalate and that power might feel the need to acquire such weapons itself. This is all just a thought experiment for now, but Schmidt argues that it’s something the world urgently needs to consider before it’s too late. 


Schmidt is not alone with his warnings. Even Elon Musk – who generally opposes government intervention, unless you include subsidies for his own businesses – has also been a prominent voice calling for greater regulation of AI.

“We don’t have anyone working on that and yet AI is that powerful,” Schmidt warned. 

