It’s pretty clear that artificial intelligence (AI) will augment our lives in ways that could be transformational and revolutionary; from healthcare to emotional understanding, the benefits are clear. That doesn’t mean the risks aren’t worth considering, though, and a new report raises the specter that computer systems could trigger nuclear war – but perhaps not in the way you might think.
The RAND Corporation, a US-based nonprofit think tank dealing with issues of policy and security, gathered AI and nuclear strategy experts together and got them to chat about the state of nuclear weapons systems in 2040. The resulting report and accompanying blog post suggest that, by that date, three scenarios are possible, with the first being the undermining of global nuclear security.
We’re not talking about the killer robots of movies and Elon Musk’s nightmares here, though. Even in their infancy, simple AIs given straightforward tasks can behave unpredictably, so we’re not exactly about to hand over the controls to our nuclear deterrents.
As the post explains, “it's how computers might challenge the basic rules of nuclear deterrence and lead humans into making devastating decisions.”
The idea of computer systems making errors of judgment is alluded to; the famous example of the 1983 Soviet satellite glitch that almost triggered nuclear war – ultimately averted by the calm, steady hand of a USSR military officer – opens the blog post. That, however, isn’t really what the report focuses on either.
In fact, the report’s experts are somewhat concerned that the rush to develop increasingly advanced military AI systems has started a new form of arms race, one that could upset the global balance of power.
Mutually assured destruction (MAD) – if you destroy me, I’ll destroy you – meant that, overall, opposing sides in the Cold War weren’t willing to risk an attack and risk their own self-preservation. Perversely, there was a power balance that stopped anyone from annihilating the other.
This new report suggests that the development of increasingly omniscient AIs could make adversaries unprecedentedly anxious.
Say one country or alliance develops an AI that is able to monitor and detect threats all over the world. Regardless of how well this AI actually performs, the mere prospect of such an advanced system existing could make that alliance’s opponents uneasy.
They may think that, if they hesitate in the grand scheme of things, they’ll lose out – and in turn, they may get “itchier trigger fingers”. It may even encourage a pre-emptive strike to stop an AI-dominant rival from inexorably upsetting the global balance.
“Autonomous systems don’t need to kill people to undermine stability and make catastrophic war more likely,” Edward Geist, an associate policy researcher for RAND, summarized in the blog post.
This, however, is just one possibility. The report also explains that AI could be a stabilizing influence instead of a dangerous one. If AI remains cold, logical, and rational, it may be able to track threats and warn the world if things are getting unstable. At the same time, it could prevent anger-riddled humans from making colossal mistakes.
In any case, most experts concluded that, by 2040, AI won’t be advanced enough to have much of an impact on nuclear security anyway. At the same time, officials are unlikely to use it in this way while it can still be hacked or maliciously manipulated.