How would you respond to the message below?
"BALLISTIC MISSILE THREAT INBOUND TO HAWAII. SEEK IMMEDIATE SHELTER. THIS IS NOT A DRILL."
It sounds like the plot to a disaster movie. But people living in and visiting the Aloha state had to face the very real prospect of their impending doom when they received such a message at 8:07 am local time on January 13, 2018.
Of course, there was no incoming missile. It was an error made by an employee at the Hawaii Emergency Management Agency (EMA) during a preparedness drill, and the alert was retracted 38 minutes later. Still, for 38 long minutes, people in Hawaii really did think they were under attack – and, as their tweets go to show, it was terrifying.
The Centers for Disease Control and Prevention (CDC) wanted to understand the impact of the alert on the general public – and, because it is 2019, they turned to Twitter for their data.
Researchers first collated tweets posted during the scare, using search terms like "missile Hawaii", "ballistic", "shelter", "drill", "threat", "alert", and "alarm". Retweets and quote tweets were excluded, leaving a final total of 14,530. These tweets were then divided into two groups – those posted before the retraction (8:07-8:45 am) and those posted after (8:46-9:24 am).
The team identified four key themes in the pre-retraction tweets. First, what they call "information processing" – essentially, any tweet that revealed mental processing of the alert. For example, one read "Idk what’s going on.. but there’s a warning for a ballistic missile coming to Hawaii? [expletive deleted]".
The second was something they refer to as "information sharing" – any attempt to circulate the alert. These tweets would often include other people's Twitter handles, for example: "Just got an iPhone alert of inbound balistic [sic] missile in Hawaii. Said Not a Drill. @PacificCommand @DefenseIntel @WHNSC".
Another key theme was "authentication", which the researchers describe as an attempt to validate the alert, for example: "Is this missile threat real?" The final theme was what the researchers call an "emotional reaction" – the expression of shock, fear, terror, or panic. For instance, "there’s a missile threat here right now guys. I love you all and I’m scared as [expletive deleted]".
Once the message had been corrected, the researchers observed three different themes: "denunciation", "insufficient knowledge to act", and "mistrust of authority". These were (unsurprisingly) "fundamentally different" from the tweets posted when people believed they were under attack, and "reflect reactions and responses to misinformation".
"Denunciation" involved blaming the emergency warning and response. In particular, people were angry about the amount of time it took them to correct their mistake. One read, "How do you "accidentally" send out a whole [expletive deleted] emergency alert that says there’s a missile coming to Hawaii and to take cover. AND TAKE 30 MINUTES TO CORRECT?!?"
While "insufficient knowledge to act" referred to posts that mentioned a lack of response plan. For example, "Can you imagine waking up to an alert that says 'Take shelter there is a missile on the way', like Bruh. What shelter is there for a missile? That [expletive deleted] might as well say. "Aye Bruh. Missile on the way. Good luck".
Finally, "mistrust of authority" included any post that (you guessed it) expressed mistrust of authority following the spectacular blunder. "And now, should there be another ballistic missile threat, how can we trust it knowing the last one was a grave mistake??" is a perfect example.
The team does acknowledge certain limitations – for example, it can be hard to accurately assess the sincerity and tone of written statements. But they hope that by understanding the public's reaction to the message, they will be able to improve the way crisis alerts are developed and disseminated in the future.
Let's just hope it doesn't come to that.