AI Threats Shake National Security: Experts Hesitate or Act Recklessly in Crisis Scenarios

How does AI influence decision-making in high-stakes crises? A study of nearly 700 security professionals finds that AI-driven threats trigger hesitation and doubt—except for those who fear total AI replacement, who act rashly. As AI reshapes security responses, how can we prevent critical misjudgments?

Research: Artificial Intelligence and the “Great Machine” Problem: Avoiding Technology Oversimplification in Homeland Security and Emergency Management. Image Credit: Mopic / Shutterstock

Artificial intelligence designed to influence our decisions is everywhere—in Google searches, online shopping suggestions, and movie streaming recommendations. But how does it affect decision-making in moments of crisis?

Virginia Commonwealth University researcher Christopher Whyte, Ph.D., investigated how emergency management and national security professionals responded during simulated AI attacks. The results reveal that the professionals were more hesitant and doubtful of their abilities when faced with completely AI-driven threats than when confronted with threats from human hackers or hackers who were only assisted by AI.

"These results show that AI plays a major role in driving participants to become more hesitant, more cautious," he said, "except under fairly narrow circumstances."

Whyte, an associate professor in VCU's L. Douglas Wilder School of Government and Public Affairs, is most concerned about those narrow circumstances.

National security organizations design their training programs to reduce hesitancy in moments of uncertainty. While most of the almost 700 American and European professionals in the study thought AI could boost human abilities, a small group believed AI could eventually fully replace their profession and human expertise in general. That group responded recklessly to the AI-based threat, accepting risks and rashly forging ahead.

"These are people that believe the totality of what they do - their professional mission and the institutional mission that they support - could be overtaken by AI," Whyte said.

Artificial intelligence: The next "Great Machine"

Whyte has a theory for why that may be the case.

The discredited "Great Man" theory proposes that strong political figures have mainly shaped the course of history, while modern historians give more credit to popular movements. Whyte suggests that history has also been shaped by transformative technological inventions, like the telegraph or radio waves, and by misplaced faith in their power - what he has dubbed the "Great Machine" theory.

But unlike the "Great Man" theory, Whyte said, "Great Machines" are a shared, societal force that can be harnessed for society's benefit – or for its detriment.

"In the mid-1930s, for instance, we knew that radio waves had a great amount of potential for a lot of things," Whyte said. "But one of the early ideas was for death rays - you could fry your brain, and so on."

Death rays caught on, inspiring both science fiction stories and real-life attempts to build them during World War I and the interwar period. It wasn't until a few years before World War II that scientists began to build something practical with radio waves: radar.

Whyte said society currently faces the same problem with AI, which he calls a "general purpose" technology that could either help or hurt society. The technology has already dramatically changed how some people think about the world and their place in it.

"It does so many different things that you really do have this emergent area of replacement mentalities," he said. "As in, the world of tomorrow will look completely different, and my place in it simply won't exist because [AI] will fundamentally change everything."

That line of thinking could pose problems for national security professionals as new technology upends their perceptions of their abilities and changes their responses to emergency situations.

"That is the kind of psychological condition where we unfortunately end up having to throw out the rulebook on what we know is going to combat bias or uncertainty," Whyte said.

Combating "Skynet"-level threats

To study how AI affects professionals' decision-making abilities, Whyte recruited almost 700 emergency management and homeland security professionals from the United States, Germany, the United Kingdom, and Slovenia to participate in a simulation game.

During the experiment, the professionals faced a typical national security threat: a foreign government interfering in an election in their country. They were then assigned to one of three scenarios: a control scenario, in which the threat involved only human hackers; a scenario with light, "tactical" AI involvement, in which AI assisted the hackers; and a scenario with heavy AI involvement, in which participants were told that a "strategic" AI program orchestrated the threat.

When confronted with a strategic AI-based threat—what Whyte calls a "Skynet"-level threat, referencing the "Terminator" movie franchise—the professionals tended to doubt their training and were hesitant to act. They were also more likely to ask for additional intelligence information than their colleagues in the other two groups, who generally responded to the situation according to their training.

In contrast, the participants who thought about AI as a "Great Machine" that could ultimately replace them acted without restraint and made decisions that contradicted their training.

While experience and education helped moderate how the professionals responded to the AI-assisted attacks, neither factor affected how they reacted to the "Skynet"-level threat. Whyte said that could become a problem as AI attacks grow more common.

"People have variable views on whether AI is about augmentation or whether it is something that's going to replace them," Whyte said. "And that meaningfully changes how people will react in a crisis."

Journal reference:
  • Whyte, Christopher. "Artificial Intelligence and the “Great Machine” Problem: Avoiding Technology Oversimplification in Homeland Security and Emergency Management." Journal of Homeland Security and Emergency Management, 2025. DOI: 10.1515/jhsem-2024-0030
