As AI-powered cyber threats grow more sophisticated, organizations must strategically invest in AI-driven defenses to stay ahead of attackers. The evolving game-theoretic battle between defenders and hackers is reshaping cybersecurity decision-making.
In today's world of interconnected computer-based information systems, cyber risk has become one of the critical risk factors confronting organizations. Indeed, several studies have shown that cyber risk (i.e., the probability of being the victim of a successful cyberattack) ranks among the top concerns, if not the top concern, of senior executives in both private and public sector organizations. Auditors have also recognized the critical nature of cyber risk, as evidenced by the American Institute of Certified Public Accountants' development of its cybersecurity risk management reporting framework. Cybersecurity risk is also a key concern of the U.S. Securities and Exchange Commission (SEC), as evidenced by its 2023 disclosure rules requiring registrants to include Item 1C (Cybersecurity) in Form 10-K and to disclose material cyber incidents in Form 8-K.
AI Models
Organizations use a technical arsenal to manage their cyber risk, including encryption, access controls, intrusion detection and prevention systems, firewalls, and system restoration. Over the last two decades, artificial intelligence (AI) models have been widely used to help organizations implement these methods to prevent and respond to cyberattacks. For example, machine learning models facilitate intrusion detection and correction, predictive analytics, financial fraud detection, and real-time response to cyber incidents.
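As a rough illustration of the kind of model involved, the sketch below trains an unsupervised anomaly detector on network-flow data using the open-source scikit-learn library. The features, traffic distributions, and figures here are hypothetical stand-ins for illustration, not a production design:

```python
# Minimal anomaly-based intrusion detection sketch (illustrative only).
# Assumes scikit-learn is installed; the network-flow features and
# traffic distributions below are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Hypothetical flow features: [bytes_sent, bytes_received, duration_s]
normal_traffic = rng.normal(loc=[500, 800, 2.0],
                            scale=[100, 150, 0.5],
                            size=(1000, 3))

# Train an unsupervised anomaly detector on historical "normal" traffic.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_traffic)

# Score new flows: -1 flags a potential intrusion, 1 looks benign.
new_flows = np.array([
    [520, 790, 2.1],     # typical flow
    [50000, 10, 0.1],    # unusually large outbound transfer
])
print(model.predict(new_flows))  # e.g., [ 1 -1 ]
```

One appeal of unsupervised detectors of this kind is that they can flag novel attack patterns without requiring labeled examples of past attacks, which is why they are a common building block in intrusion detection pipelines.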
Although AI assists organizations in defending against cyberattacks, it is a double-edged sword. More to the point, AI also provides cyber attackers with an array of cost-efficient techniques that facilitate their attacks. Sophisticated AI-generated phishing, social engineering, and ransomware campaigns are just a few of the ways AI has made the cyberattack landscape more lethal.
Game-Theoretic Aspects of Cyber Risk
AI-generated models used by cyber attackers and cyber defenders have been evolving rapidly. As a result, their strategic interactions have become more automated, dynamic, adaptive, and complex. These developments have increased and substantially changed the game-theoretic aspects associated with cyber risk.
Unfortunately, no dominant strategy gives an organization (as a cyber defender) a clear path to minimize the probability of becoming a victim of a successful cyberattack. Notwithstanding the above, it is well known that organizations become less attractive targets to cyber hackers (i.e., their cyber risk is lowered) by investing in a variety of cybersecurity-related activities. This raises the following fundamental question: How much should an organization invest to prevent, or at least reduce, the probability of a cyber incident?
Cost-Benefit Considerations
Although there is no definitive answer to the above question, a well-established framework for deriving the optimal amount to invest in cybersecurity-related activities is provided by the Gordon-Loeb Model. The Gordon-Loeb Model, which is based on cost-benefit analysis, consists of three main components: (1) the potential cost associated with a cyber incident, (2) the probability that a cyber incident will occur, and (3) the benefits derived from investments in cybersecurity (i.e., how spending on cybersecurity reduces the probability that a cyber incident will occur). A key insight of the model is that, for the broad class of security breach probability functions it analyzes, the optimal investment never exceeds 37 percent (more precisely, 1/e) of the expected loss from a cyber incident.
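To make the cost-benefit logic concrete, the sketch below computes the model's optimal investment numerically. The breach probability function S(z, v) = v / (alpha*z + 1)^beta is one of the forms analyzed in the original Gordon-Loeb paper, but the dollar figures and parameter values (L, v, alpha, beta) are illustrative assumptions, not recommendations:

```python
# Numerical sketch of the Gordon-Loeb Model (illustrative parameters).
# L: potential loss from a cyber incident; v: baseline breach probability;
# S(z, v) = v / (alpha*z + 1)**beta is one breach-probability function
# from the class analyzed by Gordon and Loeb (2002).
import numpy as np

L = 10_000_000           # potential loss ($), hypothetical
v = 0.5                  # baseline probability of a breach, hypothetical
alpha, beta = 1e-5, 1.0  # hypothetical investment-productivity parameters

def expected_net_benefit(z):
    """Reduction in expected loss from investing z, minus the cost z."""
    s = v / (alpha * z + 1) ** beta
    return (v - s) * L - z

# Grid search for the investment level that maximizes expected net benefit.
z_grid = np.linspace(0, v * L, 1_000_000)
z_star = z_grid[np.argmax(expected_net_benefit(z_grid))]

print(f"Optimal investment: ${z_star:,.0f}")
print(f"Expected loss without investment: ${v * L:,.0f}")
print(f"Share of expected loss: {z_star / (v * L):.1%}")  # below 1/e ~ 37%
```

With these assumed parameters the optimal investment comes to roughly $607,000, or about 12 percent of the $5 million expected loss, well under the model's 1/e ceiling.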
Beyond the total amount to spend on cybersecurity-related activities, organizations should also consider the costs associated with AI models. For example, how much of the cybersecurity budget should be devoted to developing and implementing AI models designed to reduce the likelihood of a cyber incident?
The costs of developing and implementing new AI models designed to reduce the likelihood of a cyber incident depend on many organization-specific factors. These factors include, but are not necessarily limited to: (1) whether the organization must develop specialized AI models or can use existing open-source models, (2) whether it needs to hire new personnel to develop and implement the models, and (3) whether new software and/or hardware is required to integrate the models into the organization's existing information systems.
Concluding Comment
Ultimately, the economics of managing an organization's cyber risk program require weighing both the costs and the benefits of defending against cyberattacks. However, given the increasing use of AI-generated models by cyber attackers and cyber defenders alike, the game-theoretic aspects of cyber risk have taken on new dimensions. The winners in this new game will likely be those most adept at developing and implementing AI models.
Lawrence A. Gordon is the EY Alumni Professor of Managerial Accounting and Information Assurance at the Robert H. Smith School of Business, University of Maryland (UMD). He is also an Affiliate Professor in the UMD Institute for Advanced Computer Studies.