Implementing Risk Management in AI Systems

Artificial intelligence (AI) technologies possess enormous potential to revolutionize people’s lives and society. These technologies can aid in scientific advancements and enable inclusive economic growth. Yet AI also poses risks that could affect individuals, groups, communities, organizations, society, and the environment.


Although the European Commission, Parliament, and Council define AI slightly differently, all three definitions are primarily based on a common concept of a machine-based system generating outputs that influence the environment, given implicit or explicit objectives. Thus, AI risk management revolves around the responsible use and development of AI systems.

Benefits and Risks of AI

AI systems are inherently socio-technical, as human behavior and societal dynamics influence them. AI benefits and risks emerge from the interplay of technical aspects coupled with societal factors related to the social context for a system’s deployment, the individuals operating it, and its interactions with other AI systems.

The European Commission's AI Act aims to maximize AI's benefits while minimizing its risks. Advancements in medicine, enhanced productivity, and improved agriculture are among AI's key benefits and opportunities.

However, AI also entails many risks, such as gender-based or other forms of discrimination, intrusion into people’s private lives, opaque decision-making, and criminal misuse. Such risks can be mitigated by responsibly using trustworthy AI systems.

Risk-Based Regulatory Approaches

The European Commission’s proposal responds to these risks with a “risk-based” approach, making the regulatory burden proportionate to the risk posed. It proposes a tiered regulatory structure in which AI systems posing unacceptable risks, such as manipulative systems, are prohibited outright.

Low-risk AI systems, by contrast, attract only light transparency obligations for specific uses. The regulation of high-risk AI systems, which pose risks to safety or fundamental rights, constitutes the bulk of the Proposal. High-risk AI systems include those that affect access to government services, jobs, education, and other essential services, along with systems used in law enforcement.
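The tiered logic can be pictured as a simple lookup from an intended use to its obligations. The sketch below is illustrative only: the tier names follow the Proposal, but the example use cases, the USE_CASE_TIERS mapping, and the regulatory_burden helper are hypothetical.

```python
# Hypothetical sketch of the Act's tiered approach. Tier names follow the
# Proposal; the example use cases and their mapping are illustrative only.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"           # e.g., manipulative systems
    HIGH = "conformity assessment"        # e.g., hiring, law enforcement
    LIMITED = "transparency obligations"  # e.g., chatbots disclosing AI use
    MINIMAL = "no mandatory obligations"  # e.g., spam filters


# Hypothetical mapping from intended use to tier; the Act itself resolves
# classification through its annexes, not a fixed lookup table.
USE_CASE_TIERS = {
    "subliminal_manipulation": RiskTier.UNACCEPTABLE,
    "recruitment_screening": RiskTier.HIGH,
    "law_enforcement_risk_scoring": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "spam_filtering": RiskTier.MINIMAL,
}


def regulatory_burden(use_case: str) -> str:
    # Default an unknown use case to HIGH as a conservative illustration.
    tier = USE_CASE_TIERS.get(use_case, RiskTier.HIGH)
    return f"{use_case}: {tier.name} -> {tier.value}"


for uc in USE_CASE_TIERS:
    print(regulatory_burden(uc))
```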

Although the precise contours of each risk tier differ in the Council’s and Parliament’s amendments (the Parliament, for instance, would expand the definition of high-risk AI systems to cover systems posing substantial risks to health, safety, or the environment), the tiered approach is common ground across all three texts.

In all three texts of the Commission, Council, and Parliament, high-risk AI systems are subject to a set of risk-control requirements covering accuracy, human oversight, documentation, data governance, explanation, and risk management.

However, the AI Act assumes that even when providers of high-risk AI systems comply with these requirements, some risks will remain; the requirements alone cannot reduce every risk to an acceptable level. The Act therefore states that providers must identify those residual risks and implement additional measures to reduce them to an acceptable level.

Principles of Risk Management

Risk management measures (also called responses or treatments) are actions taken to reduce risks that have been identified and evaluated. The proposed European AI Act contemplates that the regulatory requirements for each risk level will be fulfilled through a “conformity assessment” against harmonized standards approved by the European standards organizations, consistent with Europe’s New Legislative Framework.

Although the default will be self-certification, provisions could be made for independent third-party certification. The Act states that risk management must take into account the generally acknowledged state of the art, including as reflected in relevant harmonized standards or common specifications.

The AI Act's risk management process is based on the iterative process of risk reduction and assessment described in ISO/IEC Guide 51. The key steps include identifying users, intended uses, and foreseeable misuses (Step 1), analyzing and identifying foreseeable and known risks (Step 2), and adopting risk management measures (Step 3).

Although these steps are presented sequentially, they are intended to be iterative: the process must be repeated until all risks are reduced to an acceptable level. After the first two steps, providers must estimate and evaluate the risks arising from intended uses and foreseeable misuses, together with any risks identified during post-market monitoring, and decide whether the risk is already acceptable.

If it is, they can complete the process by documenting their decision. Otherwise, they must move on to the third step, adopt suitable risk management measures, and then estimate and evaluate the risk again to decide whether the residual risk is acceptable.

If the residual risks are still unacceptable, additional risk management measures must be adopted. If they cannot be reduced to an acceptable level even after all available measures have been taken, the development and deployment process must be stopped.
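A minimal sketch of this iterative loop follows, in the spirit of ISO/IEC Guide 51. Everything in it is an assumption made for illustration: the Risk dataclass, the multiplicative likelihood reduction, and the ACCEPTABLE_LEVEL and MAX_ITERATIONS values are hypothetical, standing in for whatever estimation method and acceptability threshold a provider actually adopts.

```python
# Hypothetical sketch of the iterative risk management loop described above.
# The data model, thresholds, and mitigation arithmetic are illustrative.
from dataclasses import dataclass, field


@dataclass
class Risk:
    description: str
    severity: float    # estimated harm if the hazard occurs (0..1)
    likelihood: float  # estimated probability of occurrence (0..1)
    mitigations: list = field(default_factory=list)

    @property
    def level(self) -> float:
        return self.severity * self.likelihood


ACCEPTABLE_LEVEL = 0.05  # hypothetical acceptability threshold
MAX_ITERATIONS = 10      # guard against endless mitigation cycles


def manage_risks(risks: list[Risk],
                 measures: dict[str, list[tuple[str, float]]]) -> bool:
    """Iterate mitigation and reassessment; True means development may proceed."""
    for _ in range(MAX_ITERATIONS):
        residual = [r for r in risks if r.level > ACCEPTABLE_LEVEL]
        if not residual:
            return True            # all risks acceptable: document and finish
        for risk in residual:      # Step 3: adopt suitable measures
            remaining = measures.get(risk.description, [])
            if not remaining:
                return False       # no further measure available: stop
            measure, reduction = remaining.pop(0)
            risk.mitigations.append(measure)
            risk.likelihood *= 1.0 - reduction
    return False                   # still unacceptable after all iterations
```

The loop ends in one of the three ways the text describes: documenting that all risks are acceptable, halting because no further measure is available, or exhausting the iteration guard.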

Human factors, activities, and tasks are present throughout the dimensions of the AI lifecycle. They include human-centered design methodologies and practices; active involvement of relevant AI actors, end users, and other interested parties; evaluation and adaptation of end-user experiences; incorporation of context-specific values and norms into system design; and broad integration of humans and human dynamics across all AI lifecycle phases.

Human factors professionals bring multidisciplinary perspectives and skills: they design and evaluate user experience, engage in consultative processes, inform impact assessments, perform human-centered testing and evaluation, promote demographic and interdisciplinary diversity, and help teams understand the context of use.

Thus, the integration of diverse stakeholder perspectives throughout the AI lifecycle is essential for the consensus-driven development of AI systems and their regular updating through an open, transparent process.

Ethical and Societal Considerations

Using AI in policing, business, and administration poses high risks to fundamental rights, such as equality and privacy. AI can also perpetuate and amplify discrimination and biases, which can significantly impact individuals in vulnerable situations. Regulating AI is difficult due to the complexity of the technology, under-resourced regulators, and significant economic consequences.

Many AI systems are nonlinear, unpredictable, and opaque, which compounds the challenges of ensuring fairness and transparency. Additionally, many AI developers ignore potential legal problems or lack legal expertise, and they often command more resources than the authorities that monitor and regulate them.

As a result, governments often prioritize innovation at the expense of societal values. AI regulatory frameworks cannot ensure sufficient accountability when they foresee only small fines for AI misuse: a small dent in profits is treated as a cost of doing business, incentivizing companies to take risks with AI systems and to prioritize profit over ethical and safety considerations.

A more proactive approach, requiring companies to meet specific ethical and safety standards, including purpose limitation and fairness, before deploying AI systems, could be more effective in preventing harm and ensuring accountability. Risk management systems must therefore be developed with societal and ethical considerations in mind, addressing ethical and societal risks alongside the risks borne by providers of high-risk AI systems.

In risk governance, setting the acceptable risk threshold is one of the most controversial and challenging tasks. Questions about the acceptability of risks to human rights are inherently political, normative, social, and contextual, underscoring the importance of human rights in AI governance.

Implementation Challenges and Future Outlook

Several practical challenges must be considered when implementing AI risk management frameworks in pursuit of AI trustworthiness. These include risk measurement, risk tolerance, risk prioritization, and the organizational integration and management of risk. The role of standards organizations creates an additional implementation challenge.
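To make risk measurement and prioritization concrete, here is a small illustrative sketch. The 1-5 scoring scales, the severity-times-likelihood formula, and the RISK_TOLERANCE threshold are assumptions made for the example; frameworks such as the NIST AI RMF deliberately leave these choices to each organization.

```python
# Illustrative risk scoring and prioritization. The scales, scores, and
# RISK_TOLERANCE threshold are hypothetical assumptions, not prescribed values.
risks = [
    {"name": "discriminatory outputs", "severity": 5, "likelihood": 3},
    {"name": "privacy intrusion",      "severity": 4, "likelihood": 4},
    {"name": "opaque decision-making", "severity": 3, "likelihood": 5},
]

RISK_TOLERANCE = 12  # hypothetical organizational threshold on a 1-25 scale

# Score each risk (severity x likelihood) and rank the most urgent first.
for r in risks:
    r["score"] = r["severity"] * r["likelihood"]

for r in sorted(risks, key=lambda x: x["score"], reverse=True):
    status = "exceeds tolerance" if r["score"] > RISK_TOLERANCE else "within tolerance"
    print(f"{r['name']}: score {r['score']} ({status})")
```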

For instance, under the European Commission’s AI Act, the conformity assessment must include the implementation of a risk management system. Balancing innovation against safety and ethical considerations within these frameworks is another significant challenge.

A paper recently published in Computer Law & Security Review proposed an AI licensure model based on ex-ante justification, addressing the limitations of the European AI Act, which relies on an ex-ante conformity assessment before commercialization.

Where the AI Act is limited in its scope, substance, and transparency, the proposed licensure model mandates that a high-risk AI provider certify that its system meets clear requirements for correctability, appropriateness, accuracy, non-discrimination, and security in order to obtain a license.
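A sketch of such an ex-ante licensure gate follows. The five requirement names come from the model described above; the grant_license function, the boolean evidence format, and the example inputs are hypothetical simplifications of what would in practice be a substantive justification process.

```python
# Hypothetical sketch of an ex-ante licensure gate in the spirit of
# Malgieri and Pasquale (2024). Requirement names come from the text;
# the function and boolean evidence format are illustrative assumptions.
REQUIREMENTS = ["correctability", "appropriateness", "accuracy",
                "non-discrimination", "security"]


def grant_license(evidence: dict[str, bool]) -> bool:
    """A license issues only if every requirement is affirmatively justified."""
    unmet = [req for req in REQUIREMENTS if not evidence.get(req, False)]
    if unmet:
        print(f"License denied; unjustified requirements: {unmet}")
        return False
    print("License granted: system may be commercialized.")
    return True


# Example: a provider that has not yet demonstrated non-discrimination.
grant_license({
    "correctability": True, "appropriateness": True, "accuracy": True,
    "non-discrimination": False, "security": True,
})
```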

In conclusion, AI risk management is crucial for harnessing AI's benefits while mitigating its risks. This involves understanding potential harms, implementing robust frameworks, and striking a balance between innovation and safety.

References and Further Reading

Malgieri, G., Pasquale, F. (2024). Licensing high-risk artificial intelligence: Toward ex-ante justification for a disruptive technology. Computer Law & Security Review, 52, 105899. DOI: 10.1016/j.clsr.2023.105899, https://www.sciencedirect.com/science/article/pii/S0267364923001097

Giudici, P., Centurelli, M., Turchetta, S. (2023). Artificial Intelligence risk measurement. Expert Systems With Applications, 235, 121220. DOI: 10.1016/j.eswa.2023.121220, https://www.sciencedirect.com/science/article/pii/S0957417423017220

Schuett, J. (2023). Risk Management in the Artificial Intelligence Act. European Journal of Risk Regulation, 1–19. DOI: 10.1017/err.2023.1, https://www.cambridge.org/core/journals/european-journal-of-risk-regulation/article/risk-management-in-the-artificial-intelligence-act/2E4D5707E65EFB3251A76E288BA74068

Fraser, H., Bello y Villarino, J.-M. (2023). Acceptable Risks in Europe’s Proposed AI Act: Reasonableness and Other Principles for Deciding How Much Risk Management Is Enough. European Journal of Risk Regulation, 1–16. DOI: 10.1017/err.2023.57, https://www.cambridge.org/core/journals/european-journal-of-risk-regulation/article/acceptable-risks-in-europes-proposed-ai-act-reasonableness-and-other-principles-for-deciding-how-much-risk-management-is-enough/97720BC04BF5F43721392FC23BFF4B2E

National Institute of Standards and Technology (NIST). (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0). https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf
