LAISC Methodology Enhances AI Safety in Autonomous Systems

Learn how the groundbreaking LAISC methodology systematically addresses AI-specific safety risks, paving the way for safer autonomous technologies in real-world applications.

Research: Landscape of AI safety concerns -- A methodology to support safety assurance for AI-based autonomous systems. Image Credit: alexfan32 / Shutterstock

*Important notice: arXiv publishes preliminary scientific reports that are not peer-reviewed and, therefore, should not be regarded as definitive, used to guide development decisions, or treated as established information in the field of artificial intelligence research.

In an article recently submitted to the arXiv* preprint server, researchers at the Technical University of Munich, Siemens AG, and the Fraunhofer Institute for Cognitive Systems IKS focused on addressing artificial intelligence (AI) safety concerns in autonomous systems by proposing a novel methodology called the Landscape of AI Safety Concerns (LAISC). This approach aimed to systematically identify and mitigate AI-specific safety risks, supporting the creation of safety assurance cases. The methodology's effectiveness was demonstrated through a case study involving a driverless regional train, showcasing its practicality in ensuring AI-based system safety.

Background

AI has advanced significantly, particularly in machine learning (ML), enabling systems to excel in complex tasks. However, ensuring safety in AI-based autonomous systems remains a critical challenge, especially for safety-critical applications like transportation. Traditional safety assurance cases, as outlined in standards such as International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC)/Institute of Electrical and Electronics Engineers (IEEE) 15026-1, are difficult to adapt for AI due to the "semantic gap," the disparity between intended functionality and actual AI behavior. This gap arises from AI systems' reliance on complex models like deep neural networks (DNNs) and the inherent challenges of defining their operational domains.

Previous research has explored AI-specific safety concerns and proposed mitigation measures. However, these efforts lack a systematic approach to integrate AI-specific safety standards into comprehensive assurance cases. To address this gap, the paper introduced the LAISC methodology. LAISC systematically identifies, mitigates, and provides structured evidence for addressing AI-specific safety concerns while acknowledging its complementary role in broader safety assurance frameworks.

The concept of the Landscape of AI Safety Concerns (LAISC).

Ensuring AI Safety with LAISC Methodology

LAISC provided a systematic methodology for ensuring the safety of AI-based systems by addressing AI-specific safety concerns. The methodology aimed to demonstrate the absence of these concerns through comprehensive analysis, measurable evidence, and structured processes. LAISC emphasized four core elements: a detailed and extensible list of AI-specific safety concerns, metrics and mitigation measures (M&Ms) to quantify and resolve safety gaps, the AI life cycle to align safety activities with system development stages, and verifiable requirements (VRs) to validate safety evidence.

AI-specific safety concerns, defined as issues negatively impacting system safety, included challenges like lack of robustness, poor model design choices, and lack of explainability. These issues were systematically addressed by identifying the relevant AI-specific safety concerns for a given use case, decomposing them into actionable goals, and applying targeted M&Ms throughout the AI life cycle. As the paper emphasized, addressing these concerns early in the AI life cycle reduces both risks and potential costs.

M&Ms played a pivotal role in creating safety evidence. For instance, Confident Learning techniques were recommended to identify inaccurate data labels, and metrics like Intersection over Union (IoU) were suggested for semantic segmentation tasks to ensure appropriate evaluation. VRs bridged the gap between abstract concerns and measurable evidence, allowing for qualitative and quantitative evaluation of compliance. The authors stressed the need for both qualitative assessments, such as expert justifications for model design choices, and quantitative metrics for evaluating robustness and safety.
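To make the IoU metric concrete, the following minimal sketch (not taken from the paper) computes IoU for a pair of binary segmentation masks, such as track/no-track predictions; the toy arrays and values are purely illustrative.

```python
import numpy as np

def iou(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    """Intersection over Union of two binary masks with identical shape."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    # An empty union means both masks are empty; treat this as perfect agreement.
    return 1.0 if union == 0 else float(intersection) / float(union)

# Toy example: a 4x4 ground-truth "track" mask and a slightly shifted prediction.
true_mask = np.array([[0, 1, 1, 0],
                      [0, 1, 1, 0],
                      [0, 1, 1, 0],
                      [0, 1, 1, 0]])
pred_mask = np.array([[0, 0, 1, 1],
                      [0, 1, 1, 0],
                      [0, 1, 1, 0],
                      [0, 1, 1, 0]])
print(f"IoU = {iou(pred_mask, true_mask):.2f}")  # 7/9, roughly 0.78
```

Mean IoU, the metric cited for the track detector, would average this per-class score over all classes and evaluation images.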

The LAISC process ensured transparency and auditability by representing AI-specific safety concern interactions in a tabular format, facilitating structured analysis. This methodology integrated interdisciplinary expertise, aligned with safety standards, and enhanced the robustness and reliability of AI-based systems. While LAISC improves confidence in safety assurance, it does not guarantee universal safety or predict harm probabilities.
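The article does not spell out the table's schema, so the following is only a hypothetical sketch of how one row of such a tabular landscape could be represented in code; every field name and example value is an illustrative assumption rather than the paper's actual format.

```python
from dataclasses import dataclass

@dataclass
class LandscapeRow:
    """One hypothetical row of a LAISC-style table (field names are illustrative)."""
    concern: str                         # AI-specific safety concern (AI-SC)
    goals: list[str]                     # decomposed, actionable sub-goals
    verifiable_requirements: list[str]   # VRs with measurable acceptance criteria
    life_cycle_stage: str                # e.g., data preparation, training, verification
    metrics_and_mitigations: list[str]   # M&Ms that produce the supporting evidence

# Example row loosely modeled on the train case study discussed below.
row = LandscapeRow(
    concern="Inaccurate data labels",
    goals=["Detect systematic labeling errors", "Correct or remove faulty labels"],
    verifiable_requirements=["Mean IoU on an audited test set meets an agreed threshold"],
    life_cycle_stage="Data preparation",
    metrics_and_mitigations=["Confident Learning label audit", "Manual label review"],
)
print(row.concern, "->", row.metrics_and_mitigations)
```

Laying the concerns out this way keeps the mapping from each concern to its goals, requirements, and evidence explicit, which supports the transparency and auditability the authors emphasize.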

LAISC process and argument pattern for demonstrating the absence of AI-SCs, consisting of four steps: 1) Initializing LAISC, 2) Decomposing the AI-SC, 3) Derivation of Verifiable Requirements, and 4) Application of Metrics and Mitigation Measures along the AI life cycle.

Case Study

The case study applied the LAISC methodology, following the AI life cycle, to a driverless regional train system, focusing on a track detector model that used semantic segmentation to classify each pixel as part of the railway track or not. The analysis addressed three AI-specific safety concerns: inaccurate data labels, synthetic data issues, and lack of model robustness.

  • Inaccurate data labels: The track detector model depended on labeled data for learning, and systematic labeling errors could degrade its performance. To mitigate this, the authors proposed a quality-controlled labeling process, including automated detection and correction of inaccurate labels as well as manual label reviews. The model’s performance was assessed using metrics like mean IoU to ensure accuracy.

  • Synthetic data and reality gap: Reliance on synthetic training data could create discrepancies between the model’s behavior in simulated and real environments. The researchers stressed that the model’s performance on both data types must align within a predefined threshold to avoid safety issues. To address this, Neural Activation Patterns (NAPs) were used to analyze the model’s perception of real versus synthetic data, providing additional insights into its robustness.

  • Lack of robustness: Robustness refers to the model’s ability to handle variations in input, such as sensor noise or weather conditions. The study outlined specific verifiable requirements to assess robustness against adversarial attacks, natural variations, and unknown conditions, ensuring the model performed reliably in real-world scenarios. A well-defined Operational Design Domain (ODD) was highlighted as crucial for specifying the conditions under which robustness is evaluated. A minimal sketch of how such threshold-based checks might look follows this list.
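As a rough illustration of how verifiable requirements like the reality-gap and robustness criteria above could be checked, the sketch below compares mean IoU across real, synthetic, and perturbed evaluation sets against assumed thresholds. The mean_iou helper, the model and dataset interfaces, and the threshold values are hypothetical placeholders, not the paper's actual acceptance criteria; iou refers to the helper defined in the earlier sketch.

```python
import numpy as np

def mean_iou(model, dataset) -> float:
    """Average per-image IoU of the model's predicted masks over a labeled dataset.

    Placeholder: `model.predict` and the (image, mask) dataset interface are assumed,
    and `iou` is the binary-mask helper from the earlier sketch.
    """
    scores = [iou(model.predict(image), mask) for image, mask in dataset]
    return float(np.mean(scores))

# Assumed acceptance thresholds (illustrative only, not from the paper).
MIN_MEAN_IOU = 0.80          # baseline requirement on real-world data
MAX_REALITY_GAP = 0.05       # allowed |real - synthetic| difference in mean IoU
MAX_ROBUSTNESS_DROP = 0.10   # allowed drop under perturbed inputs (noise, weather)

def check_verifiable_requirements(model, real_set, synthetic_set, perturbed_set) -> dict:
    """Evaluate three illustrative verifiable requirements as pass/fail checks."""
    real = mean_iou(model, real_set)
    synthetic = mean_iou(model, synthetic_set)
    perturbed = mean_iou(model, perturbed_set)
    return {
        "baseline_ok": real >= MIN_MEAN_IOU,
        "reality_gap_ok": abs(real - synthetic) <= MAX_REALITY_GAP,
        "robustness_ok": (real - perturbed) <= MAX_ROBUSTNESS_DROP,
    }
```

In practice, the thresholds themselves would have to be justified against the system's ODD and hazard analysis rather than chosen ad hoc, which aligns with the authors' plan to explore threshold settings in future work.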

Conclusion

In conclusion, the researchers introduced the LAISC, a systematic methodology to address AI-specific safety risks in autonomous systems. LAISC focused on identifying, mitigating, and demonstrating the absence of AI safety concerns, supporting the creation of comprehensive safety assurance cases. The methodology was demonstrated through a driverless regional train case study, which tackled issues like inaccurate data labels, synthetic data discrepancies, and robustness challenges.

Key components of LAISC included tailored AI-specific safety concerns, metrics, mitigation measures, and verifiable requirements to ensure transparency and structured safety evidence. The paper emphasized that LAISC's strength lies in its ability to complement existing safety frameworks, enhancing confidence in safety assurance without providing a universal safety guarantee.

Future research will extend LAISC's application to additional use cases, refine its methodologies, and explore threshold settings for metrics and mitigation measures, advancing AI safety assurance across various domains.


Journal reference:
  • Preliminary scientific report. Schnitzer, R., Kilian, L., Roessner, S., Theodorou, K., & Zillner, S. (2024). Landscape of AI safety concerns - A methodology to support safety assurance for AI-based autonomous systems. ArXiv.org. DOI:10.48550/arXiv.2412.14020, https://arxiv.org/abs/2412.14020

Written by

Soham Nandi

Soham Nandi is a technical writer based in Memari, India. His academic background is in Computer Science Engineering, specializing in Artificial Intelligence and Machine learning. He has extensive experience in Data Analytics, Machine Learning, and Python. He has worked on group projects that required the implementation of Computer Vision, Image Classification, and App Development.

