Explainable AI Empowers Autonomous Ships to Show Their Work and Boost Maritime Safety

New research unveils an AI-powered navigation system that not only steers ships safely through crowded waters but also explains its every move, giving seafarers unprecedented insight into how and why smart vessels make split-second decisions.

Research: Explainable AI for ship collision avoidance: Decoding decision-making processes and behavioral intentions. Image Credit: Studio concept / Shutterstock

The Titanic sank 113 years ago, on the night of April 14–15, 1912, after hitting an iceberg, a tragedy widely attributed to human error and poor situational awareness. Today, artificial intelligence (AI)–powered autonomous systems have the potential to help avoid such disasters. But can an AI system also explain its actions to a human captain?

The goal of explainable AI (XAI) is to make intelligent systems more transparent and trustworthy. Researchers from Osaka Metropolitan University’s Graduate School of Engineering have now developed an explainable AI model for ships that quantifies collision risks across all nearby vessels — an especially valuable feature as major sea lanes become increasingly congested.

A Transparent Approach to Maritime AI

Graduate student Hitoshi Yoshioka and Professor Hirotada Hashimoto designed the AI system to communicate its reasoning and intent to human operators clearly. Unlike traditional “black box” AI models, their system provides real-time numerical values representing collision risk, allowing crew members to understand why the AI behaves a certain way.

The model uses deep reinforcement learning to learn safe navigation strategies through trial and error in a virtual simulation environment. It was tested in various congested maritime scenarios, where it consistently avoided collisions with surrounding vessels.
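The researchers have not published code alongside this summary, but the training setup they describe follows the standard deep reinforcement learning pattern: an agent repeatedly steers a simulated ship, is penalized for close approaches, and gradually learns avoidance maneuvers. The minimal sketch below illustrates that pattern with a REINFORCE-style policy update; ToyShipEnv, the state encoding, the action set, and the reward are hypothetical stand-ins, not the authors' simulator or method.

```python
# Minimal sketch of trial-and-error learning for collision avoidance.
# Illustrative only: ToyShipEnv, the reward, and the action set are
# hypothetical stand-ins, not the authors' simulator or method.
import torch
import torch.nn as nn

class ToyShipEnv:
    """Own ship among drifting traffic ships on a 2-D plane."""
    def __init__(self, n_ships=3):
        self.n_ships = n_ships

    def reset(self):
        # Relative position and velocity of each traffic ship: [dx, dy, dvx, dvy].
        self.rel = torch.randn(self.n_ships, 4)
        self.t = 0
        return self.rel.flatten()

    def step(self, action):
        # action: 0 = turn port, 1 = hold course, 2 = turn starboard.
        self.rel[:, 0] -= (action - 1) * 0.1          # heading change shifts geometry
        self.rel[:, :2] += self.rel[:, 2:] * 0.1      # traffic keeps moving
        dists = self.rel[:, :2].norm(dim=1)
        reward = -float((1.0 / (dists + 0.1)).max())  # penalize the closest approach
        self.t += 1
        done = self.t >= 50 or bool((dists < 0.2).any())  # timeout or collision
        return self.rel.flatten(), reward, done

policy = nn.Sequential(nn.Linear(12, 64), nn.ReLU(), nn.Linear(64, 3))  # 3 ships x 4 features
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
env = ToyShipEnv()

for episode in range(200):                            # trial and error in simulation
    state, log_probs, rewards, done = env.reset(), [], [], False
    while not done:
        dist = torch.distributions.Categorical(logits=policy(state))
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        state, reward, done = env.step(action.item())
        rewards.append(reward)
    # REINFORCE without a baseline: scale log-probs by the episode return.
    loss = -torch.stack(log_probs).sum() * sum(rewards)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

A real system would model ship hydrodynamics and rule-compliant traffic, and would likely use a more sample-efficient actor-critic method than this crude return-weighted update, but the trial-and-error structure is the same.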

How the AI Explains Itself

Key to the system is its breakdown of decisions into understandable components:

  • Risk Scoring: The AI quantifies the collision risk posed by each nearby vessel as a real-time numerical value.
  • Sub-task Critics: Dedicated critic components evaluate the encounter with each individual ship, so risk can be attributed vessel by vessel rather than as a single opaque score.
  • Attention Mechanism: Attention weights highlight which vessels the AI treats as most influential in its current decision.

This layered, explainable approach helps bridge the gap between autonomous reasoning and human understanding, enabling maritime professionals to assess, trust, and collaborate with AI systems.
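To make this concrete, here is one plausible shape for such an explainability head: per-vessel sub-task critic scores combined through attention weights, with both exposed to the operator. The class name, input features, and dimensions below are illustrative assumptions, not the paper's actual architecture.

```python
# Hypothetical sketch of the layered explainability described above:
# per-vessel sub-task critic scores plus attention weights, both
# surfaced so the crew can inspect them. Names and feature choices
# are assumptions, not the authors' architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ExplainableRiskHead(nn.Module):
    def __init__(self, feat_dim=4, hidden=32):
        super().__init__()
        # Sub-task critic: scores the risk of one encounter at a time.
        self.critic = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        # Attention scorer: how strongly each vessel should drive the decision.
        self.attn = nn.Linear(feat_dim, 1)

    def forward(self, vessels):                    # vessels: (n_ships, feat_dim)
        risks = self.critic(vessels).squeeze(-1)   # one risk score per ship
        weights = F.softmax(self.attn(vessels).squeeze(-1), dim=0)
        overall = (weights * risks).sum()          # aggregate risk used for control
        return overall, risks, weights             # expose all three for explanation

head = ExplainableRiskHead()
vessels = torch.randn(5, 4)  # e.g. [bearing, range, relative speed, closest approach]
with torch.no_grad():
    overall, risks, weights = head(vessels)
for i, (r, w) in enumerate(zip(risks, weights)):
    print(f"ship {i}: risk={r.item():+.2f}, attention={w.item():.2f}")
print(f"aggregate risk: {overall.item():+.2f}")
```

Returning the per-ship risks and attention weights alongside the aggregate score is what turns a black-box output into an explanation: the crew can see not just that the AI is maneuvering, but which vessels it is maneuvering for.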

“By being able to explain the basis for the judgments and behavioral intentions of AI-based autonomous ship navigation, I think we can earn the trust of maritime workers,” said Professor Hashimoto. “I also believe that this research can contribute to the realization of unmanned ships.”

Future Impact

The visualizations and explanations provided by the system are designed for clarity and accessibility. They show both the perceived level of danger and which ships are being prioritized. By bringing transparency to autonomous navigation, the research lays important groundwork for human-AI collaboration in maritime settings and supports the broader goal of developing reliable unmanned ships.
