AI’s Black Box Problem: Why We Need Smarter, Transparent Explanations Now

As AI takes over critical decisions in banking, healthcare, and crime detection, a new study warns that opaque algorithms put lives and livelihoods at risk. The University of Surrey proposes a bold new framework to demand transparency and accountability in AI decision-making.

Research: Real-World Efficacy of Explainable Artificial Intelligence using the SAGE Framework and Scenario-Based Design. Image Credit: Shutterstock AI


Are we putting our faith in technology that we don't fully understand? A new study from the University of Surrey comes at a time when AI systems are making decisions impacting our daily lives, from banking and healthcare to crime detection. The study calls for an immediate shift in how AI models are designed and evaluated, emphasizing the need for transparency and trustworthiness in these powerful algorithms. 

As AI becomes integrated into high-stakes sectors where decisions can have life-altering consequences, the risks associated with 'black box' models are greater than ever. The research argues that AI systems in these settings must provide adequate explanations for their decisions, so that users can trust and understand them rather than being left confused and vulnerable. With cases of misdiagnosis in healthcare and erroneous fraud alerts in banking, the potential for harm, which can be life-threatening, is significant.

Surrey's researchers detail alarming instances where AI systems have failed to adequately explain their decisions. Fraud detection illustrates the challenge: fraud datasets are inherently imbalanced, with as few as 0.01% of transactions being fraudulent, yet those rare cases cause damage on the scale of billions of dollars. It is reassuring that most transactions are genuine, but the imbalance makes it harder for AI to learn fraud patterns. Even so, AI algorithms can identify a fraudulent transaction with great precision; what they currently lack is the capability to adequately explain why it is fraudulent.
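To make that gap concrete, the sketch below (illustrative only, not the Surrey study's method) trains a classifier on a synthetic, heavily imbalanced fraud dataset and then attaches a crude feature-contribution explanation to a flagged transaction. The feature names, data, and thresholds are hypothetical assumptions.

```python
# Minimal sketch, assuming synthetic data and hypothetical feature names:
# an accurate fraud score plus a rough per-decision explanation derived from
# logistic-regression contributions. Not the method proposed in the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500_000
features = ["amount_zscore", "foreign_merchant", "night_time", "new_device"]

# Roughly 0.01% of transactions are fraudulent, matching the imbalance cited above.
y = (rng.random(n) < 0.0001).astype(int)
X = rng.normal(size=(n, len(features)))
X[y == 1] += np.array([2.0, 1.5, 1.0, 2.5])  # shift fraud cases so a pattern exists

# class_weight="balanced" compensates for the extreme class imbalance.
model = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X, y)

def explain(x):
    """Rank features by their signed contribution to this transaction's fraud score."""
    contributions = model.coef_[0] * x
    order = np.argsort(contributions)[::-1]
    return [(features[i], float(contributions[i])) for i in order]

flagged = X[y == 1][0]
print("P(fraud) =", round(float(model.predict_proba(flagged.reshape(1, -1))[0, 1]), 3))
for name, c in explain(flagged)[:3]:
    print(f"  {name}: contribution {c:+.2f}")
```

Even this toy example shows the asymmetry the researchers highlight: producing the score is straightforward, while turning the raw contributions into an explanation a customer or analyst can act on requires deliberate design.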

Dr Wolfgang Garn, co-author of the study and Senior Lecturer in Analytics at the University of Surrey, said: 

"We must not forget that behind every algorithm's solution, there are real people whose lives are affected by the determined decisions. Our aim is to create AI systems that are not only intelligent but also provide explanations to people - the users of technology - that they can trust and understand." 

The study published in the journal Applied Artificial Intelligence proposes a comprehensive framework known as SAGE (Settings, Audience, Goals, and Ethics) to address these critical issues. SAGE is designed to ensure that AI explanations are not only understandable but also contextually relevant to the end-users. By focusing on the specific needs and backgrounds of the intended audience, the SAGE framework aims to bridge the gap between complex AI decision-making processes and the human operators who depend on them. 
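Read in engineering terms, and purely as an illustration rather than an interface defined in the paper, the four SAGE dimensions could travel with every explanation request as structured context, so the same model evidence is worded differently for different readers. All field names and example values below are assumptions.

```python
# Illustrative only: one way the SAGE dimensions (Settings, Audience, Goals,
# Ethics) might be captured so an explanation can be tailored to its reader.
# These names and values are hypothetical, not an API defined in the study.
from dataclasses import dataclass

@dataclass
class SageContext:
    settings: str   # where the decision is made, e.g. "retail banking fraud desk"
    audience: str   # who reads the explanation, e.g. "customer" or "fraud analyst"
    goals: str      # what the explanation must enable, e.g. "decide whether to block"
    ethics: str     # constraints, e.g. "no disclosure of other customers' data"

def render_explanation(score: float, top_factors: list[tuple[str, float]],
                       ctx: SageContext) -> str:
    """Word the same underlying evidence differently for different audiences."""
    if ctx.audience == "customer":
        return (f"This payment was paused because it looked unusual "
                f"(mainly: {top_factors[0][0]}). You can confirm it in the app.")
    # Analysts get the quantitative view their goal requires.
    factors = ", ".join(f"{name} ({weight:+.2f})" for name, weight in top_factors)
    return f"Fraud score {score:.2f}; leading factors: {factors}."

ctx = SageContext("retail banking fraud desk", "fraud analyst",
                  "decide whether to block", "no cross-customer disclosure")
print(render_explanation(0.91, [("new_device", 1.8), ("night_time", 0.9)], ctx))
```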

In conjunction with this framework, the research uses Scenario-Based Design (SBD) techniques. These techniques delve deep into real-world scenarios to discover what users truly require from AI explanations. This method encourages researchers and developers to put themselves in the shoes of the end users, ensuring that AI systems are crafted with empathy and understanding at their core. 

Dr Wolfgang Garn continued: 

"We also need to highlight the shortcomings of existing AI models, which often lack the contextual awareness necessary to provide meaningful explanations. By identifying and addressing these gaps, our paper advocates for an evolution in AI development that prioritises user-centric design principles. It calls for AI developers to engage with industry specialists and end-users actively, fostering a collaborative environment where insights from various stakeholders can shape the future of AI. The path to a safer and more reliable AI landscape begins with a commitment to understanding the technology we create and the impact it has on our lives. The stakes are too high for us to ignore the call for change." 

The research highlights the importance of AI models explaining their outputs in text form or through graphical representations, catering to users' diverse comprehension needs. This shift aims to ensure that explanations are not only accessible but also actionable, enabling users to make informed decisions based on AI insights.
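As a rough illustration of that dual-modality point, and assuming hypothetical factor names and weights rather than anything published with the study, the same evidence can be rendered once as a plain sentence and once as a simple chart:

```python
# Sketch only: the same (hypothetical) evidence rendered in two modalities,
# a sentence for a non-specialist and a bar chart for a visually oriented reader.
import matplotlib.pyplot as plt

factors = {"new_device": 1.8, "night_time": 0.9, "foreign_merchant": 0.4}

# Text form: one actionable sentence.
top = max(factors, key=factors.get)
print(f"Flagged mainly because of '{top}'; review the transaction before approving.")

# Graphical form: relative weight of each factor.
plt.barh(list(factors.keys()), list(factors.values()))
plt.xlabel("contribution to fraud score")
plt.title("Why this transaction was flagged")
plt.tight_layout()
plt.show()
```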

Source: University of Surrey
Journal reference: Garn, W., et al. Real-World Efficacy of Explainable Artificial Intelligence using the SAGE Framework and Scenario-Based Design. Applied Artificial Intelligence.
