Demystifying AI: A Comprehensive Overview of eXplainable AI (XAI) for Trustworthy Decision-Making

In an article in press with the journal Information Fusion, the authors presented a comprehensive review of eXplainable artificial intelligence (XAI), covering the current trends and research in this field as well as open concerns about XAI.

Background

AI is currently used in several sophisticated applications because it enables data-driven decision-making, which can help resolve complex issues and modernize outdated methods. However, the outcomes of many AI models are difficult to trust and comprehend owing to their black-box nature.


Understanding the reasoning behind the decision-making of AI models is crucial for increasing trust in their outcomes, which has increased the need for XAI methods. In recent years, XAI has gained significant attention in the field of AI. Although several survey papers have discussed general XAI terminology, concepts, and post-hoc explainability methods, reviews of available XAI tools and assessment methods have been lacking until now.

In this paper, the authors comprehensively reviewed the field of XAI, including the current trends and research in this rapidly emerging field. They proposed a novel four-axes framework to investigate the training process and refine AI models for improved trustworthiness and robustness. The four axes are data explainability, model explainability, post-hoc explainability, and assessment of explanations.

Four-axes methodology to examine AI models for explainability

Data explainability

Data explainability involves using a group of techniques to understand the training datasets used in designing and training AI models. This explainability is crucial because the behavior of an AI model is significantly influenced by the dataset used for its training.

Several interactive data analysis tools can be used to understand the input data, and the resulting insights can make AI systems more efficient, robust, and explainable. Key aspects of data explainability include knowledge graphs, data summarization methodologies, dataset description standardization, exploratory data analysis (EDA), and explainable feature engineering.
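
To make this concrete, the sketch below shows what a minimal EDA pass might look like in Python with pandas; the Iris dataset and the specific checks are illustrative assumptions of this example, not material taken from the review.

```python
# A minimal EDA sketch (illustrative only): inspect shape, types,
# missing values, summary statistics, and class balance of a training
# set. The Iris dataset stands in for real training data.
import pandas as pd
from sklearn.datasets import load_iris

df = load_iris(as_frame=True).frame  # features plus a "target" column

print(df.shape)                      # dataset size
print(df.dtypes)                     # feature types
print(df.isna().sum())               # missing values per column
print(df.describe())                 # scales and potential outliers
print(df["target"].value_counts(normalize=True))  # class balance
print(df.corr()["target"].sort_values())          # feature-target correlation
```

Checks like these surface class imbalance or skewed features before they silently shape a model's behavior.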

Model explainability

Model explainability can facilitate the creation of more understandable models. This explainability can limit AI model selection to a particular family of models that are inherently explainable, such as decision trees and linear models.
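
For illustration, a minimal sketch of an inherently explainable model follows; the dataset and model choice are assumptions of this example rather than recommendations from the paper.

```python
# A minimal sketch of an inherently explainable model: logistic
# regression, whose coefficients can be read directly as per-feature
# evidence. Dataset and pipeline are illustrative stand-ins.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X, y)

# With standardized inputs, coefficient magnitude roughly reflects each
# feature's influence on the predicted class.
coefs = model.named_steps["logisticregression"].coef_[0]
for name, weight in sorted(zip(X.columns, coefs), key=lambda t: -abs(t[1]))[:5]:
    print(f"{name}: {weight:+.3f}")
```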

Post-hoc explainability

Post-hoc explainability relies on attribution methods, visualization methods, example-based explanation methods, game theory methods, knowledge extraction methods, and neural methods.

Attribution methods

During image processing, most attribution methods rely on pixel associations to identify the pixels of an input image that are relevant to the model's activation. Every input image pixel is given an attribution value, referred to as its contribution or relevance. Attribution methods can be classified into backpropagation methods, perturbation methods, deep learning important features (DeepLIFT), and deep Taylor decomposition.
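
As a concrete illustration of a backpropagation-based attribution method, the sketch below computes vanilla gradient saliency in PyTorch; the tiny untrained network and random input are placeholders of this example, not models from the review.

```python
# Vanilla gradient saliency (a backpropagation attribution method):
# score each input pixel by the gradient of the top class score with
# respect to that pixel. The untrained CNN is a placeholder.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)
model.eval()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in input
scores = model(image)
top_class = scores.argmax(dim=1).item()

# Backpropagate the top class score to the input pixels.
scores[0, top_class].backward()

# Per-pixel attribution: max absolute gradient across color channels.
saliency = image.grad.abs().max(dim=1).values  # shape (1, 32, 32)
print(saliency.shape)
```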

Visualization methods

Visualization methods, such as partial dependence plots (PDP), accumulated local effects (ALE), and individual conditional expectation (ICE) plots, are used to visualize the representations of an AI model and investigate the underlying patterns that help explain its behavior. Visualization methods are typically used with supervised learning models.
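
For example, scikit-learn can render PDP and ICE curves directly from a fitted model, as in the sketch below; the diabetes dataset and gradient-boosting model are illustrative assumptions.

```python
# PDP and ICE curves with scikit-learn (illustrative example).
import matplotlib.pyplot as plt
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor().fit(X, y)

# kind="both" overlays per-instance ICE curves on the average PDP.
PartialDependenceDisplay.from_estimator(
    model, X, features=["bmi", "s5"], kind="both"
)
plt.show()
```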

Example-based explanations

Example-based explanations, also known as case-based explanations, are generated through counterfactuals, adversarial examples, and prototypes and criticisms.

Game theory methods

In game theory methods, the "game" is primarily the prediction task for a single instance of the dataset. The "gain" is the difference between the actual prediction for that instance and the average prediction over all instances in the dataset. The "players" are the feature values of the instance, which work collectively to obtain the gain.
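
This is the logic behind Shapley values. The sketch below gives a from-scratch Monte Carlo estimate of one feature's Shapley value; the dataset, model, and the helper name shapley_estimate are illustrative assumptions, not code from the paper.

```python
# Monte Carlo Shapley value estimation (illustrative sketch).
# "Players" are feature values; a feature's payout is its average
# marginal contribution to the prediction over random coalitions.
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

def shapley_estimate(model, X, x, feature, n_samples=200, seed=0):
    """Estimate the Shapley value of `feature` for instance `x`."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    total = 0.0
    for _ in range(n_samples):
        z = X[rng.integers(len(X))]          # random background instance
        order = rng.permutation(n_features)  # random coalition order
        pos = int(np.where(order == feature)[0][0])
        # Take x's values for `feature` and everything before it in the
        # order; take z's values for the rest.
        with_f = np.isin(np.arange(n_features), order[:pos + 1])
        without_f = np.isin(np.arange(n_features), order[:pos])
        x_with = np.where(with_f, x, z)
        x_without = np.where(without_f, x, z)
        total += model.predict([x_with])[0] - model.predict([x_without])[0]
    return total / n_samples

print(shapley_estimate(model, X, X[0], feature=2))
```

Summed over all features, these estimates approximate the difference between the instance's prediction and the dataset-average prediction, which is exactly the "gain" described above.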

Knowledge extraction and neural methods

Knowledge extraction methods recover the information a black-box model has learned, relying on model distillation and rule extraction techniques. Neural network interpretation techniques simplify neural networks, visualize the concepts and features a network has learned, or explain specific predictions.
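
A classic form of knowledge extraction is surrogate distillation, sketched below; the dataset and models are illustrative assumptions of this example.

```python
# Knowledge extraction by distillation (illustrative sketch): train a
# shallow decision tree to mimic a black-box model, then read its rules.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Train the surrogate on the black box's outputs, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity: {fidelity:.2%}")

# Human-readable rules approximating the black box's behavior.
print(export_text(surrogate, feature_names=list(X.columns)))
```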

Assessment of explanations

Assessment of explanations involves evaluating explainability using different XAI assessment methodologies, including cognitive psychological theories, understandability and satisfaction, trust and transparency, assessment through human-AI interfaces, and computational assessment.
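
As one concrete (and purely illustrative) example of computational assessment, a deletion test checks an attribution's faithfulness: ablating the most-attributed features first should move the prediction fastest. The specific ablation scheme below is an assumption of this sketch, not a metric prescribed by the review.

```python
# Deletion-test faithfulness check (illustrative sketch): ablate
# features from most to least attributed and watch the prediction move.
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

x = X[0].copy()
baseline = model.predict([x])[0]
means = X.mean(axis=0)

# Crude attribution: prediction change when each feature is mean-ablated.
attributions = np.empty(X.shape[1])
for j in range(X.shape[1]):
    x_ablated = x.copy()
    x_ablated[j] = means[j]
    attributions[j] = abs(baseline - model.predict([x_ablated])[0])

# Ablate in order of decreasing attribution; a faithful attribution
# shows a fast early shift away from the original prediction.
x_deleted = x.copy()
for j in np.argsort(-attributions):
    x_deleted[j] = means[j]
    print(f"after ablating feature {j}: {model.predict([x_deleted])[0]:.1f}")
```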

Available XAI research software tools

Several software packages supporting multiple methods, such as OmniXAI, AIX360, InterpretML, Alibi, Interpretable ML (IML), H2O, iNNvestigate, modelStudio, and Captum, are available for XAI research. Among these packages, OmniXAI supports the highest number of methods, including PDP, ALE, layer class activation mapping (layer-CAM), integrated gradients, score-weighted CAM (score-CAM), gradient-weighted CAM (grad-CAM), guided backpropagation (GuidedBackProp), SmoothGrad, counterfactual explanations, sensitivity analysis, local interpretable model-agnostic explanations (LIME), and contrastive analysis. This open-source XAI library provides two to ten methods for every input data type, including time series, text, and image.
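
As a brief usage illustration (not drawn from the review), the sketch below explains a single tabular prediction with the lime package, assuming its LimeTabularExplainer API.

```python
# A minimal LIME sketch (pip install lime): fit a simple local
# surrogate around one instance to explain one black-box prediction.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

iris = load_iris()
model = RandomForestClassifier(random_state=0).fit(iris.data, iris.target)

explainer = LimeTabularExplainer(
    iris.data,
    feature_names=iris.feature_names,
    class_names=list(iris.target_names),
    mode="classification",
)

# Explain the model's prediction for one instance.
explanation = explainer.explain_instance(
    iris.data[0], model.predict_proba, num_features=4
)
print(explanation.as_list())  # (feature condition, weight) pairs
```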

Currently, several efforts are being made to develop and regulate trustworthy AI systems. Researchers in the field of XAI are developing tools for the validation, debugging, and exploration of AI models. Based on specific metrics, these tools allow users to test AI models with different structures and choose the most suitable model for their task.

Concerns about XAI

The growing proliferation of AI has increased concerns about its impact on people's daily lives. Currently, most studies on interpreting and explaining AI systems are motivated by the requirements of developers rather than those of users. Moreover, AI is also being used to build automated decision-making systems based on users' personal information, which has heightened concerns among users.

Thus, AI systems must be validated with actual users to ensure fairness, transparency, and accountability. The trustworthiness of AI systems is another major concern, with many users reluctant to rely on AI for critical functions such as surgery.

This issue can be addressed by developing AI systems that make easily understandable decisions and offer good explanations of the reasons behind them. Governments are also concerned about the rising use of AI systems in high-risk applications, such as medicine and self-driving vehicles, where one incorrect outcome can lead to one or more fatalities.

To summarize, good explanations of AI decisions are at the core of open and responsible AI research, which necessitates greater investment in XAI by industry and practitioners to ensure that the decisions of AI systems can be adequately explained.

