In an article recently submitted to the arXiv* preprint server, researchers explored the relationship between causality and eXplainable Artificial Intelligence (XAI). The study identified three key perspectives: the first highlights the absence of causality in current AI and XAI, the second views XAI as a tool for causal inquiry, and the third holds that causality plays an integral role in strengthening XAI. The authors also analyzed a range of software solutions for automating causal tasks. The main goal of this research was to offer a consolidated view of these two fields and examine their potential intersections.
*Important notice: arXiv publishes preliminary scientific reports that are not peer-reviewed and, therefore, should not be regarded as definitive, used to guide development decisions, or treated as established information in the field of artificial intelligence research.
Background
The concepts of explanation and causation are deeply rooted in human thought and have been debated in the philosophy of science since ancient times. These concepts have taken different paths within the field of artificial intelligence (AI). In recent years, the growing field of XAI has sought to provide structured explanations that overcome the limitations of black-box machine learning (ML) and deep learning (DL) models. In parallel, the integration of causality into ML and DL systems has been examined in seminal works within the causality domain. However, consensus remains elusive about the strength of the interconnection between these two fields.
Review Methodology
The main objective of this review is to explore the literature on the complex relationship between causality and XAI. The review followed a structured process involving several steps: defining eligibility criteria, identifying information sources, developing a detailed search strategy over the selected databases, applying the selection criteria, performing a high-level analysis of the selected studies, gathering pertinent data and observations from these studies, and finally, synthesizing the findings.
The review was based on 51 peer-reviewed publications from conference proceedings and journals, and the exact selection process is described in the analysis. The evaluation of the literature followed a few key dimensions. First, a high-level analysis was performed, focusing on the co-occurrence of keywords in the selected records through bibliometric network analysis with VOSviewer (Visualization of Similarities). This made it possible to map the interrelations among distinct terms and concepts within the collection of scientific manuscripts.
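As a rough illustration of this first dimension, the short Python sketch below counts keyword co-occurrences across a handful of records, which is the kind of pairing that VOSviewer visualizes as a network; the keyword lists here are hypothetical placeholders, not data from the reviewed study.

```python
# Minimal sketch of keyword co-occurrence counting, the kind of analysis
# VOSviewer performs on bibliographic records. The keyword lists below are
# hypothetical placeholders, not data from the reviewed study.
from collections import Counter
from itertools import combinations

records = [
    ["causality", "explainable AI", "machine learning"],
    ["explainable AI", "deep learning"],
    ["causality", "explainable AI", "counterfactuals"],
]

co_occurrence = Counter()
for keywords in records:
    # Count each unordered keyword pair once per record.
    for pair in combinations(sorted(set(keywords)), 2):
        co_occurrence[pair] += 1

for (a, b), count in co_occurrence.most_common():
    print(f"{a} -- {b}: {count}")
```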
The second dimension addressed the research question itself, searching for and extracting relevant theoretical viewpoints and visions regarding the relationship between causality and XAI. This covered formalization frameworks and perspectives from AI, cognitive science, and philosophy. In the third dimension, structured data on any cited software solutions for automating causal tasks were collected during the detailed analysis of the full-text manuscripts. For each software tool, this collection recorded particulars such as the web page URL, licensing information, the company responsible for commercial software, release publications, the interface type, and the primary field of application and benefit.
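To make the third dimension concrete, the following sketch shows one plausible way to represent such a structured record in Python; the field names and the example entry are assumptions made for illustration, not the authors' actual schema.

```python
# Illustrative sketch of the kind of structured record the reviewers describe
# collecting for each cited software tool; field names and the example values
# are assumptions, not the authors' actual schema or data.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CausalToolRecord:
    name: str
    web_page_url: str
    license: str
    company: Optional[str]              # only relevant for commercial software
    release_publication: Optional[str]
    interface_type: str                 # e.g. "command-line", "GUI", "library"
    application_field: str

example = CausalToolRecord(
    name="ExampleCausalTool",           # hypothetical entry
    web_page_url="https://example.org/tool",
    license="open source (e.g. GPL)",
    company=None,
    release_publication="Author et al., 20XX",
    interface_type="command-line",
    application_field="causal discovery",
)
print(example)
```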
Intersection of Causality and Explainable AI: Three Key Perspectives
This study revolved around the intersection of causality and XAI. It summarizes three distinct perspectives: critiquing XAI through a causality lens, using XAI techniques for causal hypothesis generation, and integrating causal approaches to strengthen XAI. The first perspective focuses on the limitations of current XAI, particularly its lack of a foundation in causality. The second perspective proposes that XAI can benefit causal investigation by supplying starting points for hypotheses. The third perspective indicates that when models are built on a causal structure, or when their causal model is available, they naturally become intelligible and align with the goals of XAI.
Additionally, some papers go further and propose methods for building causal explanations by combining XAI and causality. Overall, the analysis highlights the complex relationship between causality and XAI, offering useful observations for researchers and practitioners in these fields. A list of the software tools used in the various papers was also compiled. It includes tools for causal discovery with Bayesian networks, structural causal modeling, and editing/analyzing directed acyclic graphs (DAGs) using DAGitty. These open-source tools allow flexibility for customization and improved security through collective code review. Interestingly, command-line interfaces are the preferred choice, as they provide speed and efficiency despite a steeper learning curve relative to GUI options.
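For readers unfamiliar with these tools, the brief sketch below builds the kind of causal DAG that software such as DAGitty edits and analyzes, here using the general-purpose networkx library as a stand-in; the variables and edges are hypothetical.

```python
# Minimal sketch of a causal DAG of the kind these tools operate on, built
# here with networkx as a generic stand-in; variables and edges are hypothetical.
import networkx as nx

dag = nx.DiGraph()
dag.add_edges_from([
    ("confounder", "treatment"),
    ("confounder", "outcome"),
    ("treatment", "outcome"),
])

# A causal model must be acyclic.
assert nx.is_directed_acyclic_graph(dag)

# Parents of a node correspond to its direct causes in the model.
print(sorted(dag.predecessors("outcome")))  # ['confounder', 'treatment']
```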
Conclusion
To sum up, this study explored the intricate interplay between causality and XAI by examining both theoretical and practical aspects. The investigation identified three key perspectives that illuminate the relationship between causality and XAI. The "Critics of XAI under the causality lens" perspective highlighted limitations in current XAI by questioning its foundation in causality.
In contrast, the "XAI for causality" viewpoint suggested that XAI could spark hypotheses about causal relationships despite its limitations. Finally, the "Causality for XAI" perspective advocated for causality as foundational to XAI and proposed three approaches for integration. While promising, this direction faces challenges; collectively, however, these perspectives provide valuable insights into the interplay between causality and XAI, with the "Causality for XAI" perspective holding significant potential for advancing the field.