Interpretable Machine Learning Models in Critical Domains

In critical domains such as medicine, the criminal justice system, and financial markets, machine learning models have become pervasive. However, the inherent complexity and opacity of these models have raised concerns. While some propose model interpretability as a solution, there remains ambiguity regarding the precise definition and importance of interpretability. This ambiguity raises fundamental questions: what constitutes interpretability, and why is it crucial?

Image credit: Generated using DALL·E 3

What is Interpretability?

According to the Merriam-Webster dictionary, the verb "interpret" means to explain or to present something in understandable terms. For machine learning systems, the emphasis falls on delivering explanations to humans: presenting the basis of a model's outputs in a form that people can comprehend.

While the concept of an explanation may seem more intuitive than that of interpretability, precisely defining what constitutes an explanation remains a challenge. Psychology offers valuable insights here: explanations are recognized as pivotal components of understanding and knowledge exchange, and definitions range from deductive-nomological accounts to broader views encompassing implicit explanatory understanding. Interpretability, in turn, is often presented as a verification tool, serving as a proxy for various other criteria in machine learning systems.

These criteria encompass notions of fairness, privacy, safety, reliability, robustness, causality, usability, and trust. Some of these properties can be verified by formal proof or empirical validation. For auxiliary desiderata with elusive formal definitions, however, interpretation becomes a valuable tool for qualitatively assessing whether the criteria are satisfied and for uncovering potential concerns, aligning with the broader goals of machine learning systems.

History of Interpretable Models

Significant progress in research on interpretability has been made in recent years, but the foundation for learning interpretable models from data has deeper roots. Linear regression models date back to the early 19th century and have evolved into a wide range of regression analysis tools. These statistical models typically rely on distributional assumptions or pre-established model complexity constraints, emphasizing intrinsic interpretability.

In machine learning, a distinct approach prevails. Machine learning algorithms often employ non-linear, non-parametric methods, adjusting model complexity through hyperparameters selected via cross-validation. This flexibility can yield models with strong predictive performance but lower interpretability. Machine learning research gained momentum in the latter half of the 20th century with advancements in support vector machines, neural networks, and boosting.
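
As a concrete illustration of this workflow, the sketch below selects model complexity by cross-validation. It assumes scikit-learn, a synthetic dataset, and a gradient-boosted classifier; these are illustrative choices, not anything prescribed by the text.

# Minimal sketch: selecting model complexity via cross-validation.
# The dataset, estimator, and parameter grid are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Hyperparameters that control model complexity are chosen by cross-validation,
# rather than being fixed in advance as in classical statistical modelling.
grid = {"max_depth": [1, 2, 3, 5], "n_estimators": [50, 100, 200]}
search = GridSearchCV(GradientBoostingClassifier(random_state=0), grid, cv=5)
search.fit(X, y)

print(search.best_params_)   # complexity selected by the data
print(search.best_score_)    # cross-validated accuracy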

Interpretability research, although long a relatively small strand of work, has a considerable history within the machine learning community. Milestones such as the feature importance measure of random forests contributed to this field. The 2010s marked the rise of deep learning, followed by significant developments in interpretable machine learning (IML) around 2015, with the introduction of various model-agnostic and model-specific explanation methods. Regression analysis and rule-based machine learning remain active research areas and are increasingly merging, serving as foundational components for many IML approaches.
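
For readers unfamiliar with that milestone, the following minimal sketch shows the kind of built-in importance measure that random forests popularized. It assumes scikit-learn and a standard benchmark dataset and is an illustration, not the original formulation.

# Minimal sketch: impurity-based feature importance from a random forest,
# one early and still widely used interpretability measure.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(data.data, data.target)

# Rank features by the forest's built-in importance scores.
ranked = sorted(zip(data.feature_names, forest.feature_importances_),
                key=lambda pair: pair[1], reverse=True)
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")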

Importance of Interpretability

A well-performing machine learning model naturally raises the question of whether one should rely on it without examining the rationale behind its decisions. Answering that question requires acknowledging the complexity of real-world tasks: as Doshi-Velez and Kim assert, a single metric such as classification accuracy offers an incomplete portrayal of them.

The need for interpretability in predictive modeling arises from a fundamental trade-off. Should one merely seek predictions, such as the likelihood of customer churn or a drug's effectiveness, or should one strive to comprehend the underlying reasons behind these predictions, even if this comprehension diminishes predictive accuracy? The answer is contingent upon the specific context. Interpretability may be superfluous in scenarios where the 'why' remains inconsequential, such as in a movie recommender system. However, in more critical domains, understanding 'why' a prediction is made becomes paramount.

Interpretability addresses various facets of human nature. Curiosity and learning are central. Humans update their mental models when unexpected events occur, driven by a quest for explanations. Similarly, humans seek meaning in their experiences and desire to reconcile contradictions in their knowledge structures. Interpretability becomes essential when machine decisions impact people's lives, as it helps bridge the gap between expectation and reality.

The transition to quantitative methods and machine learning in scientific disciplines amplifies the significance of interpretability. In situations where safety is paramount, interpretability becomes indispensable for validating models and ensuring their correctness. Moreover, interpretability aids in identifying and mitigating biases that machine learning models may inherit from their training data.

In the process of integrating machines and algorithms into daily life, interpretability plays a pivotal role in enhancing social acceptance. People tend to anthropomorphize objects, attributing beliefs, desires, and intentions to them. Machines that explain their predictions tend to gain greater acceptance in such contexts.

Furthermore, interpretability is vital for examining traits such as fairness, privacy, reliability, causality, and trust in machine learning models. It allows for detecting bias, protecting sensitive data, ensuring model robustness, verifying causal relationships, and fostering human trust.

Properties of Interpretable Models

The discussion now delves into the techniques and model characteristics proposed for facilitating or constituting interpretations, broadly categorized into two groups: transparency and post hoc explanations.

In the context of machine learning models, transparency entails understanding the model's inner workings. It encompasses several dimensions, such as simulatability, decomposability, and algorithmic transparency.

Simulatability: This aspect defines model transparency by a person's ability to contemplate the entire model at once. It acknowledges that simpler models are often more interpretable, although what counts as a "reasonable" demand on human cognitive capacity remains subjective.

Decomposability: Here, transparency means that each part of the model should be easy to explain. This aligns with the notion of intelligibility: every node, parameter, and calculation should admit an intuitive explanation. It also requires the inputs themselves to be individually interpretable, which may disqualify models built on highly engineered or obscure features.

Algorithmic Transparency: Algorithmic transparency pertains to understanding the learning algorithm itself. For instance, in the case of linear models, one can comprehend the error surface and prove that training will converge to a unique solution. This transparency provides confidence in a model's behavior, especially in online settings.
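
A small numerical sketch can make this concrete. Assuming NumPy and a synthetic regression problem (both illustrative assumptions), the snippet below verifies that ordinary least squares has a closed-form solution and that a generic least-squares solver recovers the same unique coefficients, provided the design matrix has full column rank.

# Minimal sketch: for ordinary least squares, training has a closed-form,
# unique solution (when X has full column rank), which is one sense in which
# the learning algorithm itself is transparent.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

# Normal equations: beta = (X^T X)^{-1} X^T y
beta_closed_form = np.linalg.solve(X.T @ X, X.T @ y)

# Any reasonable solver converges to the same coefficients.
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)

print(np.allclose(beta_closed_form, beta_lstsq))  # True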

Post hoc interpretations provide an alternative approach to extracting information from machine learning models. These interpretations often do not elucidate precisely how the model functions but offer valuable information. Common approaches to post hoc interpretations include text explanations, visualization, local explanations, and explanation by example.

Text Explanations: This approach mirrors the way humans justify decisions verbally. It involves training a separate model to generate natural-language explanations for the primary model's decisions. These textual explanations may not faithfully reflect the model's actual decision process, but they can still offer useful insights.

Visualization: Visualizations, such as t-distributed Stochastic Neighbor Embedding (t-SNE) projections of high-dimensional representations, aim to provide qualitative insight into what a model has learned. They help identify patterns in the data, even from high-level representations, and can reveal how the model organizes its inputs.
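
As a hedged example, the sketch below projects high-dimensional features to two dimensions with t-SNE for visual inspection. It assumes scikit-learn, matplotlib, and the digits dataset as a stand-in for a model's internal representations.

# Minimal sketch: projecting high-dimensional representations to 2-D with
# t-SNE for qualitative inspection. The digits dataset stands in for a
# model's learned features.
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

digits = load_digits()
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(digits.data)

plt.scatter(embedding[:, 0], embedding[:, 1], c=digits.target, cmap="tab10", s=8)
plt.title("t-SNE projection of digit features")
plt.show()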

Local Explanations: Local explanations, like saliency maps, highlight regions of the input that have the most influence on the model's output. However, these are local interpretations and may change with minor input variations. In contrast, linear models capture global relationships between inputs and outputs.
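
Gradient-based saliency maps are usually computed for neural networks, but the local idea can be sketched in a model-agnostic way. The snippet below is an illustrative assumption, using scikit-learn and finite differences rather than a standard saliency implementation: it measures how the predicted class probability changes when each feature of a single instance is perturbed.

# Minimal sketch: a crude, model-agnostic "saliency" via finite differences,
# measuring how sensitive the predicted probability is to each input feature
# around one instance. Gradient-based saliency maps for neural networks
# follow the same idea using exact gradients.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

x = X[0].copy()
cls = model.predict([x])[0]
eps = 0.05

saliency = np.zeros_like(x)
for j in range(len(x)):
    x_plus, x_minus = x.copy(), x.copy()
    x_plus[j] += eps
    x_minus[j] -= eps
    # Central difference of the predicted class probability.
    saliency[j] = (model.predict_proba([x_plus])[0, cls]
                   - model.predict_proba([x_minus])[0, cls]) / (2 * eps)

print(saliency)  # per-feature local influence; valid only near this instance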

Explanation by Example: This method shows which examples the model considers most similar. It involves identifying the k-nearest neighbors based on their proximity in the learned model space. This approach draws parallels with how humans justify actions by analogy and is supported by case studies, especially in medical fields.
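
A minimal sketch of this idea, assuming scikit-learn and using PCA components as a stand-in for a learned representation space (in practice one might use a network's hidden activations), retrieves the nearest training examples to a query:

# Minimal sketch: explanation by example, retrieving the k nearest training
# points to a query in an (approximate) learned representation space.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

digits = load_digits()
Z = PCA(n_components=20, random_state=0).fit_transform(digits.data)

nn = NearestNeighbors(n_neighbors=3).fit(Z[1:])   # "training" points
_, idx = nn.kneighbors(Z[:1])                     # query: the first instance

# The model's "justification": these training examples look most similar.
print("Query label:", digits.target[0])
print("Neighbour labels:", digits.target[1:][idx[0]])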

Challenges of Interpretable Models

Statistical Uncertainty and Inference: Many interpretation methods, such as permutation feature importance and Shapley values, provide explanations but rarely quantify the uncertainty of those explanations. Current research strives to quantify this uncertainty, focusing on feature importance, layer-wise relevance propagation, and Shapley values. To ensure the integrity of interpretable models, it is essential to articulate structural and distributional assumptions, whether in classical statistical models or machine learning algorithms.
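
One simple, widely available step in this direction is to repeat the permutation procedure and report the spread of the resulting importance scores. The sketch below does that, assuming scikit-learn's permutation_importance utility and the diabetes dataset; both are illustrative choices rather than methods prescribed by the text.

# Minimal sketch: permutation feature importance with repeated shuffles, so
# each importance estimate comes with a simple measure of its spread.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)

# n_repeats > 1 yields a mean and standard deviation per feature, a first
# step towards quantifying the uncertainty of the explanation itself.
result = permutation_importance(model, X_te, y_te, n_repeats=30, random_state=0)
for j in result.importances_mean.argsort()[::-1][:5]:
    print(f"feature {j}: {result.importances_mean[j]:.3f} "
          f"+/- {result.importances_std[j]:.3f}")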

Causal Interpretation: Ideally, models should capture the true causal structure of phenomena, but most statistical learning procedures primarily reflect correlation structures. Balancing the model's ability to predict outcomes with understanding causal relationships is still a challenge. Research has begun to explore when causal interpretations are permissible, with initial steps taken for permutation feature importance and Shapley values.

Definition of Interpretability: The lack of a clear definition for "interpretability" hinders the field's progress. Evaluating interpretability is complex, as there is no ground truth for explanations. Emerging quantifiable aspects of interpretability include sparsity, interaction strength, fidelity, sensitivity to perturbations, and simulatability. Two main evaluation approaches are objective, mathematically quantifiable metrics and human-centered evaluations involving domain experts or laypersons. The challenge is to establish best practices for evaluating interpretation methods and their explanations.
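
Fidelity is one of the more readily quantifiable of these aspects. As a hedged illustration (assuming scikit-learn, a synthetic dataset, and a shallow decision tree as the surrogate), fidelity can be measured as the agreement between a simple surrogate model and the black-box model it is meant to explain:

# Minimal sketch: "fidelity" as one quantifiable aspect of interpretability,
# measured as the agreement between a shallow surrogate and the black box.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

black_box = RandomForestClassifier(random_state=0).fit(X, y)
y_bb = black_box.predict(X)

# The surrogate is trained on the black box's predictions, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y_bb)

fidelity = accuracy_score(y_bb, surrogate.predict(X))
print(f"Surrogate fidelity to the black box: {fidelity:.2f}")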

References and Further Readings

Doshi-Velez, F., and Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608. DOI: https://doi.org/10.48550/arXiv.1702.08608

Lipton, Z. C. (2018). The mythos of model interpretability: In machine learning, the concept of interpretability is both important and slippery. Queue, 16(3), 31-57. DOI: https://doi.org/10.1145/3233231

Molnar, C. (2020). Interpretable machine learning. https://christophm.github.io/interpretable-ml-book (Accessed on 20 Oct 2023).

Gao, L., and Guan, L. (2023). Interpretability of Machine Learning: Recent Advances and Future Prospects. arXiv preprint arXiv:2305.00537. DOI: https://doi.org/10.48550/arXiv.2305.00537

Last Updated: Oct 24, 2023

Written by Dr. Sampath Lonka

Dr. Sampath Lonka is a scientific writer based in Bangalore, India, with a strong academic background in Mathematics and extensive experience in content writing. He has a Ph.D. in Mathematics from the University of Hyderabad and is deeply passionate about teaching, writing, and research. Sampath enjoys teaching Mathematics, Statistics, and AI to both undergraduate and postgraduate students. What sets him apart is his unique approach to teaching Mathematics through programming, making the subject more engaging and practical for students.
