In a paper published in the journal Humanities and Social Sciences Communications, researchers examined the implications of the black box problem in clinical artificial intelligence (AI) for health professional-patient relationships, drawing on African scholarship's perspectives on trust.
The black box problem complicates health professionals' ability to explain how AI incorporates patient values, potentially eroding trust as a relational and normative concept. Because the black box issue challenges transparency and accountability, further research is needed to understand its impact on the trust dynamics crucial to the global acceptance of clinical AI in medical contexts.
Background
Healthcare is positioned to become a significant consumer of AI, particularly clinical AI, owing to its potential to process complex datasets quickly and objectively, surpass human capabilities, and identify subtle or otherwise unnoticed patterns. Clinical AI promises to enhance precision patient care, disease detection, workflow optimization, and cost reduction. Studies have shown that clinical AI can outperform human surgeons in specific tasks. For healthcare to adopt AI widely, addressing the challenges posed by the black box problem is imperative.
Understanding the Black Box and Explainability
This section tackles the notions of the black box, the black box problem, and the importance of explainability, particularly in the context of health professional-patient relationships. The researchers classify a clinical AI system as a black box when it lacks interpretability or explainability. Notably, not all black-box clinical AI systems generate a black box problem, as there are several distinct ways in which a clinical AI may be unexplainable.
Firstly, a clinical AI falls into this category if it lacks post-hoc interpretability, meaning it cannot provide normative justification or transparent information about its purpose. Such information is essential for stakeholders, including health professionals and patients, to form informed judgments about its use. Design transparency and publicity lie at the core of this normative justification, because patients need to make informed decisions regarding their care, as patient-centered care models emphasize. Therefore, while clinicians may not need to understand the internal workings of AI the way engineers do, design publicity remains crucial to fulfil disclosure requirements and move toward patient-centered care while avoiding machine paternalism.
Secondly, a black-box clinical AI is unexplainable in principle when its internal operations remain impenetrable, inaccessible even to its developers, let alone to the patients and health professionals who must make informed decisions. This opacity poses ethical challenges, with one study highlighting the communication of AI predictions as the most common issue. Whether the system might become interpretable in the future does not change this assessment; what matters is the current inaccessibility of its inner workings. Because of its theory-agnostic nature, black-box clinical AI, often based on deep neural networks, creates a black box problem.
Although their inputs and outputs can be observed, these AI tools remain opaque regarding their decision-making processes and the statistical associations they rely on, making it impossible to understand how they arrive at predictions. This opacity becomes a significant issue when patients seek explanations for AI-driven recommendations, especially in situations that involve their values, preferences, or ethical concerns.
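To make this distinction concrete, the following minimal sketch illustrates the point in code: a small neural network whose inputs and outputs are fully observable, but whose internals offer no human-meaningful account of how a prediction was reached. The synthetic "patient" data, model configuration, and use of permutation importance as a post-hoc interpretability tool are illustrative assumptions, not methods from the paper under discussion.

```python
# A minimal sketch of the black box problem: inputs and outputs are
# observable, but the model's internals justify nothing to a clinician
# or patient. Synthetic data stands in for hypothetical clinical features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Hypothetical tabular "patient" data: 500 cases, 10 numeric features.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000,
                      random_state=0).fit(X_train, y_train)

# The input and output are fully observable...
print("prediction:", model.predict(X_test[:1]))
print("risk score:", model.predict_proba(X_test[:1]))

# ...but the only "explanation" inside the model is thousands of learned
# weights, which carry no human-meaningful justification.
print("learned weights:", sum(w.size for w in model.coefs_))

# Post-hoc interpretability tools estimate which inputs mattered most,
# yet they approximate the model's behaviour rather than reveal its
# reasoning, and cannot show how patient values were incorporated.
imp = permutation_importance(model, X_test, y_test, n_repeats=10,
                             random_state=0)
print("most influential feature index:", int(np.argmax(imp.importances_mean)))
```

Note how the post-hoc step yields only a ranking of inputs by influence; under the section's terminology, this falls short of the normative justification that design publicity and patient-centered disclosure demand.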
Some scholars assert that a lack of explainability should not hinder the use of clinical AI so long as strong statistical validation exists, much as people embrace specific medical treatments without completely comprehending their mechanisms. However, black-box clinical AI's implications and potential impact on healthcare and trust remain subjects for further exploration. Forthcoming research will delve into Afro-communitarian values to address these critical questions, while this article centers on the insights African scholarship on trust offers into the black box problem presented by clinical AI.
Affirming African Scholarship in Clinical AI
The researchers underscore the importance of incorporating African perspectives into discussions surrounding trust and the black box problem in clinical AI. They emphasize the lack of attention given to African scholarship in these discussions and the significance of understanding trust in the African healthcare context. The article raises critical questions about trust, its impact on the acceptance of clinical AI, and uniquely African conceptualizations of trust, focusing on relational, experience-based, and normative aspects. It stresses that clinical AI must be transparent and explainable to promote trust and genuine fiduciary relationships between health professionals and patients, especially within deep relationships.
In African scholarship, trust is linked to trustworthiness, emphasizing the role of personal knowledge and life experiences in forming trust. This perspective aligns trust with lived experience and moral obligation. However, when clinical AI operates as a black box, it hinders health professionals' ability to meet trust expectations, obscures the AI's workings, and compromises trust, accountability, and patient autonomy. Explainable clinical AI therefore becomes crucial to maintaining trust and ensuring ethical healthcare practice.
Conclusion
To sum up, this paper introduces an African perspective to the discussion on the challenges black-box clinical AI poses. Within the context of a black-box AI, the African conception of trust highlights issues related to vulnerability, autonomy, and trust within health professional-patient relationships. As clinical AI assumes an increasingly vital role in healthcare, it becomes crucial to understand how various stakeholders react to different AI formats and identify the requirements for global acceptance. Moreover, exploring the moral boundaries of healthcare automation is an essential direction for future research.