The Impact of AI Bias on Healthcare Decision-Making: A Comprehensive Study

In a recent publication in the journal Scientific Reports, researchers examined the influence of biased artificial intelligence (AI) recommendations on human decision-making in medical diagnostics. The research comprised three experiments, each exploring the effects of biased AI systems on participants' decision-making.

Background

Study: The Impact of AI Bias on Healthcare Decision-Making: A Comprehensive Study. Image credit: Ground Picture/Shutterstock

In recent decades, AI tools designed to aid decision-making have proliferated across professional domains including law, personnel selection, and healthcare. AI-based decision support systems have emerged in the healthcare industry with the promise of reducing clinical decision-making errors. This optimism is grounded in AI's remarkable precision in tasks such as image-based diagnostics, outcome prediction, and treatment recommendation. By offering data-driven recommendations, AI aids healthcare professionals in tasks such as diagnosis and triage. This collaboration between human expertise and AI aims to enhance clinical decision-making and mitigate human cognitive biases and fatigue, ultimately improving diagnostic accuracy.

The incorporation of AI decision support systems in clinical settings has prompted questions about the likelihood of bias in medical judgments made with their help. People often perceive AI algorithms as objective and impartial, but AI systems inherit human-made errors and biases. While AI can address some human limitations, it introduces new challenges in human-AI interaction. Biased AI systems may even compromise the accuracy of clinical decisions when used collaboratively with humans.

Impact of AI Assistance on Decision-Making Accuracy

The first experiment examined the influence of explicitly biased recommendations from a fictitious AI on participants' behavior in a medical-themed classification task simulating image-based diagnosis. All 169 participants gave informed consent, and the study was approved by the University of Deusto's Ethical Review Board. Participants were split into two groups: one that received no AI assistance (n = 84) and one that received AI assistance (n = 85). Both groups completed a classification task using fictitious tissue-sample images to diagnose a fictional disease, Lindsay syndrome.

Each tissue sample contained 2,500 cells of two distinct colors, light yellow and dark pink, randomly distributed within the matrix so that no two samples were identical. The proportion of dark to light cells varied across samples (80/20, 70/30, 60/40, 40/60, 30/70, and 20/80), creating a diverse set of stimuli with varying levels of discrimination difficulty.
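The paper's own stimulus-generation code is not reproduced here, but the procedure described above can be sketched as follows. The function name, the use of Python's `random` module, and the seeding scheme are illustrative assumptions; only the 2,500-cell count and the proportion set come from the study description.

```python
import random

# Dark-cell proportions described in the study's stimulus set
PROPORTIONS = [0.80, 0.70, 0.60, 0.40, 0.30, 0.20]

def make_tissue_sample(dark_fraction, n_cells=2500, seed=None):
    """Return a randomly shuffled list of 'dark'/'light' cells.

    Random shuffling of the cell positions ensures that no two
    generated samples are identical, as in the study.
    """
    rng = random.Random(seed)
    n_dark = round(dark_fraction * n_cells)
    cells = ["dark"] * n_dark + ["light"] * (n_cells - n_dark)
    rng.shuffle(cells)
    return cells

sample = make_tissue_sample(0.40, seed=1)
print(sample.count("dark"))  # 1000 dark cells in a 40/60 sample
```

Varying `dark_fraction` across the listed proportions yields stimuli that range from easy (80/20) to hard (40/60, 60/40) discriminations.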

Results showed that the AI-assisted group, influenced by the biased AI recommendations, made significantly more errors in classifying the 40/60 samples than the unassisted group. Participants generally perceived the AI as helpful and reported moderate trust in AI for healthcare. Thus, the first experiment demonstrated that participants followed the biased AI recommendations, leading to increased errors in a medical decision-making task.
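The AI in the study was fictitious, and its exact decision rule is not reported here. The following toy sketch, with all function names assumed for illustration, shows how a recommender that answers correctly except on one stimulus class (the 40/60 samples) could produce the systematic error pattern described above.

```python
def majority_label(cells):
    """Ground-truth label: which cell color is in the majority."""
    return "positive" if cells.count("dark") > len(cells) / 2 else "negative"

def biased_ai_recommendation(cells):
    """Hypothetical biased recommender: correct on most samples, but
    systematically mislabels 40/60 dark/light samples as 'positive',
    mimicking the systematic error described in the study."""
    dark_fraction = cells.count("dark") / len(cells)
    if abs(dark_fraction - 0.40) < 0.01:  # the biased stimulus class
        return "positive"                 # systematic wrong answer
    return majority_label(cells)

sample_40_60 = ["dark"] * 1000 + ["light"] * 1500  # a 40/60 sample
print(majority_label(sample_40_60))            # "negative" (correct)
print(biased_ai_recommendation(sample_40_60))  # "positive" (biased)
```

A participant who simply follows such recommendations would reproduce the recommender's errors on exactly the 40/60 trials, which is the pattern the first experiment observed.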

The second experiment aimed to validate the first's findings and to examine more thoroughly how biased AI recommendations affect people's behavior. The experiment included 199 participants, divided between a group that received AI assistance (n = 100) and one that did not (n = 99). The procedure was similar to that of the first experiment, with participants performing a classification task on tissue samples. However, the second experiment introduced a second phase of 25 trials in which both groups sorted tissue samples without AI assistance. This phase also included ambiguous stimuli with a 50/50 dark/light cell ratio.

Results showed that even in Phase 2, where both groups performed without AI support, the AI-assisted group made significantly more misclassifications of the 40/60 stimuli than the unassisted group. Additionally, the AI-assisted group tended to classify the ambiguous 50/50 stimuli in the same direction as the AI bias from the previous phase, indicating a generalization of the inherited bias to novel stimuli.

The third experiment aimed to replicate the bias inheritance observed in Experiment 2, focusing on the AI-assisted group's behavior during the unassisted phase with 40/60 and 50/50 stimuli. It also explored the impact of the order of the AI-assisted and unassisted phases, hypothesizing that starting with the unassisted phase might protect against the AI's biased recommendations. A total of 197 participants were randomly assigned to two groups: AI-assisted → unassisted (n = 98) and unassisted → AI-assisted (n = 99). The procedure was similar to that of the previous experiments, with slight modifications to fit this design.

Results showed that participants in the AI-assisted → unassisted group made more errors on the 40/60 trials when they switched to the unassisted phase. In contrast, the group that had first completed the task without assistance did not show a noticeably lower error rate when it switched to the assisted phase. This suggests that the AI-assisted → unassisted group reproduced the systematic errors of the AI recommendations during the unassisted phase, supporting the inheritance bias effect.

Additionally, in the unassisted phase, the average number of biased classifications of the 50/50 ambiguous stimuli was higher in the AI-assisted → unassisted group than in the unassisted → AI-assisted group, replicating the findings of Experiment 2.

The third experiment provided further evidence that humans inherit AI bias, with the order of the AI-assisted and unassisted phases influencing participants' behavior. In this experiment, participants' trust in AI appeared to have a smaller impact on their responses.

Conclusion

In summary, the researchers illustrated that biased AI recommendations have a detrimental impact on human decision-making, especially in professional domains such as healthcare. Furthermore, these biases persist, influencing human behavior even after the interaction with the biased AI system has ended, and extend to novel situations. This phenomenon, termed the "inheritance bias effect," was explored through three experiments.


Written by

Dr. Sampath Lonka

Dr. Sampath Lonka is a scientific writer based in Bangalore, India, with a strong academic background in Mathematics and extensive experience in content writing. He has a Ph.D. in Mathematics from the University of Hyderabad and is deeply passionate about teaching, writing, and research. Sampath enjoys teaching Mathematics, Statistics, and AI to both undergraduate and postgraduate students. What sets him apart is his unique approach to teaching Mathematics through programming, making the subject more engaging and practical for students.

