Navigating Bias in AI for Health Equity: An Extended TPLC Model

The paramount aim of healthcare stakeholders, including patients, clinicians, bioethicists, and artificial intelligence (AI) and machine learning (ML) innovators, is to achieve health equity. Although digital health technologies such as AI and ML can enhance access to care, they must grapple with bias. In a recent paper published in the journal npj Digital Medicine, researchers introduced an extended Total Product Lifecycle (TPLC) model for identifying and mitigating bias in healthcare AI.

Study: Navigating Bias in AI for Health Equity: An Extended TPLC Model. Image credit: Suri_Studio/Shutterstock

Background

Health equity, understood as the absence of avoidable disparities in health status and outcomes across different groups or regions, remains a critical goal. Health inequities often stem from unequal access to diagnosis and treatment, affecting conditions such as breast cancer, depression, and diabetic eye disease. This goal unites various stakeholders, including healthcare providers, patients, ethicists, legislators, regulators, and AI creators.

The rapid growth of digital health technologies and AI-enabled medical devices presents both challenges and opportunities. AI systems learn from data and aim to support healthcare professionals or directly assist patients, promising to improve access, care quality, and cost efficiency. However, integrating AI into healthcare can have unintended consequences, particularly when systems do not adhere to emerging evidence-based standards or are applied more broadly than initially intended.

Bias in the healthcare process

Undesirable bias in the development and application of AI-enabled medical devices can worsen existing health inequities or create new disparities. Bias in any part of the healthcare process can disproportionately affect different groups, and has historically resulted in poorer outcomes for underrepresented and underserved populations. Measuring bias involves quantifying its differential impact on groups, considering factors such as race, ethnicity, age, and gender; such measurements can help identify and address biases in AI systems, as illustrated in the sketch below.
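
To make "quantifying differential impact" concrete, here is a minimal sketch that computes the spread in true-positive rates (sensitivity) across demographic groups from model predictions. The data, group labels, and choice of metric are illustrative assumptions for this article, not the paper's own method.

```python
import numpy as np

def tpr_gap(y_true, y_pred, groups):
    """Return the spread in true-positive rates (sensitivity) across groups,
    plus the per-group rates themselves."""
    rates = {}
    for g in np.unique(groups):
        positives = (groups == g) & (y_true == 1)  # actual positives in group g
        rates[g] = y_pred[positives].mean() if positives.any() else float("nan")
    observed = [r for r in rates.values() if r == r]  # drop NaNs
    return max(observed) - min(observed), rates

# Hypothetical labels, predictions, and group memberships for illustration only.
y_true = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 0])
y_pred = np.array([1, 1, 0, 1, 0, 1, 0, 0, 0, 0])
groups = np.array(["A"] * 5 + ["B"] * 5)

gap, per_group = tpr_gap(y_true, y_pred, groups)
print(per_group)  # {'A': 1.0, 'B': 0.333...}
print(gap)        # 0.666... -> a large gap signals differential impact
```

A gap of this size would prompt stakeholders to investigate whether the training data, reference standards, or deployment context disadvantage the lower-performing group.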

The mitigation of AI bias begins with assessing and quantifying bias sources throughout the AI device's lifecycle. Stakeholders must determine the extent to which identified biases should be mitigated, considering the specific AI context and perceived benefit-to-burden ratios.

The AI total product lifecycle

Char et al.'s AI lifecycle analysis delineated phases from conception through deployment, paralleled by evaluation and oversight; the model facilitates ethical assessments and interdisciplinary collaboration in healthcare. Abramoff et al. extended it, linking specific metrics to each AI phase. An alternative bias-decomposition approach focuses solely on algorithmic bias; it neglects Char et al.'s pipeline phases and does not consider the impact of AI on care and clinical outcomes. Char et al.'s framework identifies AI system phases and associated ethical concerns, including "equity."

The FDA applied the Total Product Lifecycle (TPLC) approach to AI systems, and TPLC aligns with Char et al.'s pipeline phases. The researchers expanded TPLC for ethical analysis, integrating equity considerations and bias mitigation so that ethical principles can be optimized within each phase using relevant equity metrics. Each TPLC phase can affect health equity, with varying types and degrees of bias that can be quantified and mitigated. These effects are not confined to any single phase: even if earlier biases are mitigated, a subsequent phase can introduce new inequities. Considering equity and bias throughout the TPLC shows that upstream equity considerations can have downstream effects on health outcomes.

Different phases of TPLC

Conception Phase: Addressing bias when an AI-enabled medical device is first conceptualized is crucial. The target health conditions and care processes should be considered, since focusing on conditions affecting specific populations can enhance health equity. The device's initial use setting must balance generalizability with population-specific development, training, and validation. Disparities in healthcare access may introduce bias, and historical data may contain mislabeled or missing information that affects different population segments unevenly. Inclusive creator teams can help mitigate these biases.

Design Phase: The equity implications of how the device will be used should be considered. Beyond health conditions, factors such as operator skills, clinical workflow integration, usage time, and patient burden affect access. Ignoring ethical and clinical constraints carried over from the conception phase can solidify bias in the AI design. AI validity, transparency, and explainability are crucial for assessing equity implications; using racially invariant priors and considering the design's ripple effects are potential solutions.

Development Phase: Training dataset selection presents an opportunity for proactive equity inclusion. Relevant patient attributes should be represented in training and test datasets to optimize generalization (see the sketch below). Bias issues may arise from historical datasets, differential access to care, and eligibility criteria constraints, so ensuring that reference standards accurately reflect clinical outcomes is vital.
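
One concrete way to act on this during development is to audit how each patient subgroup is represented in a dataset relative to the population the device is intended to serve. The sketch below is a minimal illustration; the subgroup names, target shares, and tolerance are hypothetical.

```python
from collections import Counter

def representation_audit(sample_groups, target_shares, tolerance=0.05):
    """Flag subgroups whose share of the dataset deviates from the
    intended-use population share by more than `tolerance`."""
    counts = Counter(sample_groups)
    n = len(sample_groups)
    flagged = {}
    for group, target in target_shares.items():
        actual = counts.get(group, 0) / n
        if abs(actual - target) > tolerance:
            flagged[group] = {"actual": actual, "target": target}
    return flagged

# Hypothetical training-set labels and intended-use population shares.
train_groups = ["A"] * 70 + ["B"] * 20 + ["C"] * 10
population_shares = {"A": 0.55, "B": 0.30, "C": 0.15}

print(representation_audit(train_groups, population_shares))
# A is over-represented (0.70 vs 0.55), B under-represented (0.20 vs 0.30);
# C falls within tolerance and is not flagged.
```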

Validation Phase: Validation studies should align with the intended use and patient characteristics, drawing on diverse clinical sites; historical disadvantages may otherwise skew study site selection. Metrics for operator expertise and replicability should also be assessed.

Access and Monitoring Phases: These phases allow a comprehensive assessment of bias effects across the TPLC, estimating the real-world impact of the AI on health equity against its original vision. Monitoring can reveal access barriers, and adjustments can enhance accessibility. AI-induced bias can also occur through process disparities, and metrics such as population-achieved specificity and sensitivity can identify inequities (a sketch follows below).
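
For the monitoring phase, such metrics can be computed directly from post-deployment outcomes. Below is a hedged sketch deriving per-group achieved sensitivity and specificity from confusion-matrix counts; the counts and group names are invented for illustration.

```python
def per_group_performance(counts):
    """Compute achieved sensitivity and specificity per group from
    (TP, FN, TN, FP) confusion-matrix counts."""
    results = {}
    for group, (tp, fn, tn, fp) in counts.items():
        results[group] = {
            "sensitivity": tp / (tp + fn) if (tp + fn) else float("nan"),
            "specificity": tn / (tn + fp) if (tn + fp) else float("nan"),
        }
    return results

# Hypothetical post-deployment confusion-matrix counts per population group.
monitoring_counts = {
    "group_A": (90, 10, 180, 20),  # sensitivity 0.90, specificity 0.90
    "group_B": (60, 40, 150, 50),  # sensitivity 0.60, specificity 0.75
}

for group, metrics in per_group_performance(monitoring_counts).items():
    print(group, metrics)
# A persistent sensitivity gap of this size would flag an equity concern
# and prompt adjustments to restore accessibility and performance parity.
```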

Monitoring the real-world impact of AI is crucial, but current frameworks may be limited. Discussion among stakeholders is essential to addressing these challenges.

Conclusion

In summary, the study outlines the sources and impacts of AI bias on health equity and suggests mitigation approaches throughout the AI's total product lifecycle. The extended TPLC model serves as a starting point for discussions involving various stakeholders, including bioethicists, AI developers, regulators, patient advocacy groups, clinicians, providers, and value-based care organizations. Analyzing equity and mitigating bias within this framework will enhance understanding of how bias affects healthcare decisions and outcomes, allowing stakeholders to collaboratively identify and address biases and ultimately improve healthcare outcomes for everyone.

