The paramount aim of healthcare stakeholders, including patients, clinicians, bioethicists, and artificial intelligence (AI) and machine learning (ML) innovators, is to achieve health equity. Although digital health technologies such as AI and ML can enhance access, they must also contend with bias. In a recent paper published in the journal npj Digital Medicine, researchers introduced an extended Total Product Lifecycle (TPLC) model for AI-enabled healthcare devices.
Background
Health equity, understood as the absence of avoidable disparities in health status and outcomes across groups or regions, remains a critical goal. Health inequities often stem from unequal access to diagnosis and treatment for conditions such as breast cancer, depression, and diabetic eye disease. This goal unites stakeholders including healthcare providers, patients, ethicists, legislators, regulators, and AI creators.
The rapid growth of digital health technologies and AI-enabled medical devices presents both challenges and opportunities. AI systems learn from data and aim to support healthcare professionals or assist patients directly. These technologies promise to improve access, quality of care, and cost efficiency. However, integrating AI into healthcare can have unintended consequences, particularly when systems do not adhere to emerging evidence-based standards or are applied more broadly than originally intended.
Bias in the healthcare process
Undesirable bias in the development and application of AI-enabled medical devices can worsen existing health inequities or create new disparities. Bias in any part of the healthcare process can disproportionately affect different groups and has historically resulted in poorer outcomes for underrepresented and underserved populations. Measuring bias involves quantifying its differential impact on groups, considering factors such as race, ethnicity, age, and gender; such measurements can help identify and address biases in AI systems, as sketched below.
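As an illustration of what such a measurement can look like in practice, the following minimal Python sketch compares a model's sensitivity and flag rate across two hypothetical demographic groups. The column names and toy data are assumptions for illustration, not taken from the paper.

```python
import pandas as pd

# Hypothetical audit data: one row per patient, with a demographic group,
# the ground-truth outcome, and the model's prediction (all illustrative).
df = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "label":      [1, 0, 1, 0, 1, 0, 1, 0],
    "prediction": [1, 0, 0, 0, 1, 1, 1, 0],
})

def per_group_rates(frame: pd.DataFrame) -> pd.Series:
    """Sensitivity (true-positive rate) and flag rate for one group."""
    tp = ((frame.label == 1) & (frame.prediction == 1)).sum()
    fn = ((frame.label == 1) & (frame.prediction == 0)).sum()
    return pd.Series({
        "sensitivity": tp / (tp + fn),
        "flag_rate": frame.prediction.mean(),
    })

rates = df.groupby("group")[["label", "prediction"]].apply(per_group_rates)
print(rates)

# A large gap between groups on either metric signals potential bias.
print("sensitivity gap:", rates.sensitivity.max() - rates.sensitivity.min())
```

In a real audit, these per-group comparisons would be repeated across each attribute of interest (race, ethnicity, age, gender) and interpreted against the device's intended use.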
The mitigation of AI bias begins with assessing and quantifying bias sources throughout the AI device's lifecycle. Stakeholders must determine the extent to which identified biases should be mitigated, considering the specific AI context and perceived benefit-to-burden ratios.
The AI total product lifecycle
The AI lifecycle analysis by Char et al. delineates phases from conception through deployment, accompanied by parallel evaluation and oversight; this model facilitates ethical assessment and interdisciplinary collaboration in healthcare. Abramoff et al. extended it by linking concrete metrics to each AI phase. An alternative approach, bias decomposition, focuses solely on algorithmic bias; it neglects the pipeline phases of Char et al. and does not consider the impact of AI on care and clinical outcomes. Char et al.'s framework identifies AI system phases and their associated ethical concerns, including "equity."
The FDA applied the Total Product Lifecycle (TPLC) approach to AI systems, and the TPLC aligns with Char et al.'s pipeline phases. The researchers expanded the TPLC for ethical analysis, integrating equity considerations and bias mitigation so that ethical principles can be optimized within each phase using relevant equity metrics. Each TPLC phase can affect health equity, with types and degrees of bias that vary by phase and can be quantified and mitigated. Equity impacts are not confined to any single phase: even if earlier biases are mitigated, subsequent phases can introduce new inequities. Considering equity and bias throughout the TPLC shows that upstream equity considerations can have downstream effects on health outcomes.
Different phases of TPLC
Conception Phase: Addressing bias begins when an AI-enabled medical device is conceptualized, starting with the target health conditions and care processes. Focusing on conditions that disproportionately affect specific populations can enhance health equity. The device's initial use setting must balance generalizability against population-specific development, training, and validation. Disparities in healthcare access may introduce bias, and historical data may contain mislabeled or missing information that affects population segments differently. Inclusive creator teams can help mitigate these biases.
Design Phase: The equity implications of how the device will be used should be considered. Beyond health conditions, factors such as operator skill, clinical workflow integration, time of use, and patient burden all affect access. Ignoring the ethical and clinical constraints identified in the conception phase can entrench bias in the AI design. AI validity, transparency, and explainability are crucial for assessing equity implications; using racially invariant priors and weighing the design's ripple effects are potential solutions.
Development Phase: Training dataset selection presents an opportunity for proactive equity inclusion. Relevant patient attributes should be represented in both training and test datasets to optimize generalization, as sketched below. Bias can arise from historical datasets, differential access to care, and eligibility criteria constraints. Ensuring that reference standards accurately reflect clinical outcomes is vital.
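The sketch below shows one common way to keep patient attributes proportionally represented when partitioning data, using scikit-learn's stratified splitting. The cohort, the "age_band" attribute, and the stratification cells are illustrative assumptions, not the authors' method.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Illustrative cohort; in practice this would come from curated clinical data.
data = pd.DataFrame({
    "age_band": ["<40", "<40", "40-64", "40-64", "65+", "65+"] * 4,
    "feature":  range(24),
    "label":    [0, 1] * 12,
})

# Stratify on the joint outcome x attribute cell so small subgroups remain
# proportionally represented in both splits (each cell needs >= 2 members).
strata = data["label"].astype(str) + "_" + data["age_band"]

train, test = train_test_split(
    data, test_size=0.25, stratify=strata, random_state=0
)

# Verify subgroup representation before training begins.
print(train.groupby(["age_band", "label"]).size())
print(test.groupby(["age_band", "label"]).size())
```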
Validation Phase: Factors in this phase should align with the device's intended use and the characteristics of its target patients. Validation studies should span diverse clinical sites, since historical disadvantages may skew site selection. Metrics for operator expertise and replicability should also be assessed.
Access and Monitoring Phases: These phases assess the cumulative effects of bias across the TPLC. Here, the real-world impact of the AI's original vision on health equity can be estimated. Monitoring can reveal access barriers, and adjustments can enhance accessibility. AI-induced bias can also arise from disparities in the care process. Metrics such as population-achieved specificity and sensitivity, illustrated below, can surface inequities.
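As a rough illustration of how population-achieved sensitivity and specificity might be tracked after deployment, the sketch below accumulates per-subgroup confusion-matrix counts as adjudicated outcomes arrive. The subgroup labels and event stream are hypothetical.

```python
from collections import defaultdict

# Per-subgroup confusion-matrix counts, updated as real-world outcomes
# are adjudicated (subgroups here are hypothetical).
counts = defaultdict(lambda: {"tp": 0, "fp": 0, "tn": 0, "fn": 0})

def record(group: str, label: int, prediction: int) -> None:
    """Update the confusion-matrix counts for one adjudicated case."""
    c = counts[group]
    if label == 1:
        c["tp" if prediction == 1 else "fn"] += 1
    else:
        c["fp" if prediction == 1 else "tn"] += 1

def achieved_rates(group: str) -> tuple[float, float]:
    """Population-achieved sensitivity and specificity for one subgroup."""
    c = counts[group]
    sens = c["tp"] / (c["tp"] + c["fn"]) if c["tp"] + c["fn"] else float("nan")
    spec = c["tn"] / (c["tn"] + c["fp"]) if c["tn"] + c["fp"] else float("nan")
    return sens, spec

# Example stream of (subgroup, adjudicated label, device output) events.
for event in [("urban", 1, 1), ("urban", 0, 0), ("rural", 1, 0), ("rural", 0, 0)]:
    record(*event)

for group in counts:
    print(group, achieved_rates(group))
```

Diverging achieved rates between subgroups served in the real world, as opposed to the validation cohort, are exactly the kind of signal this monitoring phase is meant to catch.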
Monitoring the real-world impact of AI is crucial, but current frameworks may be limited. Discussion among stakeholders is essential to addressing these challenges.
Conclusion
In summary, the current study outlines the sources and impacts of AI bias on health equity and suggests mitigation approaches throughout the AI's TPLC. The model serves as a starting point for discussions among stakeholders, including bioethicists, AI developers, regulators, patient advocacy groups, clinicians, providers, and value-based care organizations. Analyzing equity and mitigating bias within the expanded TPLC framework will deepen understanding of how bias affects healthcare decisions and outcomes, enabling stakeholders to collaboratively identify and address biases and ultimately improve healthcare outcomes for everyone.