In an article recently published in the journal npj Digital Medicine, the authors proposed nine ethical principles to govern generative artificial intelligence (AI) applications in healthcare by adopting and expanding existing ethical principles from the military to healthcare.
Background
Generative AI has received significant attention in healthcare for several applications, including evidence-based medicine summarization, clinical documentation, and making high-quality healthcare more accessible to everyone. The emerging technology, coupled with the growing availability of health data, such as medical images, electrocardiograms, and electronic health records, and increasingly accessible computing power, could revolutionize healthcare.
With the growing use of AI in fields such as healthcare and the military, organizations are increasingly articulating ethical principles and outlining the responsibilities associated with applying AI to their operations. Organizations such as the American Medical Association (AMA), the World Health Organization (WHO), the United States (U.S.) Department of Defense (DoD), and the North Atlantic Treaty Organization (NATO) have published ethical principles for AI.
In the military, organizations such as the DoD and NATO emphasize principles such as national security and defense and mission effectiveness, while in healthcare, organizations such as the WHO prioritize principles such as privacy, autonomy, and empathy. Accountability, governability, equity, traceability, reliability, and lawfulness are emphasized in both fields.
The GREAT PLEA
In this paper, the authors explored ethical principles from the military perspective and proposed the Governability, Reliability, Equity, Accountability, Traceability, Privacy, Lawfulness, Empathy, and Autonomy (GREAT PLEA) ethical principles for generative AI in healthcare.
These principles must be prioritized when implementing and using generative AI in practical healthcare settings to effectively address its ethical challenges. The GREAT PLEA ethical principles can protect clinicians and patients from unforeseen consequences.
Moreover, these principles can guide the continuous evaluation of generative AI for bias, errors, and other concerns of caregivers and patients about their relationship with AI. The GREAT PLEA principles can be enforced through cooperation with lawmakers, the establishment of standards for users and developers, and partnerships with recognized governing bodies such as the AMA or WHO.
Governability: AI governability standards established by organizations such as NATO and the DoD emphasize that humans must retain the ability to prevent and identify unintended consequences while AI systems perform their intended functions. Specifically, human intervention to deactivate or disengage a deployed AI system must be ensured whenever it exhibits unintended behavior. These standards can be adopted for generative AI in healthcare, which is especially important when several hospitals use the same generative system.
Reliability: Output variation and hallucination are major drawbacks of existing generative AI models, which undermine their ability to generate reliable outputs and erode physicians' trust in generative AI systems. The DoD's reliability principle can be adopted to establish AI use cases and monitor them during development and deployment, resolving system deterioration, failures, and errors before they cause accidents.
Equity: Generative AI models pre-trained on massive datasets carry a higher risk of data bias, which can exacerbate existing inequities in healthcare for groups that are socioeconomically disadvantaged, under-represented, marginalized, or have low health literacy. Thus, future models must incorporate the unique social situations of these groups to ensure equity.
Accountability: The Responsibility and Accountability principle outlined by NATO states that AI applications will be developed mindfully, with human responsibility integrated so that humans remain accountable for actions taken by or with the application. Adopting this principle ensures human involvement and accountability with AI in healthcare, which is critical when a clinician uses generative AI to treat patients.
Autonomy: The significant progress of generative AI in recent years has increased the need to protect human autonomy when it is used in healthcare. Human autonomy implies that patients receive care based on their values and preferences, and that clinicians deliver treatment independently, without interference from the generative AI system. Autonomy in decision-making must remain patient-focused to prevent poor clinical outcomes and adverse events.
Traceability and Privacy: The growing use of generative AI in healthcare has increased the importance of proper documentation, which ensures that all end users are properly educated on the capabilities and limitations of the systems. Privacy is essential in most medical and military applications owing to their confidential nature. Generative AI systems used in healthcare must comply with the Health Insurance Portability and Accountability Act (HIPAA) for data disclosures and be secured against data breaches.
Lawfulness and Empathy: Lawfulness refers to adherence to international and national laws, including human rights law. Adopting it for the implementation of generative AI in healthcare can protect clinicians, AI developers, and patients from unintended consequences arising from legal challenges to generative AI systems. In addition, a framework for human involvement in healthcare generative AI applications can be created to ensure that patients receive helpful and empathetic care.
Journal reference:
- Oniani, D., Hilsman, J., Peng, Y., Poropatich, R. K., Pamplin, J. C., Legault, G. L., & Wang, Y. (2023). Adopting and expanding ethical principles for generative artificial intelligence from the military to healthcare. npj Digital Medicine, 6(1), 1-10. https://doi.org/10.1038/s41746-023-00965-x, https://www.nature.com/articles/s41746-023-00965-x