In an article recently published in the journal Humanities and Social Sciences Communications, a researcher examined the need for robust ethics and governance frameworks to guide the development and use of artificial intelligence (AI) in healthcare.
Background
In healthcare, regulating the application of AI and mitigating its challenges at the international, regional, and national levels is a crucial and complex topic. AI systems can potentially improve clinical trials and research, enhance health outcomes, facilitate early diagnosis and detection for more effective treatment, and empower both patients and healthcare workers, particularly those who depend on health monitoring in developing countries or remote areas. However, AI also poses social, legal, and ethical risks, including environmental impact, threats to patient safety, algorithmic bias, and breaches of data privacy.
In this paper, the regulatory, ethical, and technical challenges of using AI in healthcare were identified and evaluated. The paper also examined the key challenges states confront in regulating AI use in healthcare, specifically the legal complexities and gaps that must be addressed to achieve better transparency and adequate regulation. Additionally, the author made a number of recommendations to mitigate risks, secure health data, and regulate AI use in healthcare more efficiently through harmonized standards and global cooperation under the World Health Organization (WHO), in line with the organization's constitutional mandate to regulate both public and digital health.
Challenges of AI in healthcare
The major challenges of using AI in healthcare span several areas: technical performance, errors, and misdiagnosis; discrimination and bias; accountability, transparency, and explainability; adoption and implementation; security; governance and regulation, including the ability to control third-party access to personal health data; access to and affordability of AI in developing countries; interoperability between operating systems such as Android and Apple's iOS; health equity; and data collection, storage, privacy, quality, accuracy, and availability.
These ethical, regulatory, and technical challenges must be addressed by developing concrete rules on issues such as data representativity, interoperability, and the conditions for accessing health data, and by implementing quality standards. Compliance with key regulations such as the Health Insurance Portability and Accountability Act (HIPAA), the General Data Protection Regulation (GDPR), the EU Data Act, and the EU AI Act is also necessary. Self-regulation must likewise be encouraged to build public confidence in AI-driven applications.
Ensuring personal health data privacy
Various measures can be implemented to ensure the security and privacy of personal health data. All stakeholders, including healthcare providers, companies, and regulatory authorities, are responsible for ensuring data confidentiality and patient privacy.
Potential safeguards for effective data protection include security awareness training, encryption, two-factor authentication, role-based access control, data access restrictions, virtual private networks (VPNs) to secure data in transit, routine risk assessments, and ongoing education of healthcare personnel.
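To make two of these safeguards concrete, the sketch below shows, in simplified form, how encryption at rest and role-based access control might look in code. It is a minimal illustration, not a compliance-ready design: the roles, record fields, and in-process key handling are hypothetical, and a real deployment would rely on audited key management and an established access-control framework.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# --- Encryption at rest: symmetric key kept outside the application ---
key = Fernet.generate_key()            # in practice, fetched from a key vault
cipher = Fernet(key)

record = b'{"patient_id": "A-1024", "diagnosis": "hypertension"}'  # hypothetical record
stored = cipher.encrypt(record)        # ciphertext written to storage
restored = cipher.decrypt(stored)      # recoverable only with the key

# --- Role-based access control: each role maps to a set of permitted actions ---
PERMISSIONS = {
    "physician": {"read", "write"},
    "nurse": {"read"},
    "billing": set(),                  # no access to clinical fields
}

def can_access(role: str, action: str) -> bool:
    """Allow an action only if the role's permission set includes it."""
    return action in PERMISSIONS.get(role, set())

assert can_access("physician", "write")
assert not can_access("billing", "read")
```

Restricting each role to the minimum set of actions it needs, as in the permission table above, is one practical way of implementing the data access restrictions mentioned earlier.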
Solutions to regulate AI systems
Although the regulation of AI in healthcare is a complex issue, potential solutions exist to regulate AI systems adequately: establishing legally binding standards and rules under the WHO, promoting accountability and transparency, strengthening regulatory oversight, fostering global cooperation, encouraging industry self-regulation, building an 'AI culture' among all key stakeholders, and ensuring the ethical use of personal health data.
For instance, the European Commission categorized AI systems into tiers of risk requiring more or less regulation. Under the AI Act, AI systems posing an unacceptable level of risk must be banned; high-risk systems, a category that covers many medical AI applications, must meet strict requirements such as conformity assessments; and limited-risk systems must comply with minimal transparency requirements to enable users to make informed decisions.
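As a schematic illustration of this tiered logic, the snippet below maps example systems to risk levels and the kind of obligation each tier attracts. The example systems and the wording of the obligations are illustrative only; assigning a real system to a tier is a legal determination under the Act.

```python
from enum import Enum

class Risk(Enum):
    """Simplified view of the AI Act's risk tiers and their obligations."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict requirements, e.g. conformity assessment and human oversight"
    LIMITED = "transparency obligations, e.g. disclosing that AI is in use"
    MINIMAL = "no specific obligations"

# Hypothetical classifications, for illustration only
EXAMPLES = {
    "social-scoring system": Risk.UNACCEPTABLE,
    "AI-based diagnostic tool": Risk.HIGH,
    "patient-facing chatbot": Risk.LIMITED,
    "appointment-reminder filter": Risk.MINIMAL,
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```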
Similarly, the WHO has listed crucial regulatory considerations on AI for health at the multilateral level. Its guiding principles for regulating AI in healthcare are promoting safety, protecting autonomy, ensuring transparency, fostering responsibility, promoting sustainable AI, and ensuring equity.
The WHO has also advocated for better cooperation and coordination between all stakeholders and states to ensure greater medical and clinical benefits for patients. These WHO-developed principles can assist stakeholders in developing responsible and ethical AI systems around five distinct themes: compliance with the guiding principles, engaging in dialogue and collaboration, balancing responsibility and innovation, building organizational awareness and culture, and using proper methods and tools.
AI ethics and governance
The existing legal framework applicable to global public health does not adequately protect privacy and personal data, which necessitates a new paradigm: reshaping global health governance and shifting towards a legal framework dedicated to AI in healthcare.
This new paradigm involves the implementation of legally binding rules by WHO members in the field of AI. States Parties can use the International Health Regulations (IHR) (2005) to strengthen their response to threats such as privacy breaches, and WHO members can draw on the IHR and EU regulations, such as the Data Act, the AI Act, and the GDPR, to negotiate new legally binding rules.
To summarize, AI must support better health systems and facilitate access to healthcare in line with the United Nations Sustainable Development Goals (UN SDGs), particularly in the least developed countries. European regulations can provide established standards and reliable legal frameworks that every stakeholder can draw on to build responsible and ethical AI systems. Additionally, WHO members must cooperate actively and develop legally binding rules and new guidelines under the IHR.