In an article in press with the journal npj Digital Medicine, the authors discussed the need for regulatory oversight of large language models (LLMs) and generative artificial intelligence (AI) in healthcare.
Background
The rapid advancements in AI technology have enabled the development of several sophisticated LLMs, such as Bard and GPT-4. These LLMs can be implemented in healthcare settings to summarize research papers, obtain insurance pre-authorization, and facilitate clinical documentation.
LLMs can also improve research equity and scientific writing, enhance personalized learning in medical education, streamline healthcare workflows, serve as chatbots that answer patient queries and address patient concerns, and assist physicians in diagnosing conditions based on laboratory results and medical records.
Although LLMs offer transformative potential in healthcare, they must be implemented only after considering the implications for public health and patient outcomes, as these AI-based tools are trained differently from regulated AI-based medical technologies. Additionally, the versatility of LLMs has significantly amplified existing concerns about AI.
Thus, the regulation of LLMs in healthcare and medicine without affecting their transformative potential is crucial to secure patient privacy, ensure safety, pre-empt bias and unfairness, and maintain ethical standards. Regulatory oversight can allow patients to utilize LLMs without compromising their privacy/data and provide assurance to medical professionals about this technology.
In this paper, the authors reviewed the potential benefits and risks of using LLMs in healthcare settings and discussed the necessity of regulating LLMs to ensure public trust in this technology and mitigate potential risks.
The need for regulatory oversight of LLMs
LLMs differ significantly from existing deep learning techniques in potential impact, capabilities, and scale. For instance, LLMs are trained on massive datasets using billions of parameters, resulting in exceptional complexity and necessitating regulatory oversight that accounts for unintended consequences as well as challenges of interpretability and fairness.
Similarly, LLMs require adaptable regulatory oversight that can address different industry-specific concerns, as they possess versatile capabilities spanning multiple domains, such as education, finance, and healthcare.
LLMs can adjust their responses based on evolving contexts and user inputs in real time. This dynamic behavior necessitates regulatory oversight that includes continuous evaluation and monitoring mechanisms to ensure adherence to ethical guidelines and responsible usage.
Moreover, extensive adoption of LLMs can fundamentally transform multiple societal aspects. Thus, regulatory oversight is required to address the economic, social, and ethical implications due to LLM adoption. Regulatory oversight must also establish effective frameworks to secure sensitive data and prevent misuse or unauthorized access of LLMs to address data security/privacy concerns, as these models rely on extensive training data.
Pre-LLM regulatory oversight of AI
The United States (US) Food and Drug Administration (FDA) is leading discussions on regulatory oversight and has established regulations for several emerging technologies, including AI-based medical tools and three-dimensional (3D)-printed medications.
The FDA began regulating software as a medical device (SaMD) with the rising adoption of digital health technology. SaMD refers to software solutions that perform medical functions and are used in monitoring, treating, diagnosing, or preventing conditions and diseases.
The FDA established a regulatory framework that specifically addresses machine learning (ML) and AI technologies based on the SaMD approach. The proposed framework emphasized continuous improvement of these technologies, real-world performance monitoring, ML/AI algorithm updates, and transparency.
Although the FDA has been able to regulate AI, two technological issues remain unregulated to date: adaptive algorithms that adjust their behavior or parameters based on input data and task performance, and the autodidactic (self-teaching) function of deep learning.
Benefits and risks of using LLMs in healthcare
LLMs can be employed in several applications in the healthcare sector, which is the key advantage of this technology. Medical professionals can use LLMs to create discharge summaries, generate clinical notes, suggest treatment options, and design treatment plans. They can also use LLMs for diagnostic assistance and radiology interpretation.
Similarly, patients can utilize LLMs to analyze laboratory results and disease descriptions, interpret physician notes, obtain personalized health recommendations and predictions, assess symptoms, and analyze wearable data. Additionally, they can use LLMs for rehabilitation guidance and medication adherence.
However, the application of LLMs also poses several risks to patients. For instance, LLMs can generate recommendations for tests, treatments, or diagnoses that are not grounded in factual information or the input data, posing a severe risk to patients. Similarly, biases in the data used to train LLMs can affect healthcare equity, patient outcomes, and clinical decision-making, potentially delaying proper care or even worsening patient conditions.
Regulatory challenges due to the advent of LLMs
Several regulatory challenges have emerged with the rising adoption of LLMs in healthcare. These include ensuring patient data privacy, protecting intellectual property, addressing medical malpractice liability, maintaining standardization and quality control, ensuring transparency and interpretability, eliminating bias to ensure fairness in decisions, preventing over-reliance on AI models, defining and regulating data ownership, and ensuring continuous validation and monitoring.
Conclusion
To summarize, a proactive approach to regulation is required to harness the transformative potential of LLMs while minimizing risks and preserving the trust of healthcare providers and patients in this technology. Regulators can take several steps to facilitate extensive deployment of LLMs in the field of medicine.
These include creating a separate regulatory category for LLMs that is distinct from other AI-based medical technologies, offering regulatory guidance to healthcare organizations and companies on deploying LLMs in their existing services and products, and establishing a regulatory framework that covers text-, video-, and sound-based iterations of the technology.