The Future of Large Language Models in Healthcare: Balancing Potential Benefits and Risks

In an article in press with the journal npj Digital Medicine, the authors discussed the need for regulatory oversight of large language models (LLMs) and generative artificial intelligence (AI) in healthcare.

Study: The Future of Large Language Models in Healthcare: Balancing Potential Benefits and Risks. Image credit: Ole.CNX /Shutterstock

Background

The rapid advancements in AI technology have enabled the development of several sophisticated LLMs, such as Bard and GPT-4. These LLMs can be implemented in healthcare settings to summarize research papers, obtain insurance pre-authorization, and facilitate clinical documentation.

LLMs can also improve research equity and scientific writing, enhance personalized learning in medical education, streamline the healthcare workflow, work as a chatbot to answer patient queries and address their concerns, and assist physicians in diagnosing conditions based on laboratory results and medical records.

Although LLMs offer transformative potential in healthcare, they must be implemented only after considering the implications for public health and patient outcomes, as these AI-based tools are trained differently from regulated AI-based medical technologies. Additionally, the versatility of LLMs has significantly amplified existing concerns about AI.

Thus, the regulation of LLMs in healthcare and medicine without affecting their transformative potential is crucial to secure patient privacy, ensure safety, pre-empt bias and unfairness, and maintain ethical standards. Regulatory oversight can allow patients to utilize LLMs without compromising their privacy/data and provide assurance to medical professionals about this technology.

In this paper, the authors reviewed the potential benefits and risks of using LLMs in healthcare settings and discussed the necessity of regulating LLMs to ensure public trust in this technology and mitigate potential risks.

The need for regulatory oversight of LLMs

LLMs significantly differ from the existing deep learning techniques in potential impact, capabilities, and scale. For instance, LLMs are trained using massive datasets and billions of parameters, leading to exceptional complexity and necessitating regulatory oversight that considers unintended consequences and interpretability and fairness challenges.

Similarly, an adaptable regulatory oversight that can address different industry-specific concerns is required for LLMs as they possess versatile capabilities spanning multiple domains, such as education, finance, and healthcare.

LLMs can adjust their responses based on evolving contexts and user inputs in real time. This dynamic behavior necessitates regulatory oversight that includes continuous evaluation and monitoring mechanisms to ensure adherence to ethical guidelines and responsible usage.
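The continuous-monitoring idea described above can be made concrete with a small sketch. The following is a hypothetical illustration (the class name, window size, and threshold are assumptions for illustration, not from the paper) of a rolling monitor that raises an alert when the rate of flagged LLM responses in a recent window drifts above an acceptable level:

```python
from collections import deque

class ResponseMonitor:
    """Rolling monitor tracking how often LLM responses are flagged
    (e.g., by a downstream safety checker); alerts when the flag rate
    over the most recent window exceeds a threshold."""

    def __init__(self, window_size=100, alert_threshold=0.05):
        self.window = deque(maxlen=window_size)  # 1 = flagged, 0 = ok
        self.alert_threshold = alert_threshold

    def flag_rate(self) -> float:
        if not self.window:
            return 0.0
        return sum(self.window) / len(self.window)

    def record(self, flagged: bool) -> bool:
        """Record one response; return True if the monitor is alerting."""
        self.window.append(1 if flagged else 0)
        return self.flag_rate() > self.alert_threshold

# Example: 95 clean responses, then a burst of flagged ones
monitor = ResponseMonitor(window_size=100, alert_threshold=0.05)
for _ in range(95):
    monitor.record(flagged=False)
alerts = [monitor.record(flagged=True) for _ in range(10)]
print(alerts)  # alert trips partway through the flagged burst
```

A production deployment would feed `record` from whatever evaluation signal the oversight framework mandates; the point is only that dynamic, real-time behavior calls for continuous measurement rather than one-time approval.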

Moreover, extensive adoption of LLMs can fundamentally transform multiple societal aspects. Thus, regulatory oversight is required to address the economic, social, and ethical implications due to LLM adoption. Regulatory oversight must also establish effective frameworks to secure sensitive data and prevent misuse or unauthorized access of LLMs to address data security/privacy concerns, as these models rely on extensive training data.
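One concrete data-protection measure implied by the privacy concerns above is redacting identifiers before any clinical text reaches an external LLM. The sketch below is purely illustrative (the patterns and placeholder labels are assumptions; a real deployment would use a vetted de-identification tool, not a handful of regexes):

```python
import re

# Simplified, illustrative patterns for common identifiers; real PHI
# de-identification requires far more robust tooling than this.
PATTERNS = {
    "MRN": re.compile(r"\bMRN[:\s]*\d{6,10}\b"),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace likely identifiers with typed placeholders before the
    text is sent to an external LLM service."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Patient (MRN: 12345678) seen on 03/14/2023. Callback 555-123-4567."
print(redact(note))
# → Patient ([MRN]) seen on [DATE]. Callback [PHONE].
```

The design choice worth noting is that redaction happens on the sender's side, before the data ever leaves the healthcare organization, which is exactly the boundary a regulatory framework for training-data and query privacy would need to police.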

Pre-LLM regulatory oversight of AI

The United States (US) Food and Drug Administration (FDA) has led discussions on regulatory oversight and has established regulations for several emerging technologies, including AI-based medical tools and three-dimensional (3D)-printed medications.

The FDA started regulating software as a medical device (SaMD) with the rising adoption of digital health technology. SaMD refers to software solutions performing medical functions and is utilized in monitoring, treating, diagnosing, or preventing different conditions/diseases.

The FDA established a regulatory framework that specifically addresses machine learning (ML) and AI technologies based on the SaMD approach. The proposed framework emphasized the significance of continuously improving and monitoring these technologies, real-world performance monitoring, ML/AI algorithm updates, and transparency.

Although the FDA has been able to regulate AI, two technological issues have so far eluded regulation: adaptive algorithms, which adjust their behavior or parameters based on input data and task performance, and the autodidactic (self-learning) function of deep learning.

Benefits and risks of using LLMs in healthcare

LLMs can be employed in several applications in the healthcare sector, which is the key advantage of this technology. Medical professionals can use LLMs to create discharge summaries, generate clinical notes, suggest treatment options, and design treatment plans. They can also use LLMs for diagnostic assistance and radiology interpretation.

Similarly, patients can utilize LLMs to analyze laboratory results and disease descriptions, interpret physician notes, obtain personalized health recommendations and predictions, assess symptoms, and analyze wearable data. Additionally, they can use LLMs for rehabilitation guidance and medication adherence.

However, the application of LLMs can also pose several risks to patients. For instance, LLMs can generate recommendations for tests, treatments, or diagnoses that are not grounded in factual information or the input data, posing a severe risk to patients. Similarly, biases in the data used to train LLMs can affect healthcare equity, patient outcomes, and clinical decision-making, which can delay proper care or even worsen patient conditions.
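The ungrounded-output risk can be partially screened for automatically. The sketch below is a deliberately simplistic illustration (function names and the numeric-matching heuristic are assumptions, not a method from the paper): it flags numeric values an LLM summary asserts that never appear in the source record, a cheap signal for possible hallucination.

```python
import re

def extract_numbers(text: str) -> set:
    """Pull numeric values (integers and decimals) out of free text."""
    return {m.group() for m in re.finditer(r"\d+(?:\.\d+)?", text)}

def ungrounded_values(source: str, llm_output: str) -> set:
    """Return numeric values asserted in the LLM output that never
    appear in the source record -- candidates for hallucination."""
    return extract_numbers(llm_output) - extract_numbers(source)

record = "Hemoglobin 13.2 g/dL, WBC 6.4, platelets 210."
summary = "Labs notable for hemoglobin 13.2 and a creatinine of 2.8."
print(ungrounded_values(record, summary))  # the creatinine value is not in the record
```

A real grounding check would need clinical NLP rather than string matching, but even this toy version shows why regulators focus on verifiability: an ungrounded value in a clinical summary is exactly the failure mode that can delay or misdirect care.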

Regulatory challenges due to the advent of LLMs

Several regulatory challenges have emerged with the rising adoption of LLMs in healthcare. These include ensuring patient data privacy, protecting intellectual property, addressing medical malpractice liability, maintaining standardization and quality control, ensuring transparency and interpretability, eliminating bias to ensure fair decisions, preventing over-reliance on AI models, defining and regulating data ownership, and ensuring continuous validation and monitoring.

Conclusion

To summarize, a proactive approach to regulation is required to harness the transformative potential of LLMs while minimizing risks and preserving the trust of healthcare providers and patients in this technology. Regulators can take several steps to facilitate extensive deployment of LLMs in the field of medicine.

These include the creation of a separate regulatory category for LLMs that is distinct from other AI-based medical technologies, offering regulatory guidance to healthcare organizations and companies about LLM deployment in their existing services and products, and establishing a regulatory framework that covers text-, video- and sound-based iterations.


Written by

Samudrapom Dam

Samudrapom Dam is a freelance scientific and business writer based in Kolkata, India. He has been writing articles related to business and scientific topics for more than one and a half years. He has extensive experience in writing about advanced technologies, information technology, machinery, metals and metal products, clean technologies, finance and banking, automotive, household products, and the aerospace industry. He is passionate about the latest developments in advanced technologies, the ways these developments can be implemented in a real-world situation, and how these developments can positively impact common people.

Citations

Please use one of the following formats to cite this article in your essay, paper or report:

  • APA

    Dam, Samudrapom. (2023, July 11). The Future of Large Language Models in Healthcare: Balancing Potential Benefits and Risks. AZoAi. Retrieved on November 21, 2024 from https://www.azoai.com/news/20230711/The-Future-of-Large-Language-Models-in-Healthcare-Balancing-Potential-Benefits-and-Risks.aspx.

