Researchers demonstrate that locally fine-tuned open-weight AI models can rival or outperform closed systems in analyzing medical reports while safeguarding sensitive patient data.
Artificial intelligence (AI), and above all large language models (LLMs) like those behind ChatGPT, is increasingly in demand in hospitals. At the same time, patient data must always be protected.
Researchers at the University Hospital Bonn (UKB) and the University of Bonn have now shown that local LLMs can help structure radiology reports in a privacy-preserving manner, with all data remaining at the hospital. They compared various LLMs on publicly available reports, which carry no data protection requirements, and on privacy-protected in-house reports.
The study tested 17 open-weight and four closed-weight models, among them the open-weight Mistral-Large and the closed-weight GPT-4o and GPT-4o-mini. Commercial models that require data transfer to external servers showed no advantage over local, data protection-compliant models. The results have now been published in the journal Radiology.
Everything has to be in its place, not only on the operating table or in the office, but also with data. Structured reports, for example, are helpful for physicians as well as for further use in research databases. Later, such structured data can also be used to train other AI models for image-based diagnosis. In practice, however, reports are usually written in free text, which complicates further use. This is exactly where AI, and more precisely LLMs, comes in.
Open and Closed Models
LLMs can be divided into two categories. Closed-weight models are the commercial, well-known AI variants that also power chatbots such as ChatGPT. Open-weight models, such as Meta's Llama models, can be run on internal clinic servers and can even be trained further. With these models, all data remain stored locally, which makes open LLMs advantageous in terms of data security. "The problem with commercial, closed models is that in order to use them, you have to transfer the data to external servers, which are often located outside the EU. This not only poses a legal risk but may also compromise patient privacy due to de-identification inaccuracies," emphasizes Prof. Julian Luetkens, acting Director of the Clinic for Diagnostic and Interventional Radiology at the UKB.
"But are all LLMs equally suitable for understanding and structuring the medical content of radiological reports? To find out which LLM is suitable for a clinic, we tested various open and closed models," explains Dr. Sebastian Nowak, first and corresponding author of the study and postdoc at the University of Bonn's Clinic for Diagnostic and Interventional Radiology at the UKB. "We were also interested in whether open LLMs can be developed effectively on site in the clinic with just a few already structured reports."
The research team therefore analyzed 17 open and four closed LLMs, including leading models such as Mistral-Large, Llama-3.1-70B, and GPT-4o. All of them processed thousands of free-text radiology reports: publicly available reports in English, which are not subject to data protection, as well as privacy-protected reports from the UKB in German.
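To make the task concrete, here is a minimal sketch of how a locally hosted open-weight model can be prompted to turn a free-text report into structured findings. This is not the authors' pipeline (their code is in the GitHub repository linked below); the model name, prompt wording, finding labels, and sample report are illustrative assumptions.

```python
# Minimal sketch: extracting structured findings from a free-text report
# with a locally hosted open-weight LLM. Model, prompt, labels, and the
# sample report are assumptions, not the study's actual setup.
import json
from transformers import pipeline

extractor = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.1-8B-Instruct",  # any local open-weight model
    device_map="auto",
)

FINDINGS = ["pleural effusion", "pneumonic infiltrates", "pulmonary congestion"]

report = ("Newly appeared bilateral pleural effusions. No infiltrates. "
          "Signs of mild pulmonary venous congestion.")

prompt = (
    "Read the chest radiograph report and reply with a JSON object that maps "
    f"each of these findings to true or false: {', '.join(FINDINGS)}.\n"
    f"Report: {report}\nJSON:"
)

result = extractor(prompt, max_new_tokens=80, do_sample=False,
                   return_full_text=False)
structured = json.loads(result[0]["generated_text"].strip())
print(structured)  # e.g. {"pleural effusion": True, "pneumonic infiltrates": False, ...}
```

Because everything runs on the clinic's own hardware, the report text never leaves the hospital network.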
Training Makes the Difference
The results show that, on the publicly available reports, the closed models had no advantage over the best open LLMs. Applied out of the box, without any fine-tuning, larger open models such as Mistral-Large performed comparably to GPT-4o, with macro-averaged F1 scores exceeding 92%. Using already structured reports as training data for open LLMs led to an effective improvement in the quality of information extraction, even with just a few manually prepared reports. Training also narrowed the accuracy gap between large and small LLMs.
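For readers unfamiliar with the metric: macro-averaged F1 computes an F1 score separately for each finding and then averages them, so rare findings count as much as common ones. A small illustrative computation with made-up labels (not study data):

```python
# Macro-averaged F1 for multilabel finding extraction: F1 per finding,
# then the unweighted mean. Labels and predictions below are invented.
from sklearn.metrics import f1_score

# one row per report, one column per finding (1 = present, 0 = absent)
y_true = [[1, 0, 1], [0, 0, 1], [1, 1, 0]]
y_pred = [[1, 0, 1], [0, 1, 1], [1, 1, 0]]

macro_f1 = f1_score(y_true, y_pred, average="macro")
print(f"macro-averaged F1: {macro_f1:.2%}")
```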
"In a training session with over 3,500 structured reports, there was no longer any relevant difference between the largest open LLM and a language model that was 1,200 times smaller," says Nowak. For example, fine-tuning with low-rank adaptation improved performance significantly, particularly in models such as Mistral-Large and OpenBioLLM-70b. "Overall, it can be concluded that open LLMs can keep up with closed ones and have the advantage of being able to be developed locally in a data protection-safe manner."
This discovery has the potential to unlock clinical databases for comprehensive epidemiological studies and research into diagnostic AI. Additionally, structured data from radiology reports can enhance multimodal AI models, combining imaging and textual data. "Ultimately, this will benefit the patient, all while strictly observing data protection," explains Nowak. The study authors emphasized that these findings demonstrate the feasibility of creating data-safe AI tools directly within clinical infrastructures. "We want to enable other clinics to use our research directly and have therefore published the code and methods for LLM use and training under an open license."
https://github.com/ukb-rad-cfqiai/LLM_based_report_info_extraction/
This study was supported by the Open Access Publication Fund of the Rheinische Friedrich-Wilhelms-Universität Bonn and by the state of North Rhine-Westphalia (SIM-1-1, Innovative Secure Medical Campus).
Journal reference:
- Nowak, S., Wulff, B., Layer, Y. C., Theis, M., Isaak, A., Salam, B., Block, W., Kuetting, D., Pieper, C. C., Luetkens, J. A., Attenberger, U., & Sprinkart, A. M. Privacy-ensuring Open-weights Large Language Models Are Competitive with Closed-weights GPT-4o in Extracting Chest Radiography Findings from Free-Text Reports. Radiology, 2025, 314(1). https://doi.org/10.1148/radiol.240895