JMIR Medical Informatics (Aug 2024)

Viability of Open Large Language Models for Clinical Documentation in German Health Care: Real-World Model Evaluation Study

  • Felix Heilmeyer,
  • Daniel Böhringer,
  • Thomas Reinhard,
  • Sebastian Arens,
  • Lisa Lyssenko,
  • Christian Haverkamp

DOI
https://doi.org/10.2196/59617
Journal volume & issue
Vol. 12
pp. e59617 – e59617

Abstract
Background: The use of large language models (LLMs) as writing assistance for medical professionals is a promising approach to reducing the time required for documentation, but practical, ethical, and legal challenges in many jurisdictions may complicate the use of the most powerful commercial LLM solutions.

Objective: In this study, we assessed the feasibility of using nonproprietary LLMs of the GPT variety as writing assistance for medical professionals in an on-premise setting with restricted compute resources, generating German medical text.

Methods: We trained 4 7-billion-parameter models with 3 different architectures for our task and evaluated their performance using a powerful commercial LLM, Anthropic's Claude-v2, as a rater. Based on this, we selected the best-performing model and evaluated its practical usability with 2 independent human raters on real-world data.

Results: In the automated evaluation with Claude-v2, BLOOM-CLP-German, a model trained from scratch on German text, achieved the best results. In the manual evaluation by human experts, 95 (93.1%) of the 102 reports generated by that model were rated as usable as is or with only minor changes by both human raters.

Conclusions: The results show that even with restricted compute resources, it is possible to generate medical texts that are suitable for documentation in routine clinical practice. However, the target language should be considered in model selection when processing non-English text.