Frontiers in Artificial Intelligence (Jan 2025)

Fine-tuning a local LLaMA-3 large language model for automated privacy-preserving physician letter generation in radiation oncology

  • Yihao Hou,
  • Christoph Bert,
  • Ahmed Gomaa,
  • Godehard Lahmer,
  • Daniel Höfler,
  • Thomas Weissmann,
  • Raphaela Voigt,
  • Philipp Schubert,
  • Charlotte Schmitter,
  • Alina Depardon,
  • Sabine Semrau,
  • Andreas Maier,
  • Rainer Fietkau,
  • Yixing Huang,
  • Florian Putz

DOI
https://doi.org/10.3389/frai.2024.1493716
Journal volume & issue
Vol. 7

Abstract


Introduction

Generating physician letters is a time-consuming task in daily clinical practice.

Methods

This study investigates local fine-tuning of large language models (LLMs), specifically LLaMA models, for physician letter generation in a privacy-preserving manner within the field of radiation oncology.

Results

Our findings demonstrate that base LLaMA models, without fine-tuning, are inadequate for effectively generating physician letters. The QLoRA algorithm provides an efficient method for local intra-institutional fine-tuning of LLMs with limited computational resources (i.e., a single 48 GB GPU workstation within the hospital). The fine-tuned LLM successfully learns radiation oncology-specific information and generates physician letters in an institution-specific style. ROUGE scores of the generated summary reports highlight the superiority of the 8B LLaMA-3 model over the 13B LLaMA-2 model. Further multidimensional physician evaluations of 10 cases reveal that, although the fine-tuned LLaMA-3 model has limited capacity to generate content beyond the provided input data, it successfully generates salutations, diagnoses and treatment histories, recommendations for further treatment, and planned schedules. Overall, clinical benefit was rated highly by the clinical experts (average score of 3.4 on a 4-point scale).

Discussion

With careful physician review and correction, automated LLM-based physician letter generation has significant practical value.
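The abstract reports ROUGE scores to compare the generated summary reports of the 8B LLaMA-3 and 13B LLaMA-2 models against physician-written letters. As a minimal sketch of what such a score measures, the snippet below implements ROUGE-1 F1 (clipped unigram overlap between a reference letter and a generated one) in plain Python; the paper's exact ROUGE variant and tokenization are not specified here, so this is an illustrative simplification, not the authors' evaluation code.

```python
from collections import Counter

def rouge1_f1(reference: str, candidate: str) -> float:
    """Simplified ROUGE-1 F1: clipped unigram overlap between reference and candidate.

    Tokenization is naive whitespace splitting; real ROUGE implementations
    apply stemming and more careful preprocessing.
    """
    ref_counts = Counter(reference.lower().split())
    cand_counts = Counter(candidate.lower().split())
    # Counter intersection clips each unigram's count to the smaller of the two.
    overlap = sum((ref_counts & cand_counts).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand_counts.values())
    recall = overlap / sum(ref_counts.values())
    return 2 * precision * recall / (precision + recall)

# Hypothetical example sentences, not taken from the study's data:
score = rouge1_f1(
    "the patient received radiotherapy",
    "the patient received palliative radiotherapy",
)
print(round(score, 3))  # → 0.889
```

In practice, scores like this would be averaged over a held-out set of letters per model, which is how a fine-tuned 8B model can be ranked above a larger 13B one.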

Keywords