Nature Communications (Oct 2024)

LLM-driven multimodal target volume contouring in radiation oncology

  • Yujin Oh,
  • Sangjoon Park,
  • Hwa Kyung Byun,
  • Yeona Cho,
  • Ik Jae Lee,
  • Jin Sung Kim,
  • Jong Chul Ye

DOI
https://doi.org/10.1038/s41467-024-53387-y
Journal volume & issue
Vol. 15, no. 1
pp. 1 – 14

Abstract


Target volume contouring for radiation therapy is considered significantly more challenging than normal organ segmentation tasks because it requires the use of both image- and text-based clinical information. Inspired by recent advances in large language models (LLMs), which can facilitate the integration of textual information with images, here we present an LLM-driven multimodal artificial intelligence (AI), namely LLMSeg, that utilizes clinical information and is applicable to the challenging task of 3-dimensional, context-aware target volume delineation for radiation oncology. We validate the proposed LLMSeg in the context of breast cancer radiotherapy under external validation and data-insufficient settings, conditions that closely reflect real-world applications. We demonstrate that the multimodal LLMSeg exhibits markedly improved performance compared with conventional unimodal AI models, particularly in terms of robust generalization and data efficiency.
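
The abstract describes fusing clinical text information with 3D images for target volume delineation. The sketch below is a minimal, hypothetical illustration of that general idea in PyTorch, not the authors' LLMSeg architecture: a toy 3D encoder-decoder whose bottleneck features are modulated by a text embedding (e.g., one produced by an LLM). The class name `TextConditioned3DSegNet`, the FiLM-style fusion, and all dimensions are assumptions for illustration only.

```python
# Hypothetical sketch of text-conditioned 3D segmentation (not the LLMSeg implementation).
import torch
import torch.nn as nn


class TextConditioned3DSegNet(nn.Module):
    """Toy 3D encoder-decoder whose bottleneck is modulated by a text embedding."""

    def __init__(self, text_dim: int = 768, base_ch: int = 16):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv3d(1, base_ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(base_ch, base_ch * 2, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Project the text embedding to per-channel scale and shift (FiLM-style fusion).
        self.film = nn.Linear(text_dim, base_ch * 2 * 2)
        self.dec = nn.Sequential(
            nn.ConvTranspose3d(base_ch * 2, base_ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(base_ch, 1, 4, stride=2, padding=1),
        )

    def forward(self, volume: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        feat = self.enc(volume)                                   # (B, C, D, H, W)
        scale, shift = self.film(text_emb).chunk(2, dim=-1)       # (B, C) each
        feat = feat * (1 + scale[..., None, None, None]) + shift[..., None, None, None]
        return self.dec(feat)                                     # per-voxel logits


if __name__ == "__main__":
    model = TextConditioned3DSegNet()
    ct = torch.randn(1, 1, 32, 64, 64)        # dummy 3D image volume
    clinical_text_emb = torch.randn(1, 768)   # stand-in for an LLM-derived text embedding
    logits = model(ct, clinical_text_emb)
    print(logits.shape)                       # torch.Size([1, 1, 32, 64, 64])
```

In this toy design the text conditioning enters only at the bottleneck; richer schemes (e.g., cross-attention at multiple scales) are equally plausible and the paper should be consulted for the actual fusion mechanism.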