BioMedInformatics (Jun 2023)

Multimodal Deep Learning Methods on Image and Textual Data to Predict Radiotherapy Structure Names

  • Priyankar Bose,
  • Pratip Rana,
  • William C. Sleeman,
  • Sriram Srinivasan,
  • Rishabh Kapoor,
  • Jatinder Palta,
  • Preetam Ghosh

DOI
https://doi.org/10.3390/biomedinformatics3030034
Journal volume & issue
Vol. 3, no. 3
pp. 493–513

Abstract

Physicians often label anatomical structure sets in Digital Imaging and Communications in Medicine (DICOM) images with nonstandard, arbitrary names. Hence, standardizing these names for the Organs at Risk (OARs), Planning Target Volumes (PTVs), and ‘Other’ organs is a vital problem. This paper presents novel deep learning methods for structure name standardization that integrate multimodal data compiled from the radiotherapy centers of the US Veterans Health Administration (VHA) and Virginia Commonwealth University (VCU). These de-identified data comprise 16,290 prostate structures. Our method integrates multimodal textual and imaging data with convolutional deep learning architectures, namely a Convolutional Neural Network (CNN), the Visual Geometry Group (VGG) network, and the Residual Network (ResNet), and shows improved results in prostate radiotherapy structure name standardization. Evaluation with the macro-averaged F1 score shows that our model with single-modal textual data usually performs better than previous studies. The models perform well on textual data alone, and adding imaging data further improves performance, indicating that the deep neural networks exploit information present in the other modalities. Additionally, using masked images and masked doses along with text leads to better overall performance with the CNN-based architectures than using all the modalities together. Undersampling the majority class leads to further performance enhancement. The VGG network on the masked image-dose data combined with a CNN on the text data performs best and represents the state of the art in this domain.
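The macro-averaged F1 score mentioned above weights every class equally, so rare structure names count as much as common ones; this is why it pairs naturally with the majority-class undersampling the abstract describes. A minimal sketch of the metric (the class labels and predictions below are toy values for illustration, not the paper's data):

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: compute F1 per class, then take the unweighted mean,
    so minority classes contribute as much as the majority class."""
    classes = sorted(set(y_true) | set(y_pred))
    f1_scores = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        f1_scores.append(f1)
    return sum(f1_scores) / len(f1_scores)

# Toy example with the three label groups named in the abstract
y_true = ["OAR", "OAR", "PTV", "Other", "OAR", "PTV"]
y_pred = ["OAR", "PTV", "PTV", "Other", "OAR", "PTV"]
print(round(macro_f1(y_true, y_pred), 3))  # prints 0.867
```

In practice one would use `sklearn.metrics.f1_score(y_true, y_pred, average="macro")`, which computes the same quantity; the point is that a model that ignores rare classes is penalized heavily under this metric.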

Keywords