npj Precision Oncology (Oct 2024)

Adaptive segmentation-to-survival learning for survival prediction from multi-modality medical images

  • Mingyuan Meng,
  • Bingxin Gu,
  • Michael Fulham,
  • Shaoli Song,
  • Dagan Feng,
  • Lei Bi,
  • Jinman Kim

DOI
https://doi.org/10.1038/s41698-024-00690-y
Journal volume & issue
Vol. 8, no. 1
pp. 1–11

Abstract

Early survival prediction is vital for the clinical management of cancer patients, as tumors can be better controlled with personalized treatment planning. Traditional survival prediction methods are based on radiomics feature engineering and/or clinical indicators (e.g., cancer staging). Recently, with advances in deep learning, survival prediction models have achieved state-of-the-art performance in end-to-end survival prediction by exploiting deep features derived from medical images. However, existing models rely heavily on the prognostic information within primary tumors and cannot effectively leverage out-of-tumor prognostic information that characterizes local tumor metastasis and adjacent tissue invasion. In addition, existing models are sub-optimal in leveraging multi-modality medical images: they integrate multi-modality information through empirically designed fusion strategies that are pre-defined from domain-specific human prior knowledge and thus inherently limited in adaptability. Here, we present an Adaptive Multi-modality Segmentation-to-Survival model (AdaMSS) for survival prediction from multi-modality medical images. AdaMSS can self-adapt its fusion strategy based on training data and can also adapt its focus regions to capture prognostic information outside the primary tumors. Extensive experiments on two large cancer datasets (1380 patients from nine medical centers) show that AdaMSS surpasses state-of-the-art survival prediction performance (C-index: 0.804 and 0.757), demonstrating its potential to facilitate personalized treatment planning.
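The C-index (concordance index) reported above is the standard metric for survival prediction: it is the fraction of comparable patient pairs in which the model assigns a higher risk score to the patient who experienced the event earlier. The paper does not specify its exact evaluation code, so the following is a minimal sketch of Harrell's C-index for right-censored data; the function name and arguments are illustrative, not taken from the authors' implementation.

```python
def concordance_index(times, events, risk_scores):
    """Harrell's C-index for right-censored survival data (illustrative sketch).

    times:       observed times (time of event, or of censoring)
    events:      1 if the event was observed, 0 if the patient was censored
    risk_scores: model outputs; higher score = higher predicted risk
    """
    concordant = 0.0
    comparable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair (i, j) is comparable only if patient i had an observed
            # event strictly before patient j's observed time.
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0       # correctly ranked pair
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5       # ties count as half-concordant
    return concordant / comparable

# Perfectly ranked risks give a C-index of 1.0; random ranking averages 0.5.
c = concordance_index([2, 4, 6, 8], [1, 1, 0, 1], [4, 3, 2, 1])  # → 1.0
```

A C-index of 0.804, as reported for AdaMSS, thus means roughly 80% of comparable patient pairs are ranked correctly by predicted risk.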