iScience (Aug 2024)

Latent relation shared learning for endometrial cancer diagnosis with incomplete multi-modality medical images

  • Jiaqi Li,
  • Lejian Liao,
  • Meihuizi Jia,
  • Zhendong Chen,
  • Xin Liu

Journal volume & issue
Vol. 27, no. 8
p. 110509

Abstract

Summary: Magnetic resonance imaging (MRI), ultrasound (US), and contrast-enhanced ultrasound (CEUS) provide different image data about the uterus and have been used in the preoperative assessment of endometrial cancer. In practice, not all patients have complete multi-modality medical images, due to high cost or long examination periods. Most existing methods must perform data cleansing or discard samples with missing modalities, which degrades model performance. In this work, we propose an incomplete multi-modality image data fusion method based on latent relation sharing to overcome this limitation. The shared space contains a common latent feature representation and modality-specific latent feature representations derived from both complete and incomplete multi-modality data, jointly exploiting the consistent and complementary information among the multiple image types. Experimental results show that our method outperforms current representative approaches in classification accuracy, sensitivity, specificity, and area under the curve (AUC). Furthermore, our method performs well under varying modality missing rates.
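The shared-space idea described in the abstract can be sketched as follows (a minimal illustration, not the authors' implementation): each available modality is projected into a common latent space, and missing modalities are simply excluded from the fusion, so incomplete samples need not be discarded. The projection matrices, feature dimensions, and function names here are hypothetical.

```python
import numpy as np

def fuse_latent(modalities, projections):
    """Fuse the available modality features into one shared latent vector.

    modalities: dict mapping modality name -> feature vector, or None if missing.
    projections: dict mapping modality name -> (latent_dim x feat_dim) matrix.
    Missing modalities are skipped, so incomplete samples remain usable.
    """
    latents = [projections[m] @ x for m, x in modalities.items() if x is not None]
    if not latents:
        raise ValueError("sample has no modalities at all")
    # Averaging the per-modality latents gives a simple shared (consistent)
    # representation; each latent on its own plays the modality-specific role.
    return np.mean(latents, axis=0)

# Hypothetical dimensions: 8-dim raw features, 4-dim shared latent space.
rng = np.random.default_rng(0)
proj = {m: rng.standard_normal((4, 8)) for m in ("MRI", "US", "CEUS")}

complete = {m: rng.standard_normal(8) for m in ("MRI", "US", "CEUS")}
incomplete = {"MRI": complete["MRI"], "US": None, "CEUS": None}  # US and CEUS missing

z_full = fuse_latent(complete, proj)      # uses all three modalities
z_part = fuse_latent(incomplete, proj)    # degrades gracefully to MRI only
```

With only MRI present, the fused vector reduces to the MRI projection alone; with all modalities present, it pools information across them, which is the behavior a shared latent space is meant to provide under missing-modality conditions.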

Keywords