Informatics in Medicine Unlocked (Jan 2022)

Robust multi-modal prostate cancer classification via feature autoencoder and dual attention

  • Bochong Li,
  • Ryo Oka, M.D.,
  • Ping Xuan,
  • Yuichiro Yoshimura, PhD,
  • Toshiya Nakaguchi

Journal volume & issue
Vol. 30
p. 100923

Abstract


Prostate cancer is the second leading cause of cancer death in men. Current methods for classifying early cancer grades on MRI images mainly rely on a single image modality and offer low robustness. This paper therefore explores a method for classifying cancer grades on multi-modal MRI images while maintaining robustness. We propose a novel and effective multi-modal convolutional neural network for discriminating the clinical severity grade of prostate cancer, the Robust Multi-modal Feature Autoencoder Attention net (RMANet), which greatly improves classification accuracy and robustness. T2-weighted and diffusion-weighted imaging are used in this work. The model consists of two branches: one learns the overall features of the two MRI modalities through a ten-layer CNN whose weights are shared across both inputs, and the other uses an autoencoder with a classical U-Net backbone to learn modality-specific features and to improve the robustness of the classification model. A novel dual attention mechanism is added to the overall-feature branch, directing the model's learning focus toward the cancerous regions. Experiments were conducted on the ProstateX dataset augmented with hospital data. Compared with baseline methods, multi-modal input methods, and state-of-the-art (SOTA) methods, the proposed model achieves a higher AUC on the test set (0.84) than other classical models and most recent methods, and a higher sensitivity (0.84) than the recent methods.
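To make the two-branch idea concrete, below is a minimal PyTorch sketch of a shared-weight CNN applied to two MRI modalities with a channel-plus-spatial "dual attention" block. All layer sizes, the attention design, and the class names (`DualAttention`, `RMANetSketch`) are illustrative assumptions for this summary, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class DualAttention(nn.Module):
    """Channel + spatial attention (hypothetical sketch of a dual attention block)."""
    def __init__(self, channels):
        super().__init__()
        # channel attention: squeeze via global pooling, excite via a small MLP
        self.channel_fc = nn.Sequential(
            nn.Linear(channels, channels // 4), nn.ReLU(),
            nn.Linear(channels // 4, channels), nn.Sigmoid())
        # spatial attention: collapse channels into one attention map
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3), nn.Sigmoid())

    def forward(self, x):
        # channel weights from global average pooling, broadcast back over H x W
        w = self.channel_fc(x.mean(dim=(2, 3)))[:, :, None, None]
        x = x * w
        # modulate by a per-pixel spatial attention map
        return x * self.spatial_conv(x)

class RMANetSketch(nn.Module):
    """Shared-weight CNN over T2-weighted and DWI inputs, fused for classification."""
    def __init__(self, n_classes=2):
        super().__init__()
        # one small CNN whose weights are reused for both modalities
        self.shared = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.attn = DualAttention(32)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(64, n_classes))

    def forward(self, t2w, dwi):
        f1 = self.attn(self.shared(t2w))  # same weights applied to each modality
        f2 = self.attn(self.shared(dwi))
        return self.head(torch.cat([f1, f2], dim=1))

model = RMANetSketch()
logits = model(torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64))
```

The paper's second branch (the U-Net autoencoder learning modality-specific features) is omitted here for brevity; in the described design it runs alongside this classification path to improve robustness.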

Keywords