Heliyon (Oct 2023)

An artificial intelligence method for predicting postoperative urinary incontinence based on multiple anatomic parameters of MRI

  • Jiakun Li,
  • Xuemeng Fan,
  • Tong Tang,
  • Erman Wu,
  • Dongyue Wang,
  • Hui Zong,
  • Xianghong Zhou,
  • Yifan Li,
  • Chichen Zhang,
  • Yihang Zhang,
  • Rongrong Wu,
  • Cong Wu,
  • Lu Yang,
  • Bairong Shen

Journal volume & issue
Vol. 9, no. 10
p. e20337

Abstract


Background: Deep learning methods are increasingly applied in the medical field; however, their lack of interpretability remains a challenge. Captum is a tool for interpreting neural network models by computing feature-importance weights. Although Captum provides model interpretability, it is rarely used to study medical problems, and data on MRI anatomical measurements for patients with prostate cancer after Robotic-Assisted Radical Prostatectomy (RARP) are scarce. Consequently, predictive models for continence that draw on multiple types of anatomical MRI measurements are limited.

Methods: We explored the efficacy of deep learning models for predicting continence from MRI measurements, analyzed and compared various statistical models, and provide reference examples for the clinical application of interpretable deep-learning models. Patients who underwent RARP at our institution between July 2019 and December 2020 were included in this study. A series of clinical MRI anatomical measurements from these patients was used to discover continence features, and their impact on continence was evaluated using a series of statistical methods and computational models.

Results: Age and six other anatomical measurements were identified as the top seven features of continence by the proposed model UINet7, with an accuracy of 0.97; the first four of these features were also found by primary statistical analysis.

Conclusions: This study addresses gaps in the in-depth investigation of continence features after RARP caused by limitations in clinical data and applicable models. We provide a pioneering example of applying deep-learning models to clinical problems. The interpretability analysis of deep learning models has potential for clinical applications.
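The feature-importance weights mentioned above are, in Captum, typically computed with gradient-based attribution methods such as Integrated Gradients (`captum.attr.IntegratedGradients`). As a minimal sketch of the underlying idea, the snippet below implements Integrated Gradients by hand for a toy logistic model; the weights and inputs are purely illustrative and are not taken from the paper, and the authors' UINet7 pipeline is not reproduced here.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def model(x, w, b):
    """Toy logistic 'continence' classifier; weights are hypothetical."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def integrated_gradients(x, baseline, w, b, steps=100):
    """Midpoint-rule approximation of Integrated Gradients:
    IG_i = (x_i - baseline_i) * mean over alpha of dF/dx_i at
    baseline + alpha * (x - baseline). Captum's
    captum.attr.IntegratedGradients computes the same quantity
    via autograd; here the sigmoid gradient is written analytically.
    """
    attrs = []
    for i in range(len(x)):
        grad_sum = 0.0
        for k in range(steps):
            alpha = (k + 0.5) / steps
            point = [bl + alpha * (xi - bl) for xi, bl in zip(x, baseline)]
            z = sum(wi * pi for wi, pi in zip(w, point)) + b
            s = sigmoid(z)
            grad_sum += s * (1.0 - s) * w[i]  # d sigmoid(w.x + b) / d x_i
        attrs.append((x[i] - baseline[i]) * grad_sum / steps)
    return attrs

# Hypothetical weights and MRI-derived features, for illustration only.
w = [1.2, -0.7, 0.3]
b = 0.1
x = [0.8, 0.5, 1.5]
baseline = [0.0, 0.0, 0.0]

attrs = integrated_gradients(x, baseline, w, b)
# Completeness axiom: attributions sum to F(x) - F(baseline).
print(attrs)
print(sum(attrs), model(x, w, b) - model(baseline, w, b))
```

A useful sanity check on any such attribution is the completeness axiom printed at the end: the per-feature attributions sum to the difference between the model's output at the input and at the baseline, which is what makes them interpretable as importance weights.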
