Communications Engineering (Jul 2024)

Achieving multi-modal brain disease diagnosis performance using only single-modal images through generative AI

  • Kaicong Sun,
  • Yuanwang Zhang,
  • Jiameng Liu,
  • Ling Yu,
  • Yan Zhou,
  • Fang Xie,
  • Qihao Guo,
  • Han Zhang,
  • Qian Wang,
  • Dinggang Shen

DOI
https://doi.org/10.1038/s44172-024-00245-w
Journal volume & issue
Vol. 3, no. 1
pp. 1–13

Abstract

Brain disease diagnosis using multiple imaging modalities has shown superior performance compared to using a single modality, yet multi-modal data are not easily available in routine clinical practice due to cost or radiation risk. Here we propose a synthesis-empowered, uncertainty-aware classification framework for brain disease diagnosis. To synthesize disease-relevant features effectively, the framework operates in two stages: multi-modal feature representation learning, followed by representation transfer based on hierarchical similarity matching. In addition, the synthesized and acquired modality features are integrated via evidential learning, which provides both the diagnostic decision and its uncertainty. Our framework is extensively evaluated on five datasets containing 3758 subjects for three brain diseases, namely Alzheimer’s disease (AD), subcortical vascular mild cognitive impairment (MCI), and O6-methylguanine-DNA methyltransferase promoter methylation status for glioblastoma, achieving areas under the ROC curve of 0.950 and 0.806 on the ADNI dataset for discriminating AD patients from normal controls and progressive MCI from static MCI, respectively. Our framework not only achieves quasi-multi-modal performance despite using single-modal input, but also provides reliable diagnosis uncertainty.
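The evidential fusion described in the abstract can be illustrated with a minimal sketch, assuming the common subjective-logic formulation of evidential deep learning: each modality head emits non-negative Dirichlet evidence, and two opinions are combined with Dempster's rule. This is an illustrative sketch, not the authors' implementation; the two-class setup, logit values, and modality labels are hypothetical.

```python
import numpy as np

def evidence_to_opinion(logits):
    """Map per-class logits to a subjective-logic opinion.

    Evidence e_k = softplus(logit_k) >= 0, Dirichlet parameters
    alpha_k = e_k + 1, belief mass b_k = e_k / S, and uncertainty
    u = K / S, where S = sum_k alpha_k (so sum_k b_k + u = 1).
    """
    e = np.log1p(np.exp(logits))        # softplus keeps evidence non-negative
    alpha = e + 1.0
    S = alpha.sum()
    belief = e / S
    u = len(logits) / S
    return belief, u

def fuse_opinions(b1, u1, b2, u2):
    """Combine two opinions over the same classes with Dempster's rule."""
    conflict = b1.sum() * b2.sum() - (b1 * b2).sum()   # mass on disagreeing class pairs
    norm = 1.0 - conflict
    b = (b1 * b2 + b1 * u2 + b2 * u1) / norm
    u = (u1 * u2) / norm
    return b, u

# Hypothetical two-class example (e.g., AD vs. normal control): one head
# sees the acquired modality, the other the synthesized modality.
b_acq, u_acq = evidence_to_opinion(np.array([2.0, -1.0]))
b_syn, u_syn = evidence_to_opinion(np.array([1.5, -0.5]))
b, u = fuse_opinions(b_acq, u_acq, b_syn, u_syn)
print("fused belief:", b, "fused uncertainty:", u)
```

In this formulation the fused uncertainty u shrinks when both heads supply strong, agreeing evidence and grows under weak or conflicting evidence, which matches the property the abstract relies on to flag unreliable diagnoses.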