Jisuanji Kexue (Computer Science), Oct. 2021

Multimodal Representation Learning for Alzheimer's Disease Diagnosis

  • FAN Lian-xi, LIU Yan-bei, WANG Wen, GENG Lei, WU Jun, ZHANG Fang, XIAO Zhi-tao

DOI: https://doi.org/10.11896/jsjkx.200900178
Journal volume & issue: Vol. 48, No. 10, pp. 107–113

Abstract


Alzheimer's disease (AD) is a complex neurodegenerative disease involving a variety of pathogenic factors. Its cause remains unclear, its course is irreversible, and no cure exists, so early diagnosis and treatment have long been a focus of attention. Neuroimaging data play an important auxiliary role in diagnosing the disease, and combining multimodal data can further improve diagnostic performance. Multimodal representation learning for this disease has gradually become an emerging research field that has attracted wide attention from researchers. This paper proposes an autoencoder-based multimodal representation learning method for Alzheimer's disease diagnosis. First, the multimodal data are initially fused to obtain a primary common representation. Then, this representation is fed into an autoencoder network to learn the final common representation in a latent space. Finally, the latent common representation is classified to obtain the diagnostic result. The proposed method achieves the best diagnostic results among the compared algorithms, with an accuracy of 88.9% in classifying AD patients versus healthy subjects on the ADNI dataset. Extensive experimental results verify its effectiveness.
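The three-step pipeline described in the abstract (initial fusion, autoencoder-based latent representation learning, then classification) can be sketched in miniature. This is a hypothetical toy illustration, not the paper's architecture: the two modalities, their dimensions, the linear autoencoder, the synthetic labels, and the logistic-regression classifier are all assumptions for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for two neuroimaging modalities (dimensions are illustrative).
n, d1, d2, k = 100, 20, 15, 8
mri = rng.normal(size=(n, d1))
pet = rng.normal(size=(n, d2))

# Step 1: primary fusion -- here, simple feature concatenation.
x = np.concatenate([mri, pet], axis=1)          # shape (n, d1 + d2)
d = d1 + d2

# Step 2: a linear autoencoder learns a k-dimensional latent common
# representation, trained by plain gradient descent on reconstruction MSE.
w_enc = rng.normal(scale=0.1, size=(d, k))
w_dec = rng.normal(scale=0.1, size=(k, d))
mse_init = np.mean((x @ w_enc @ w_dec - x) ** 2)
lr = 0.01
for _ in range(500):
    z = x @ w_enc                               # encode to latent space
    err = z @ w_dec - x                         # reconstruction error
    grad_dec = z.T @ err / n
    grad_enc = x.T @ (err @ w_dec.T) / n
    w_dec -= lr * grad_dec
    w_enc -= lr * grad_enc
z = x @ w_enc                                   # final common representation
mse_final = np.mean((z @ w_dec - x) ** 2)

# Step 3: classify the latent representation. Labels here are synthetic
# (a hidden linear rule), standing in for AD vs. healthy diagnoses.
labels = (x[:, 0] + x[:, 1] > 0).astype(float)
w_clf, b = np.zeros(k), 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(z @ w_clf + b)))  # logistic regression
    g = p - labels
    w_clf -= 0.1 * (z.T @ g) / n
    b -= 0.1 * g.mean()
preds = ((z @ w_clf + b) > 0).astype(int)       # 0/1 diagnostic decision
```

In the paper's setting, the concatenation step would be replaced by its initial fusion scheme, the linear maps by a deep autoencoder, and the synthetic labels by ADNI diagnoses; the control flow above only mirrors the abstract's three stages.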
