IEEE Access (Jan 2024)

A Novel Image Casting and Fusion for Identifying Individuals at Risk of Alzheimer’s Disease Using MRI and PET Imaging

  • Adediji A. Fakoya,
  • Simon Parkinson

DOI
https://doi.org/10.1109/ACCESS.2024.3412850
Journal volume & issue
Vol. 12
pp. 134101–134114

Abstract

Alzheimer’s disease (AD) is a neurodegenerative condition characterised by irreversible cognitive decline. The cost of caring for those affected now runs to billions of dollars, and projections indicate that the prevalence of the disease will rise by more than 200% over the next 15 years. Many researchers have worked on computer-aided diagnosis (CAD), in which computational methods are used to detect and diagnose medical conditions. However, identifying people who already have the disease does not mitigate its consequences; it would be far more valuable to identify individuals at risk of developing AD at an early stage, which could improve the efficacy of pharmaceutical interventions and of preventive measures. The challenge is to determine the best method for predicting conversion from mild cognitive impairment (MCI) to AD while addressing two issues: (a) how to represent three-dimensional (3D) magnetic resonance imaging (MRI) and positron emission tomography (PET) scans, since processing full 3D volumes is slow, computationally expensive, and does not guarantee high accuracy; and (b) how best to combine the two modalities while preserving the information each contributes. To address these issues, this research examined the MRI and PET modalities together with convolutional neural networks (CNNs). A method was developed for selecting slices from the 3D scans and casting them into 2D images, and for fusing the selected slices from the two modalities into a single image used as CNN input. The fusion was designed to tackle the problem of insufficient data and to retain the information from both modalities while simultaneously improving classification accuracy. The resulting image classifier was far faster, completing the test in just 0.285% of the time required by a 3D-CNN for the identical task, and achieved an accuracy of 94.0%.
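
The abstract gives no implementation details, but the pipeline it describes (selecting informative slices from co-registered 3D MRI and PET volumes, casting them to 2D images, and fusing the two modalities into a single CNN input) can be sketched in NumPy. In the minimal sketch below, the variance-based slice-selection criterion, the min-max casting to 8-bit, and the weighted-average fusion operator are all stand-in assumptions; the paper's actual selection and fusion methods may differ.

    import numpy as np

    def select_slice_indices(volume: np.ndarray, k: int = 3) -> np.ndarray:
        """Return indices of the k axial slices with the highest intensity
        variance. Variance is a stand-in criterion; the paper's actual
        slice-selection rule is not stated in the abstract."""
        variances = volume.reshape(volume.shape[0], -1).var(axis=1)
        return np.sort(np.argsort(variances)[-k:])  # keep anatomical order

    def to_uint8(slice_2d: np.ndarray) -> np.ndarray:
        """Min-max cast a floating-point slice to an 8-bit 2D image."""
        lo, hi = slice_2d.min(), slice_2d.max()
        return ((slice_2d - lo) / (hi - lo + 1e-8) * 255).astype(np.uint8)

    def fuse(mri_2d: np.ndarray, pet_2d: np.ndarray, alpha: float = 0.5) -> np.ndarray:
        """Blend co-registered MRI and PET slices into one image.
        A weighted average is one simple fusion operator; the paper's
        operator may differ."""
        blended = (alpha * mri_2d.astype(np.float32)
                   + (1.0 - alpha) * pet_2d.astype(np.float32))
        return blended.astype(np.uint8)

    # Toy stand-ins for co-registered 3D MRI and PET volumes.
    rng = np.random.default_rng(0)
    mri = rng.random((96, 128, 128))
    pet = rng.random((96, 128, 128))

    # Select slice indices once (here from the MRI volume) so the same
    # anatomical positions are taken from both modalities before fusion.
    idx = select_slice_indices(mri, k=3)
    fused = [fuse(to_uint8(mri[i]), to_uint8(pet[i])) for i in idx]
    print(fused[0].shape, fused[0].dtype)  # (128, 128) uint8, ready for a 2D CNN

Stacking the two modalities as separate channels of a multi-channel image would be an equally plausible fusion choice; the abstract states only that the fused image retains information from both modalities.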

Keywords