IEEE Access (Jan 2023)

Explainable Artificial Intelligence (EXAI) Models for Early Prediction of Parkinson’s Disease Based on Spiral and Wave Drawings

  • S. Saravanan,
  • Kannan Ramkumar,
  • K. Narasimhan,
  • Subramaniyaswamy Vairavasundaram,
  • Ketan Kotecha,
  • Ajith Abraham

DOI
https://doi.org/10.1109/ACCESS.2023.3291406
Journal volume & issue
Vol. 11
pp. 68366–68378

Abstract


Parkinson’s disease (PD) is a rapidly growing neurodegenerative disorder that primarily affects the elderly population. To date, there is no cure for PD, and diagnosing it in its early stages is difficult; early treatment, however, helps people with Parkinson’s disease improve their quality of life. The primary goal of this work is to increase the early diagnostic accuracy of Parkinson’s disease using deep learning models and to make those models more transparent and trustworthy. Understanding how such classifiers arrive at their predictions about Parkinson’s disease has proven challenging, and it would be valuable if their outcomes could be explained in a reliable and trustworthy manner. Explainable Artificial Intelligence (EXAI) focuses on enhancing clinical health practices and bringing transparency to predictive analysis, both of which are critical in the healthcare arena. We propose a new hybrid deep transfer learning model to distinguish PD patients from healthy individuals. The proposed architecture combines the advantages of both VGG19 and GoogLeNet (Inception). This study also reports experimental outcomes for several pre-trained models, including AlexNet, DenseNet-201, VGG-19, SqueezeNet 1.1, and ResNet-50. The VGG19-INC model predicts PD with an accuracy of 98.45%, higher than other state-of-the-art approaches, demonstrating the proposed work’s superiority and robustness. To demystify the VGG19-INC model, explainable AI approaches such as LIME are used to identify the specific regions of the spiral and wave drawings that contribute most to the model’s prediction. These methods provide local interpretation, making it easier to understand how the model arrives at its conclusions.
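To make the described pipeline concrete, the sketch below illustrates one plausible reading of the abstract: two frozen ImageNet backbones (VGG19 and InceptionV3) used as parallel feature extractors whose pooled features are concatenated and classified, followed by a LIME local explanation of a single spiral/wave drawing. The exact layer sizes, preprocessing, input resolution, and data pipeline are illustrative assumptions, not the authors’ released code.

```python
# Minimal sketch (assumed architecture, not the paper's official implementation):
# a VGG19 + Inception feature-fusion classifier for spiral/wave drawings,
# explained locally with LIME.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model
from lime import lime_image
from skimage.segmentation import mark_boundaries

IMG_SHAPE = (224, 224, 3)   # assumed input size shared by both branches


def build_vgg19_inc(num_classes: int = 2) -> Model:
    inp = layers.Input(shape=IMG_SHAPE)

    # Two frozen ImageNet backbones act as parallel feature extractors.
    vgg = tf.keras.applications.VGG19(include_top=False, weights="imagenet")
    inc = tf.keras.applications.InceptionV3(include_top=False, weights="imagenet")
    vgg.trainable = False
    inc.trainable = False

    v = layers.GlobalAveragePooling2D()(vgg(inp))
    g = layers.GlobalAveragePooling2D()(inc(inp))

    # Fuse the two feature vectors and classify (hypothetical head sizes).
    x = layers.Concatenate()([v, g])
    x = layers.Dense(256, activation="relu")(x)
    x = layers.Dropout(0.5)(x)
    out = layers.Dense(num_classes, activation="softmax")(x)
    return Model(inp, out, name="vgg19_inc")


model = build_vgg19_inc()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=20)  # drawings dataset assumed


def predict_fn(images: np.ndarray) -> np.ndarray:
    """Batch of HxWx3 images -> class probabilities (signature LIME expects)."""
    batch = tf.image.resize(images, IMG_SHAPE[:2]) / 255.0  # simple rescale assumed
    return model.predict(np.asarray(batch), verbose=0)


# Local explanation of a single drawing (placeholder image stands in for real data).
drawing = np.random.randint(0, 255, size=IMG_SHAPE, dtype=np.uint8)
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(drawing, predict_fn,
                                         top_labels=1, hide_color=0,
                                         num_samples=1000)
img, mask = explanation.get_image_and_mask(explanation.top_labels[0],
                                           positive_only=True, num_features=5,
                                           hide_rest=False)
overlay = mark_boundaries(img / 255.0, mask)  # highlights regions driving the prediction
```

In this reading, fusing pooled features from two pretrained backbones keeps training cheap (only the small classification head is learned), while LIME perturbs superpixels of the input drawing to show which strokes push the prediction toward PD or healthy.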

Keywords