Journal of Medical Internet Research (Oct 2021)

Detecting Parkinson Disease Using a Web-Based Speech Task: Observational Study

  • Wasifur Rahman,
  • Sangwu Lee,
  • Md Saiful Islam,
  • Victor Nikhil Antony,
  • Harshil Ratnu,
  • Mohammad Rafayet Ali,
  • Abdullah Al Mamun,
  • Ellen Wagner,
  • Stella Jensen-Roberts,
  • Emma Waddell,
  • Taylor Myers,
  • Meghan Pawlik,
  • Julia Soto,
  • Madeleine Coffey,
  • Aayush Sarkar,
  • Ruth Schneider,
  • Christopher Tarolli,
  • Karlo Lizarraga,
  • Jamie Adams,
  • Max A Little,
  • E Ray Dorsey,
  • Ehsan Hoque

DOI: https://doi.org/10.2196/26305
Journal volume & issue: Vol. 23, No. 10, p. e26305

Abstract


Background: Access to neurological care for Parkinson disease (PD) is a rare privilege for millions of people worldwide, especially in resource-limited countries. In 2013, there were just 1200 neurologists in India for a population of 1.3 billion people; in Africa, the average population per neurologist exceeds 3.3 million people. In contrast, 60,000 people receive a diagnosis of PD every year in the United States alone, and similar patterns of rising PD cases, fueled mostly by environmental pollution and an aging population, can be seen worldwide. The current projection of more than 12 million patients with PD worldwide by 2040 is only part of the picture given that more than 20% of patients with PD remain undiagnosed. Timely diagnosis and frequent assessment are key to ensuring timely and appropriate medical intervention, thus improving the quality of life of patients with PD.

Objective: In this paper, we propose a web-based framework that can help anyone, anywhere in the world, record a short speech task and analyze the recorded data to screen for PD.

Methods: We collected data from 726 unique participants (PD: 262/726, 36.1%; non-PD: 464/726, 63.9%; average age 61 years) from all over the United States and beyond. A small portion of the data (approximately 54/726, 7.4%) was collected in a laboratory setting to compare the performance of models trained on noisy home-environment data against models trained on high-quality laboratory data. Participants were instructed to utter a popular pangram containing all the letters of the English alphabet: "the quick brown fox jumps over the lazy dog." We extracted both standard acoustic features (mel-frequency cepstral coefficients [MFCCs] and jitter and shimmer variants) and deep learning-based embedding features from the speech data. Using these features, we trained several machine learning algorithms. We also applied model interpretation techniques such as Shapley additive explanations (SHAP) to ascertain the importance of each feature in determining the model's output.

Results: We achieved an area under the curve of 0.753 for determining the presence of self-reported PD by modeling the standard acoustic features with XGBoost, a gradient-boosted decision tree model. Further analysis revealed that the widely used MFCC features and a subset of previously validated dysphonia features designed for detecting PD from a verbal phonation task (pronouncing "ahh") influence the model's decision the most.

Conclusions: Our model performed equally well on data collected in a controlled laboratory environment and in the wild, across different gender and age groups. Using this tool, we can collect data from almost anyone, anywhere, with an audio-enabled device and help participants screen for PD remotely, contributing to equity and access in neurological care.
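The abstract describes the pipeline only at a high level, and the authors' code is not included here. The sketch below is a minimal, illustrative approximation of such a pipeline under stated assumptions: it uses librosa for MFCC extraction (the paper's jitter and shimmer variants and deep embeddings are omitted), xgboost for the classifier, and shap for feature attribution; recording_paths, labels, and all hyperparameters are hypothetical placeholders, not values from the study.

# Illustrative sketch only; library choices and parameters are assumptions,
# not the authors' published implementation.
import numpy as np
import librosa
import xgboost as xgb
import shap
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

def acoustic_features(wav_path, sr=16000, n_mfcc=13):
    """Summarize one pangram recording as a fixed-length acoustic feature vector."""
    y, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    # Mean and standard deviation of each MFCC over time; jitter/shimmer
    # variants (e.g., computed with Praat/parselmouth) would be appended here.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# recording_paths and labels are hypothetical: one audio file per participant,
# with label 1 for self-reported PD and 0 for non-PD.
X = np.stack([acoustic_features(p) for p in recording_paths])
y = np.array(labels)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# Gradient-boosted decision tree classifier, as in the paper's best-performing model.
model = xgb.XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X_train, y_train)

# Evaluate with area under the ROC curve, the metric reported in the Results.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"AUC: {auc:.3f}")

# SHAP values indicate which acoustic features drive the model's decisions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test)

In practice, a tree-based model plus SHAP is a common pairing for this kind of tabular acoustic-feature problem because the attributions can be computed exactly for tree ensembles and inspected per feature, which is how the paper identifies MFCCs and validated dysphonia features as the most influential inputs.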