Scientific Reports (Nov 2024)

Digital detection of Alzheimer’s disease using smiles and conversations with a chatbot

  • Haruka Takeshige-Amano,
  • Genko Oyama,
  • Mayuko Ogawa,
  • Keiko Fusegi,
  • Taiki Kambe,
  • Kenta Shiina,
  • Shin-ichi Ueno,
  • Ayami Okuzumi,
  • Taku Hatano,
  • Yumiko Motoi,
  • Ito Kawakami,
  • Maya Ando,
  • Sachiko Nakayama,
  • Yoshinori Ishida,
  • Shun Maei,
  • Xiangxun Lu,
  • Tomohisa Kobayashi,
  • Rina Wooden,
  • Susumu Ota,
  • Ken Morito,
  • Yoshitaka Ito,
  • Yoshihiro Nakajima,
  • Asako Yoritaka,
  • Tadafumi Kato,
  • Nobutaka Hattori

DOI
https://doi.org/10.1038/s41598-024-77220-0
Journal volume & issue
Vol. 14, no. 1
pp. 1–10

Abstract

In super-aged societies, dementia has become a critical issue, underscoring the urgent need for tools that can assess cognitive status effectively in various sectors, including financial and business settings. Facial and speech features have been explored as cost-effective biomarkers of dementia, including Alzheimer’s disease (AD). We aimed to establish an easy, automatic, and broadly applicable screening tool for AD using a chatbot and artificial intelligence. Smile images, together with visual and auditory data from natural conversations with a chatbot, were collected from 99 healthy controls (HCs) and 93 individuals with AD or mild cognitive impairment due to AD (PwA) and analyzed using machine learning. A subset of 8 facial and 21 sound features distinguished PwA from HCs with a high area under the receiver operating characteristic curve of 0.94 ± 0.05. Another subset of 8 facial and 20 sound features predicted cognitive test scores with a mean absolute error as low as 5.78 ± 0.08. These results were superior to those obtained from facial or auditory data alone, or from conventional image depiction tasks. Thus, by combining spontaneous sound and facial data obtained through conversations with a chatbot, the proposed model can be put to practical use in real-life scenarios.
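
The sketch below is a minimal illustration, not the authors’ actual pipeline, of the two evaluations described in the abstract: classifying PwA versus HCs with the area under the ROC curve, and predicting cognitive test scores with the mean absolute error, both on a combined set of facial and sound features. The synthetic data, the 8 + 21 feature split, and the random-forest models are placeholder assumptions for illustration only.

    # Minimal sketch (assumed setup, not the published method): cross-validated
    # ROC AUC for PwA-vs-HC classification and MAE for cognitive-score regression
    # on combined facial + sound feature vectors.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
    from sklearn.model_selection import KFold, cross_val_score

    rng = np.random.default_rng(0)

    # Placeholder data: 99 HCs + 93 PwA, 8 facial + 21 sound features per participant.
    n_hc, n_pwa = 99, 93
    X = rng.normal(size=(n_hc + n_pwa, 8 + 21))            # combined feature matrix
    y_class = np.array([0] * n_hc + [1] * n_pwa)           # 0 = HC, 1 = PwA
    y_score = rng.normal(loc=24, scale=5, size=len(y_class))  # stand-in cognitive test score

    cv = KFold(n_splits=5, shuffle=True, random_state=0)

    # Classification: area under the ROC curve (the paper reports 0.94 ± 0.05).
    auc = cross_val_score(RandomForestClassifier(random_state=0),
                          X, y_class, scoring="roc_auc", cv=cv)
    print(f"ROC AUC: {auc.mean():.2f} ± {auc.std():.2f}")

    # Regression: mean absolute error (the paper reports 5.78 ± 0.08).
    mae = -cross_val_score(RandomForestRegressor(random_state=0),
                           X, y_score, scoring="neg_mean_absolute_error", cv=cv)
    print(f"MAE: {mae.mean():.2f} ± {mae.std():.2f}")

On the placeholder random data these metrics will be near chance; the point is only to show how a combined facial-plus-sound feature matrix feeds one classifier and one regressor, mirroring the two results reported above.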