Frontiers in Human Neuroscience (Apr 2021)

Decoding Multiple Sound-Categories in the Auditory Cortex by Neural Networks: An fNIRS Study

  • So-Hyeon Yoo,
  • Hendrik Santosa,
  • Chang-Seok Kim,
  • Keum-Shik Hong

DOI
https://doi.org/10.3389/fnhum.2021.636191
Journal volume & issue
Vol. 15

Abstract


This study aims to decode the hemodynamic responses (HRs) evoked by multiple sound-categories using functional near-infrared spectroscopy (fNIRS). Six different sounds were given as stimuli (English, non-English, annoying, nature, music, and gunshot). Oxy-hemoglobin (HbO) concentration changes were measured over both hemispheres of the auditory cortex while 18 healthy subjects listened to 10-s blocks of the six sound-categories. Long short-term memory (LSTM) networks were used as a classifier. The classification accuracy for the six-class problem was 20.38 ± 4.63%. Although the LSTM networks' performance was only slightly above the chance level (16.67% for six classes), it is noteworthy that the data could be classified subject-wise without feature selection.
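To illustrate the kind of classifier the abstract describes, the sketch below shows a minimal subject-wise, six-class LSTM applied to raw HbO time courses without feature selection. The channel count, sampling rate, network size, and training settings are assumptions for illustration only, not values reported in the paper.

```python
# Minimal sketch (assumptions): a six-class LSTM classifier for fNIRS HbO time
# series, trained per subject on raw 10-s blocks without feature selection.
# Channel count, sampling rate, and layer sizes are illustrative, not from the paper.
import numpy as np
import tensorflow as tf

N_CHANNELS = 16      # assumed number of fNIRS channels over both auditory cortices
FS = 10              # assumed sampling rate (Hz)
BLOCK_SEC = 10       # 10-s stimulus blocks, as described in the abstract
N_CLASSES = 6        # English, non-English, annoying, nature, music, gunshot
T = FS * BLOCK_SEC   # time steps per block

def build_lstm_classifier():
    """LSTM over raw HbO time courses; softmax output over the six sound-categories."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(T, N_CHANNELS)),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Toy usage: random data standing in for one subject's HbO blocks and labels.
X = np.random.randn(120, T, N_CHANNELS).astype("float32")   # trials x time x channels
y = np.random.randint(0, N_CLASSES, size=120)                # sound-category labels
model = build_lstm_classifier()
model.fit(X, y, epochs=5, batch_size=16, verbose=0)
```

Training one such model per subject mirrors the subject-wise evaluation mentioned in the abstract; accuracy would then be compared against the 16.67% six-class chance level.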

Keywords