Sensors (Oct 2021)

Silent EEG-Speech Recognition Using Convolutional and Recurrent Neural Network with 85% Accuracy of 9 Words Classification

  • Darya Vorontsova,
  • Ivan Menshikov,
  • Aleksandr Zubov,
  • Kirill Orlov,
  • Peter Rikunov,
  • Ekaterina Zvereva,
  • Lev Flitman,
  • Anton Lanikin,
  • Anna Sokolova,
  • Sergey Markov,
  • Alexandra Bernadotte

DOI
https://doi.org/10.3390/s21206744
Journal volume & issue
Vol. 21, no. 20
p. 6744

Abstract

In this work, we focus on silent speech recognition in electroencephalography (EEG) data of healthy individuals to advance brain–computer interface (BCI) development and to include people with neurodegeneration and movement and communication difficulties in society. Our dataset was recorded from 270 healthy subjects during silent speech of eight different Russian words (commands): ‘forward’, ‘backward’, ‘up’, ‘down’, ‘help’, ‘take’, ‘stop’, and ‘release’, and one pseudoword. We began by demonstrating that silent word distributions can be statistically very close and that words describing directed movements share similar patterns of brain activity. However, after training on one individual, we achieved 85% accuracy on 9-word classification (including the pseudoword) and 88% average accuracy on binary classification. We show that a smaller dataset collected from one participant allows for building a more accurate classifier for that subject than a larger dataset collected from a group of people. At the same time, we show that the learning outcomes on a limited sample of EEG data are transferable to the general population. Thus, we demonstrate the possibility of using selected command words to create an EEG-based input device for people on whom the neural network classifier has not been trained, which is particularly important for people with disabilities.
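The abstract describes a convolutional plus recurrent architecture classifying nine silent-speech commands from EEG. The sketch below is not the authors' code; it is a minimal illustration, assuming PyTorch, 64 EEG channels, 256-sample windows, and 9 output classes, with all layer sizes chosen for illustration rather than taken from the paper.

```python
# Minimal sketch (not the authors' implementation): a CNN + GRU classifier
# for windowed EEG. Channel count, window length, and layer sizes are
# illustrative assumptions, not values reported in the paper.
import torch
import torch.nn as nn

class EEGConvRNN(nn.Module):
    def __init__(self, n_channels=64, n_classes=9, hidden=128):
        super().__init__()
        # Temporal convolutions extract local features along the time axis.
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=7, padding=3),
            nn.BatchNorm1d(64),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=5, padding=2),
            nn.BatchNorm1d(128),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # A recurrent layer aggregates the downsampled feature sequence.
        self.rnn = nn.GRU(input_size=128, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):            # x: (batch, channels, time)
        f = self.conv(x)             # (batch, 128, time / 4)
        f = f.transpose(1, 2)        # (batch, time / 4, 128)
        _, h = self.rnn(f)           # h: (1, batch, hidden)
        return self.head(h[-1])      # (batch, n_classes) logits

# Example: one batch of 8 windows, 64 channels, 256 samples each.
model = EEGConvRNN()
logits = model(torch.randn(8, 64, 256))
print(logits.shape)  # torch.Size([8, 9])
```

For the binary (two-word) setting mentioned in the abstract, the same sketch applies with n_classes=2; only the output head changes.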

Keywords