IEEE Access (Jan 2024)

Enhancing the Classification Accuracy of EEG-Informed Inner Speech Decoder Using Multi-Wavelet Feature and Support Vector Machine

  • Mokhles M. Abdulghani
  • Wilbur L. Walters
  • H. Khalid Abed

DOI
https://doi.org/10.1109/ACCESS.2024.3474854
Journal volume & issue
Vol. 12
pp. 147929–147941

Abstract


Speech involves the synchronization of the brain and the oral articulators. Inner speech, also known as imagined speech or covert speech, refers to thinking in the form of sound without intentional movement of the lips, tongue, or hands. Decoding human thoughts is a powerful technique that can help individuals who have lost the ability to speak. This paper introduces a high-performance brain-wave decoder based on inner speech, using a novel feature extraction method. The approach combines Support Vector Machine (SVM) classification with multi-wavelet feature extraction to decode two EEG-based inner speech datasets (Data 1 and Data 2) into internally spoken words. The proposed approach achieved an overall classification accuracy of 68.20%, precision of 68.22%, recall of 68.20%, and F1-score of 68.21% for Data 1, and an accuracy of 97.50%, precision of 97.73%, recall of 97.50%, and F1-score of 97.61% for Data 2. Additionally, the area under the receiver operating characteristic curve (AUC-ROC) demonstrated the validity of the proposed approach for classifying inner speech commands, reaching a macro-average of 78.76% for Data 1 and 99.32% for Data 2. The EEG-based inner speech classification method proposed in this research has the potential to improve communication for patients with speech disorders, mutism, cognitive development issues, executive function problems, and mental disorders.
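To make the described pipeline concrete, the sketch below shows one plausible reading of "multi-wavelet features followed by an SVM": discrete wavelet decompositions with several mother wavelets (via PyWavelets), simple sub-band statistics as features, and a scikit-learn SVC evaluated with the metrics reported in the abstract. The data, wavelet choices, statistics, and SVM settings are illustrative assumptions, not the authors' exact implementation.

```python
# Hypothetical sketch: multi-wavelet features + SVM for EEG trial classification.
# Random placeholder data stands in for real EEG; all settings are illustrative.
import numpy as np
import pywt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples, n_classes = 200, 8, 512, 4
X_raw = rng.standard_normal((n_trials, n_channels, n_samples))  # placeholder EEG trials
y = rng.integers(0, n_classes, n_trials)                        # placeholder command labels

def multiwavelet_features(trial, wavelets=("db4", "sym5", "coif3"), level=4):
    """Concatenate sub-band statistics from DWT decompositions computed with
    several mother wavelets (one flavour of a 'multi-wavelet' feature vector;
    the paper's exact recipe may differ)."""
    feats = []
    for ch in trial:                      # each EEG channel
        for w in wavelets:                # each mother wavelet
            coeffs = pywt.wavedec(ch, w, level=level)
            for c in coeffs:              # approximation + detail bands
                feats.extend([np.mean(np.abs(c)),   # mean magnitude
                              np.std(c),            # spread
                              np.sum(c ** 2)])      # band energy
    return np.asarray(feats)

X = np.stack([multiwavelet_features(t) for t in X_raw])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=0)

scaler = StandardScaler().fit(X_tr)
clf = SVC(kernel="rbf", C=10, gamma="scale", probability=True)
clf.fit(scaler.transform(X_tr), y_tr)

y_pred = clf.predict(scaler.transform(X_te))
y_prob = clf.predict_proba(scaler.transform(X_te))
print("accuracy :", accuracy_score(y_te, y_pred))
print("precision:", precision_score(y_te, y_pred, average="weighted"))
print("recall   :", recall_score(y_te, y_pred, average="weighted"))
print("F1-score :", f1_score(y_te, y_pred, average="weighted"))
print("macro AUC:", roc_auc_score(y_te, y_prob, average="macro", multi_class="ovr"))
```

On random placeholder data these scores will hover around chance; the point is only to show how multi-wavelet feature vectors and an SVM classifier fit together and how the reported metrics would be computed.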

Keywords