Frontiers in Human Neuroscience (Aug 2021)

Classification of Complex Emotions Using EEG and Virtual Environment: Proof of Concept and Therapeutic Implication

  • Eleonora De Filippi,
  • Mara Wolter,
  • Bruno R. P. Melo,
  • Carlos J. Tierra-Criollo,
  • Tiago Bortolini,
  • Gustavo Deco,
  • Jorge Moll

DOI
https://doi.org/10.3389/fnhum.2021.711279
Journal volume & issue
Vol. 15

Abstract

Over the last decades, neurofeedback training for emotional self-regulation has received significant attention from the scientific and clinical communities. Most studies have investigated emotions using functional magnetic resonance imaging (fMRI), including real-time applications in neurofeedback training. However, the electroencephalogram (EEG) is a more suitable tool for therapeutic applications. Our study aims to establish a method for classifying discrete complex emotions (e.g., tenderness and anguish) elicited through a near-immersive scenario that can later be used for EEG-neurofeedback. EEG-based affective computing studies have mainly focused on dimension-based emotion classification, commonly using passive elicitation through single-modality stimuli. Here, we integrated both passive and active elicitation methods. We recorded electrophysiological data during emotion-evoking trials, combining emotional self-induction with a multimodal virtual environment. We extracted correlational and time-frequency features, including frontal-alpha asymmetry (FAA), using complex Morlet wavelet convolution. With future real-time applications in mind, we performed within-subject classification using 1-s windows as samples and applied trial-specific cross-validation. We opted for the Support Vector Machine (SVM), a traditional machine-learning classifier with low computational complexity that has been validated in online settings. Individual-based cross-validation using the full feature sets showed considerable between-subject variability: individual accuracies ranged from 59.2 to 92.9% using time-frequency/FAA features and from 62.4 to 92.4% using correlational features. Features from the temporal, occipital, and left-frontal channels were the most discriminative between the two emotions. Our results show that the proposed pipeline is suitable for individual-based classification of discrete emotions, paving the way for personalized EEG-neurofeedback training.

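To make the pipeline concrete, the following is a minimal sketch (not the authors' code) of the core steps the abstract describes: alpha-band power via complex Morlet wavelet convolution, an FAA feature, and an SVM evaluated with trial-specific cross-validation on 1-s windows. The sampling rate, channel pair (F3/F4), 10-Hz alpha center frequency, wavelet width, and all data are illustrative assumptions.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.model_selection import GroupKFold, cross_val_score

    FS = 250  # sampling rate in Hz (assumed)

    def morlet_power(signal, freq, fs=FS, n_cycles=7):
        """Instantaneous power via convolution with a complex Morlet wavelet."""
        t = np.arange(-n_cycles / freq, n_cycles / freq, 1.0 / fs)
        sigma = n_cycles / (2 * np.pi * freq)      # width of the Gaussian taper
        wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma**2))
        wavelet /= np.abs(wavelet).sum()           # crude amplitude normalization
        return np.abs(np.convolve(signal, wavelet, mode="same")) ** 2

    def faa_series(left, right, alpha_hz=10.0, fs=FS):
        """FAA time course: log alpha power (right) minus log alpha power (left)."""
        return (np.log(morlet_power(right, alpha_hz, fs))
                - np.log(morlet_power(left, alpha_hz, fs)))

    # Toy data standing in for real trials: (n_trials, 2 channels [F3, F4], samples).
    rng = np.random.default_rng(0)
    trials = rng.standard_normal((20, 2, 5 * FS))  # 20 trials of 5 s each
    trial_labels = np.repeat([0, 1], 10)           # e.g., tenderness vs. anguish

    # One feature vector per 1-s window; remember each window's parent trial.
    X, y, groups = [], [], []
    for trial_id, (trial, label) in enumerate(zip(trials, trial_labels)):
        faa = faa_series(trial[0], trial[1])       # FAA over the whole trial
        for w in range(trial.shape[1] // FS):      # non-overlapping 1-s windows
            X.append([faa[w * FS:(w + 1) * FS].mean()])
            y.append(label)
            groups.append(trial_id)
    X, y, groups = np.array(X), np.array(y), np.array(groups)

    # Trial-specific cross-validation: GroupKFold keeps all windows of a given
    # trial in the same fold, so train and test never share a trial.
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    scores = cross_val_score(clf, X, y, groups=groups, cv=GroupKFold(n_splits=5))
    print("fold accuracies:", np.round(scores, 3))

GroupKFold stands in here for the paper's trial-specific cross-validation; any splitter that groups windows by parent trial would serve, the point being to prevent leakage of within-trial temporal structure between training and test sets.
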
Keywords