IEEE Access (Jan 2023)

Improving Emotion Recognition Systems by Exploiting the Spatial Information of EEG Sensors

  • Guido Gagliardi,
  • Antonio Luca Alfeo,
  • Vincenzo Catrambone,
  • Diego Candia-Rivera,
  • Mario G. C. A. Cimino,
  • Gaetano Valenza

DOI
https://doi.org/10.1109/ACCESS.2023.3268233
Journal volume & issue
Vol. 11
pp. 39544–39554

Abstract

Electroencephalography (EEG)-based emotion recognition is gaining increasing importance due to its potential applications in scientific fields ranging from psychophysiology to neuromarketing. A number of approaches have been proposed that use machine learning (ML) to achieve high recognition performance, which typically relies on features engineered from brain activity dynamics. Since ML performance can be improved by a 2D feature representation that exploits the spatial relationships among the features, here we propose a novel input representation in which EEG features are re-arranged as an image reflecting the top view of the subject’s scalp. This approach enables emotion recognition through image-based ML methods such as pre-trained deep neural networks or “trained-from-scratch” convolutional neural networks. We employ both of these techniques to demonstrate the effectiveness of the proposed input representation, and we compare their recognition performance against state-of-the-art tabular data analysis approaches, which do not exploit the spatial relationships between the sensors. We test the proposed approach on two publicly available benchmark datasets for EEG-based emotion recognition, namely DEAP and MAHNOB-HCI. Our results show that the “trained-from-scratch” convolutional neural network outperforms the best approaches in the literature, achieving 97.8% and 98.3% accuracy in valence and arousal classification on MAHNOB-HCI, and 91% and 90.4% on DEAP, respectively.
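
To illustrate the kind of input representation the abstract describes, the following minimal Python sketch maps per-channel EEG features onto a 2D grid that mirrors the top view of the scalp, so that image-based models (e.g., CNNs) can exploit the sensors' spatial layout. The 9x9 grid, the electrode coordinates, and the three example features are assumptions made for illustration only; they are not the authors' implementation.

    # Minimal sketch (not the paper's code): place per-channel feature vectors
    # on a 2D scalp-shaped grid, producing an image of shape (rows, cols, n_features).
    import numpy as np

    # Hypothetical (row, col) positions on a 9x9 scalp map for a subset of
    # 10-20 system electrodes; the paper's exact layout may differ.
    SCALP_GRID = {
        "Fp1": (0, 3), "Fp2": (0, 5),
        "F7": (2, 0), "F3": (2, 2), "Fz": (2, 4), "F4": (2, 6), "F8": (2, 8),
        "T7": (4, 0), "C3": (4, 2), "Cz": (4, 4), "C4": (4, 6), "T8": (4, 8),
        "P7": (6, 0), "P3": (6, 2), "Pz": (6, 4), "P4": (6, 6), "P8": (6, 8),
        "O1": (8, 3), "O2": (8, 5),
    }

    def features_to_scalp_image(channel_features, grid_size=(9, 9)):
        """Arrange per-channel feature vectors into a (rows, cols, n_features)
        image; grid cells with no electrode remain zero."""
        n_features = len(next(iter(channel_features.values())))
        image = np.zeros((*grid_size, n_features), dtype=np.float32)
        for ch, feats in channel_features.items():
            if ch in SCALP_GRID:
                r, c = SCALP_GRID[ch]
                image[r, c, :] = feats
        return image

    # Example: three features per channel (e.g., theta/alpha/beta band power).
    rng = np.random.default_rng(0)
    feats = {ch: rng.standard_normal(3) for ch in SCALP_GRID}
    img = features_to_scalp_image(feats)
    print(img.shape)  # (9, 9, 3) -- ready for a CNN or a pre-trained image model

The resulting 3D array can be fed to a "trained-from-scratch" CNN directly, or resized/tiled to match the input size expected by a pre-trained image network, which is the kind of comparison the abstract reports.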

Keywords