IEEE Access (Jan 2022)

Wavelet ELM-AE Based Data Augmentation and Deep Learning for Efficient Emotion Recognition Using EEG Recordings

  • Berna Ari,
  • Kamran Siddique,
  • Omer Faruk Alcin,
  • Muzaffer Aslan,
  • Abdulkadir Sengur,
  • Raja Majid Mehmood

DOI
https://doi.org/10.1109/ACCESS.2022.3181887
Journal volume & issue
Vol. 10
pp. 72171 – 72181

Abstract


Emotion perception is critical for behavior prediction. Emotional states can be captured in many ways, for example by observing body movements and facial expressions, but physiological markers such as electroencephalography (EEG) have gained popularity because facial expressions do not always adequately convey true emotion. This study has two main aims. The first is to recognize four emotion categories from EEG data using deep learning architectures. The second is to increase the number of samples in the dataset. To this end, a novel data augmentation approach, the Extreme Learning Machine Wavelet Auto-Encoder (ELM-W-AE), is proposed. It is simpler and faster than other synthetic data augmentation approaches. Large datasets are important for the performance of deep architectures, which is why both classical and synthetic data augmentation methods have become popular recently; the ELM-W-AE is adopted here for its efficiency and its ability to reproduce signal detail. The ELM-AE structure uses wavelet activation functions such as Gaussian, GGW, Mexican hat, Meyer, Morlet, and Shannon. To classify the EEG signals with deep convolutional architectures, the signals are converted into scalogram images using the Continuous Wavelet Transform (CWT), and a ResNet18 architecture performs the emotion recognition. The proposed technique is evaluated on the GAMEEMO dataset, which was collected during gameplay and contains recordings for each of the four emotion states. The image dataset created from the signals was divided into two groups, 70% for training and 30% for testing, and ResNet18 was fine-tuned with the augmented images (training images only). It achieved 99.6% classification accuracy on the test set. Compared with other approaches on the same dataset, the proposed method yields an improvement of approximately 22%.
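To make the augmentation idea concrete, the following is a minimal sketch of an ELM autoencoder with a wavelet (Mexican hat) hidden activation, whose reconstructions are used as synthetic samples. It is an illustration under assumed conventions (random input weights, least-squares output weights, NumPy only); the function names, hidden-layer size, and choice of wavelet are hypothetical and not taken from the paper.

    import numpy as np

    def mexican_hat(x):
        # Mexican hat wavelet used as the hidden-layer activation
        # (one of the wavelet activations listed in the abstract).
        return (1.0 - x**2) * np.exp(-0.5 * x**2)

    def elm_wavelet_ae_augment(X, n_hidden=128, seed=0):
        # Hypothetical ELM wavelet autoencoder: random input weights and biases,
        # wavelet activation, least-squares output weights; the reconstructions
        # serve as synthetic (augmented) samples.
        rng = np.random.default_rng(seed)
        n_samples, n_features = X.shape
        W = rng.standard_normal((n_features, n_hidden))   # random input weights
        b = rng.standard_normal(n_hidden)                 # random biases
        H = mexican_hat(X @ W + b)                        # hidden representation
        beta, *_ = np.linalg.lstsq(H, X, rcond=None)      # decoder weights
        return H @ beta                                   # synthetic samples

    # Usage with toy data standing in for EEG feature vectors.
    X = np.random.randn(32, 256)
    X_aug = elm_wavelet_ae_augment(X)

Because the output weights are solved in closed form rather than trained by backpropagation, this kind of autoencoder is cheap to fit, which is consistent with the speed argument made in the abstract.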
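The scalogram-plus-ResNet18 pipeline could look roughly like the sketch below, assuming PyWavelets for the CWT and torchvision (>= 0.13) for the pretrained backbone. The Morlet mother wavelet, the scale range, the input length, and the four-class output head are illustrative assumptions, not the paper's exact settings.

    import numpy as np
    import pywt
    import torch
    import torch.nn as nn
    from torchvision import models

    def eeg_to_scalogram(signal, scales=np.arange(1, 65), wavelet="morl"):
        # CWT of one EEG segment -> |coefficients| image, replicated to 3 channels.
        coeffs, _ = pywt.cwt(signal, scales, wavelet)
        img = np.abs(coeffs)
        img = (img - img.min()) / (img.max() - img.min() + 1e-8)  # scale to [0, 1]
        return torch.tensor(img, dtype=torch.float32).unsqueeze(0).repeat(3, 1, 1)

    # Fine-tune a pretrained ResNet18 by replacing its head with a 4-class layer.
    model = models.resnet18(weights="DEFAULT")
    model.fc = nn.Linear(model.fc.in_features, 4)
    model.eval()

    # Toy EEG segment -> scalogram -> class logits for the four emotion states.
    x = eeg_to_scalogram(np.random.randn(512)).unsqueeze(0)
    with torch.no_grad():
        logits = model(x)   # shape: (1, 4)

In an actual training run, the augmented scalograms from the training split would be fed through a standard fine-tuning loop (cross-entropy loss, optimizer over all or part of the network), with the 30% test split held out as described in the abstract.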

Keywords