Scientific Data (Aug 2024)

A Multimodal Dataset for Mixed Emotion Recognition

  • Pei Yang,
  • Niqi Liu,
  • Xinge Liu,
  • Yezhi Shu,
  • Wenqi Ji,
  • Ziqi Ren,
  • Jenny Sheng,
  • Minjing Yu,
  • Ran Yi,
  • Dan Zhang,
  • Yong-Jin Liu

DOI
https://doi.org/10.1038/s41597-024-03676-4
Journal volume & issue
Vol. 11, no. 1
pp. 1 – 14

Abstract

Mixed emotions have attracted increasing research interest, but existing datasets rarely target mixed emotion recognition from multimodal signals, which hinders affective computing research on mixed emotions. To address this gap, we present a multimodal dataset with four kinds of signals recorded while participants watched mixed and non-mixed emotion videos. To ensure effective emotion induction, we first applied a rule-based video filtering step to select videos that could elicit stronger positive, negative, and mixed emotions. We then conducted an experiment with 80 participants, recording EEG, GSR, PPG, and frontal face videos while they watched the selected clips. We also collected subjective emotional ratings on the PANAS, VAD, and amusement-disgust dimensions. In total, the dataset comprises multimodal signal data and self-assessment data from 73 participants. We further present technical validations of emotion induction and of mixed emotion classification from physiological signals and face videos. The average accuracy of 3-class classification (i.e., positive, negative, and mixed) reaches 80.96% when using an SVM with features from all modalities, indicating the feasibility of identifying mixed emotional states.
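
The abstract reports 3-class (positive, negative, mixed) classification with an SVM over features from all modalities. The sketch below is a minimal illustration of that kind of pipeline, not the authors' implementation: the feature dimensions, trial counts, kernel, and hyperparameters are assumptions, and synthetic data stands in for the actual EEG/GSR/PPG/face-video features.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Hypothetical multimodal feature matrix: one row per trial, columns are
# concatenated EEG, GSR, PPG, and facial features (shapes are assumed).
rng = np.random.default_rng(0)
X = rng.normal(size=(240, 128))      # e.g., 240 trials x 128 features
y = rng.integers(0, 3, size=240)     # 0 = positive, 1 = negative, 2 = mixed

# Standardize features and fit an RBF-kernel SVM (kernel and C are assumed,
# not taken from the paper).
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))

# Estimate 3-class accuracy with 5-fold cross-validation.
scores = cross_val_score(clf, X, y, cv=5)
print(f"Mean 3-class accuracy: {scores.mean():.2%}")
```

With real per-trial features in place of the random arrays, the same pipeline would yield a subject-level accuracy estimate comparable to the figure reported in the abstract.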