IEEE Access (Jan 2023)

Cross-Modality Learning by Exploring Modality Interactions for Emotion Reasoning

  • Thi-Dung Tran,
  • Ngoc-Huynh Ho,
  • Sudarshan Pant,
  • Hyung-Jeong Yang,
  • Soo-Hyung Kim,
  • Gueesang Lee

DOI: https://doi.org/10.1109/ACCESS.2023.3283597
Journal volume & issue: Vol. 11, pp. 56634–56648

Abstract

Even without hearing or seeing an individual, humans can determine subtle emotions from a range of cues and the surrounding context. However, existing research on emotion recognition focuses mostly on recognizing speakers' emotions when all modalities are available. In real-world situations, emotion reasoning aims to infer a person's emotions from their surroundings when neither the face nor the voice can be observed. Therefore, in this paper, we propose a novel multimodal approach, based on attention mechanisms, for predicting emotion when one or more modalities are missing. Specifically, we employ self-attention on each unimodal representation to extract its dominant features and utilize compounded paired-modality attention (CPMA) among sets of modalities to identify the context of the target individual, such as the interplay of modalities, and to capture people's interactions in the video. The proposed model is trained on the Multimodal Emotion Reasoning (MEmoR) dataset, which includes visual, audio, text, and personality inputs. It achieves a weighted F1-score of 50.63% for the primary emotion group and 42.7% for the fine-grained one. These results show that our proposed model outperforms conventional approaches to emotion reasoning.
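The pipeline described in the abstract (per-modality self-attention followed by paired-modality cross-attention over whichever modalities are present) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the class names (EmotionReasoner, the "a->b" pair keys), the feature dimension, and the mean-pooling fusion are assumptions, and the actual CPMA module in the paper may combine modality pairs differently.

    # Minimal sketch of self-attention per modality plus pairwise cross-modal
    # attention, tolerant to missing modalities. Names and fusion are hypothetical.
    import itertools
    import torch
    import torch.nn as nn

    class EmotionReasoner(nn.Module):
        def __init__(self, dim=256, heads=4, num_classes=14,
                     modalities=("visual", "audio", "text", "personality")):
            super().__init__()
            self.modalities = modalities
            # One self-attention block per unimodal sequence to extract dominant features.
            self.self_attn = nn.ModuleDict(
                {m: nn.MultiheadAttention(dim, heads, batch_first=True) for m in modalities})
            # One cross-attention block per ordered modality pair (query attends to the other modality).
            self.pair_attn = nn.ModuleDict(
                {f"{a}->{b}": nn.MultiheadAttention(dim, heads, batch_first=True)
                 for a, b in itertools.permutations(modalities, 2)})
            self.classifier = nn.Linear(dim, num_classes)  # num_classes: emotion classes of the task

        def forward(self, feats):
            # feats: dict of modality name -> (batch, seq_len, dim); missing modalities are simply absent.
            present = [m for m in self.modalities if m in feats]
            uni = {m: self.self_attn[m](feats[m], feats[m], feats[m])[0] for m in present}
            pooled = []
            for a, b in itertools.permutations(present, 2):
                out, _ = self.pair_attn[f"{a}->{b}"](uni[a], uni[b], uni[b])
                pooled.append(out.mean(dim=1))  # average over time for a fixed-size pair summary
            # Fuse all pair summaries; fall back to the single modality if only one is available.
            fused = torch.stack(pooled).mean(dim=0) if pooled else uni[present[0]].mean(dim=1)
            return self.classifier(fused)

    # Toy usage: the audio modality is missing, yet the model still produces logits.
    model = EmotionReasoner()
    x = {"visual": torch.randn(2, 8, 256),
         "text": torch.randn(2, 6, 256),
         "personality": torch.randn(2, 1, 256)}
    logits = model(x)  # shape (2, num_classes)

Averaging the pair summaries is only one plausible fusion choice; the point of the sketch is that every modality pair contributes a cross-attended context vector, so the prediction degrades gracefully when a modality is unavailable.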

Keywords