Big Data and Cognitive Computing (Jan 2024)

Mixture of Attention Variants for Modal Fusion in Multi-Modal Sentiment Analysis

  • Chao He,
  • Xinghua Zhang,
  • Dongqing Song,
  • Yingshan Shen,
  • Chengjie Mao,
  • Huosheng Wen,
  • Dingju Zhu,
  • Lihua Cai

DOI
https://doi.org/10.3390/bdcc8020014
Journal volume & issue
Vol. 8, no. 2
p. 14

Abstract

With the popularization of better network access and the penetration of personal smartphones in today’s world, the explosion of multi-modal data, particularly opinionated video messages, has created urgent demands and immense opportunities for Multi-Modal Sentiment Analysis (MSA). Deep learning with the attention mechanism has served as the foundational technique for most state-of-the-art MSA models, owing to its ability to learn complex inter- and intra-relationships among the different modalities embedded in video messages, both temporally and spatially. However, modal fusion remains a major challenge due to the vast feature space created by the interactions among different data modalities. To address this challenge, we propose an MSA algorithm based on deep learning and the attention mechanism, namely the Mixture of Attention Variants for Modal Fusion (MAVMF). The MAVMF algorithm follows a two-stage process: in stage one, self-attention is applied to extract image and text features effectively, and a bidirectional gated recurrent unit (GRU) module captures the dependency relationships across the video discourse context; in stage two, four multi-modal attention variants are leveraged to learn the emotional contributions of important features from different modalities. Our proposed approach is end-to-end and has been shown to achieve superior performance to state-of-the-art algorithms when tested on the two largest public datasets, CMU-MOSI and CMU-MOSEI.
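To make the two-stage design concrete, the following is a minimal PyTorch sketch of the pipeline described above: per-modality self-attention for feature extraction, a bidirectional GRU over the utterance sequence for discourse context, and one cross-modal attention variant (text queries attending over image keys/values) standing in for the fusion stage. The layer names, feature dimensions, and the single fusion variant shown are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class MAVMFSketch(nn.Module):
    """Illustrative two-stage sketch; dimensions and layer choices are assumptions."""

    def __init__(self, text_dim=300, image_dim=512, hidden=128, heads=4):
        super().__init__()
        # Stage 1: project each modality and apply intra-modal self-attention.
        self.text_proj = nn.Linear(text_dim, hidden)
        self.image_proj = nn.Linear(image_dim, hidden)
        self.text_selfattn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.image_selfattn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        # Bidirectional GRU over the sequence of utterances captures discourse context.
        self.context_gru = nn.GRU(hidden, hidden, bidirectional=True, batch_first=True)
        # Stage 2: one cross-modal attention variant (text attends to image).
        self.cross_attn = nn.MultiheadAttention(2 * hidden, heads, batch_first=True)
        self.classifier = nn.Linear(2 * hidden, 1)  # sentiment intensity per utterance

    def forward(self, text_feats, image_feats):
        # text_feats: (batch, num_utterances, text_dim)
        # image_feats: (batch, num_utterances, image_dim)
        t = self.text_proj(text_feats)
        v = self.image_proj(image_feats)
        t, _ = self.text_selfattn(t, t, t)   # intra-modal dependencies (text)
        v, _ = self.image_selfattn(v, v, v)  # intra-modal dependencies (image)
        t, _ = self.context_gru(t)           # (batch, num_utterances, 2*hidden)
        v, _ = self.context_gru(v)
        fused, _ = self.cross_attn(t, v, v)  # text queries attend over image keys/values
        return self.classifier(fused).squeeze(-1)


# Usage with random tensors standing in for utterance-level features.
model = MAVMFSketch()
scores = model(torch.randn(2, 10, 300), torch.randn(2, 10, 512))
print(scores.shape)  # torch.Size([2, 10])
```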

Keywords