Heliyon (Jun 2024)

MIECF: Multi-faceted information extraction and cross-mixture fusion for multimodal aspect-based sentiment analysis

  • Yu Weng,
  • Lin Chen,
  • Sen Wang,
  • Xuming Ye,
  • Xuan Liu,
  • Zheng Liu,
  • Chaomurilige

Journal volume & issue
Vol. 10, no. 12
p. e32967

Abstract


Aspect-level sentiment analysis in multimodal contexts, which aims to precisely identify and interpret the sentiment attitudes linked to a target aspect across diverse data modalities, remains a focal research area driving discourse and innovation in artificial intelligence. However, most existing methods extract visual features from only one facet, such as facial expressions, ignoring valuable information from other key facets, such as text embedded in the image, and thereby losing information. To overcome this constraint, we propose a novel approach, Multi-faceted Information Extraction and Cross-mixture Fusion (MIECF), for Multimodal Aspect-based Sentiment Analysis. Our approach captures more comprehensive visual information from the image and integrates local and global key features drawn from multiple facets. Local features, such as facial expressions and embedded text, provide direct and rich emotional cues, whereas global features often reflect the overall emotional atmosphere and context. To enhance the visual representation, we design a Cross-mixture Fusion method that integrates this local and global multimodal information. In particular, the method establishes semantic relationships between local and global features to eliminate the ambiguity introduced by single-facet information and to achieve more accurate contextual understanding, providing a richer and more precise basis for sentiment analysis. Experimental results show that our approach achieves leading performance, with an accuracy of 79.65% on the Twitter-2015 dataset and Macro-F1 scores of 75.90% and 73.11% on the Twitter-2015 and Twitter-2017 datasets, respectively.
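As an illustration of the local–global integration described above, the sketch below shows one plausible realization using bidirectional cross-attention between local cues (e.g., face and embedded-text embeddings) and global image features. The module name, dimensions, and pooling choices are assumptions made for illustration only; the abstract does not specify the internals of the paper's Cross-mixture Fusion.

import torch
import torch.nn as nn

class CrossMixtureFusionSketch(nn.Module):
    """Illustrative fusion of local (face / OCR-text) and global visual
    features via bidirectional cross-attention; this is a sketch under
    assumptions, not the authors' actual Cross-mixture Fusion."""

    def __init__(self, dim=768, heads=8):
        super().__init__()
        # local features attend to the global context, and vice versa
        self.local_to_global = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.global_to_local = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mix = nn.Sequential(nn.Linear(2 * dim, dim), nn.GELU(), nn.Linear(dim, dim))
        self.norm = nn.LayerNorm(dim)

    def forward(self, local_feats, global_feats):
        # local_feats:  (batch, n_local, dim)  e.g. face and embedded-text embeddings
        # global_feats: (batch, n_global, dim) e.g. patch-level image embeddings
        local_ctx, _ = self.local_to_global(local_feats, global_feats, global_feats)
        global_ctx, _ = self.global_to_local(global_feats, local_feats, local_feats)
        # pool each stream and mix them into a single visual representation
        pooled = torch.cat([local_ctx.mean(dim=1), global_ctx.mean(dim=1)], dim=-1)
        return self.norm(self.mix(pooled))

# toy usage: 2 local cues (face, embedded text) and 49 image patches per sample
fusion = CrossMixtureFusionSketch()
local = torch.randn(4, 2, 768)
global_ = torch.randn(4, 49, 768)
print(fusion(local, global_).shape)  # torch.Size([4, 768])

The cross-attention in each direction lets the local cues disambiguate the global scene representation (and vice versa), which is the role the abstract assigns to establishing semantic relationships between local and global features.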

Keywords