Complex & Intelligent Systems (May 2024)

Multimodal fake news detection through intra-modality feature aggregation and inter-modality semantic fusion

  • Peican Zhu,
  • Jiaheng Hua,
  • Keke Tang,
  • Jiwei Tian,
  • Jiwei Xu,
  • Xiaodong Cui

DOI
https://doi.org/10.1007/s40747-024-01473-5
Journal volume & issue
Vol. 10, no. 4
pp. 5851 – 5863

Abstract


The prevalence of online misinformation, termed "fake news", has escalated exponentially in recent years. Such deceptive information, often rich in multimodal content, can easily mislead individuals into spreading it across various social media platforms. Automatically detecting multimodal fake news has therefore become a hot research topic. Existing works have made great progress on inter-modality feature fusion or semantic interaction, yet they largely ignore the importance of intra-modality entities and feature aggregation. This imbalance causes them to perform erratically on data with different emphases. In authentic news, intra-modality content and inter-modality relationships should be mutually supportive. Inspired by this idea, we propose an innovative approach to multimodal fake news detection (IFIS), incorporating both intra-modality feature aggregation and inter-modality semantic fusion. Specifically, the proposed model implements an entity detection module and utilizes attention mechanisms for intra-modality feature aggregation, whereas inter-modality semantic fusion is accomplished via two concurrent Co-attention blocks. The performance of IFIS is extensively tested on two datasets, namely Weibo and Twitter, and demonstrates superior performance, surpassing various advanced methods by 0.6%. The experimental results validate the capability of our proposed approach in offering the most balanced performance for multimodal fake news detection tasks.
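To illustrate the kind of inter-modality semantic fusion the abstract describes, below is a minimal, hypothetical sketch of a single cross-modal co-attention block in PyTorch. The dimensions, layer choices, and class name are assumptions for illustration only and are not the authors' implementation.

```python
import torch
import torch.nn as nn

class CoAttentionBlock(nn.Module):
    """Hypothetical co-attention block: text attends to image regions and vice versa."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        # Two cross-attention directions, one per modality (assumed design).
        self.txt_to_img = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.img_to_txt = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_t = nn.LayerNorm(dim)
        self.norm_i = nn.LayerNorm(dim)

    def forward(self, txt, img):
        # txt: (batch, n_tokens, dim); img: (batch, n_regions, dim)
        t_fused, _ = self.txt_to_img(query=txt, key=img, value=img)
        i_fused, _ = self.img_to_txt(query=img, key=txt, value=txt)
        # Residual connection plus normalization keeps each modality's own features.
        return self.norm_t(txt + t_fused), self.norm_i(img + i_fused)

# Toy usage: fuse 20 text token features with 36 image region features.
txt = torch.randn(2, 20, 256)
img = torch.randn(2, 36, 256)
t_out, i_out = CoAttentionBlock()(txt, img)
print(t_out.shape, i_out.shape)  # torch.Size([2, 20, 256]) torch.Size([2, 36, 256])
```

Running two such blocks concurrently, as the abstract mentions, would yield text-enriched and image-enriched representations that can then be pooled and classified.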

Keywords