PLoS ONE (Jan 2022)

Multimodal false information detection method based on Text-CNN and SE module.

  • Yi Liang,
  • Turdi Tohti,
  • Askar Hamdulla

DOI
https://doi.org/10.1371/journal.pone.0277463
Journal volume & issue
Vol. 17, no. 11
p. e0277463

Abstract


False information detection identifies false information on social media and reduces its negative impact on society. With the development of multimedia, false information increasingly contains multimodal content, so it is important to exploit multimodal features for detection. This paper uses information from two modalities: text and image. Previous work does not further process the features extracted by the backbone networks and ignores the noise and information loss that arise when fusing multimodal features. This paper proposes a false information detection method based on Text-CNN and an SE module. We use Text-CNN to process the text and image features extracted by BERT and Swin Transformer, enhancing feature quality. In addition, we use a modified SE module to fuse the text and image features, reducing noise during fusion. Meanwhile, drawing on the idea of residual networks, we concatenate the original features with the fused features to reduce information loss during fusion. Compared with attention-based multimodal factorized bilinear pooling, our model improves accuracy by 6.5% on the Weibo dataset and 2.0% on the Twitter dataset. The comparative experimental results show that the proposed model improves the accuracy of false information detection, and the ablation experiments further demonstrate the effectiveness of each module.
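As a rough illustration of the fusion idea described above (SE-style channel gating over concatenated text and image features, plus a residual concatenation of the original features), the following is a minimal pure-Python sketch. It is not the authors' implementation: the bottleneck gating MLP, all dimensions, and the function names are assumptions, and the SE "squeeze" pooling step is omitted because the inputs here are already feature vectors.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def linear(vec, weights, bias):
    # y_j = sum_i vec[i] * weights[j][i] + bias[j]
    return [sum(v * w for v, w in zip(vec, row)) + b
            for row, b in zip(weights, bias)]

def se_fuse(text_feat, img_feat, w1, b1, w2, b2):
    """SE-style gated fusion (illustrative only).

    Gates each channel of the concatenated text+image vector with a
    small bottleneck MLP, then concatenates the original features with
    the gated ones in the spirit of a residual connection.
    """
    concat = text_feat + img_feat                 # channel concatenation
    # "Excitation": bottleneck MLP producing one gate per channel
    hidden = [max(0.0, h) for h in linear(concat, w1, b1)]   # ReLU
    gates = [sigmoid(g) for g in linear(hidden, w2, b2)]     # gates in (0, 1)
    fused = [c * g for c, g in zip(concat, gates)]           # re-weight channels
    # Residual idea: keep the original features alongside the fused ones
    return text_feat + img_feat + fused
```

With 2-dimensional text and image features and toy weights, `se_fuse` returns an 8-dimensional vector: the two original features followed by their gated combination.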