Journal of Universal Computer Science (Oct 2021)

Adapting Pre-trained Language Models to Rumor Detection on Twitter

  • Hamda Slimi,
  • Ibrahim Bounhas,
  • Yahya Slimani

DOI
https://doi.org/10.3897/jucs.65918
Journal volume & issue
Vol. 27, no. 10
pp. 1128 – 1148

Abstract


Fake news has invaded social media platforms, where false information propagates rapidly and with malicious intent. These circumstances call for solutions that monitor and detect rumors in a timely manner. In this paper, we propose an approach that detects emerging and unseen rumors on Twitter by adapting a pre-trained language model, namely RoBERTa, to the task of rumor detection. A comparison against content-based characteristics shows that the model surpasses handcrafted features. Experimental results show that our approach outperforms state-of-the-art ones on all metrics, and that fine-tuning RoBERTa yields richer word embeddings that consistently and significantly enhance the precision of rumor recognition.

Keywords