IEEE Access (Jan 2024)

Advancing Fake News Detection: Hybrid Deep Learning With FastText and Explainable AI

  • Ehtesham Hashmi,
  • Sule Yildirim Yayilgan,
  • Muhammad Mudassar Yamin,
  • Subhan Ali,
  • Mohamed Abomhara

DOI: https://doi.org/10.1109/ACCESS.2024.3381038
Journal volume & issue: Vol. 12, pp. 44462–44480

Abstract

The widespread propagation of misinformation on social media platforms is a significant concern, prompting substantial efforts within the research community to develop robust detection solutions. Individuals often place unwavering trust in social networks without discerning the origins and authenticity of the information disseminated through these platforms. Hence, identifying media-rich fake news requires an approach that adeptly leverages multimedia elements to enhance detection accuracy. The ever-changing nature of cyberspace underscores the need for measures that can effectively resist the spread of media-rich fake news while protecting the integrity of information systems. This study introduces a robust approach for fake news detection, utilizing three publicly available datasets: WELFake, FakeNewsNet, and FakeNewsPrediction. We integrated FastText word embeddings with various Machine Learning and Deep Learning methods, further refining these algorithms with regularization and hyperparameter optimization to mitigate overfitting and promote model generalization. Notably, a hybrid model combining Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM), enriched with FastText embeddings, surpassed other techniques in classification performance, registering accuracy and F1-scores of 0.99, 0.97, and 0.99 on the three datasets, respectively. Additionally, we employed state-of-the-art transformer-based models such as BERT, XLNet, and RoBERTa, enhancing them through hyperparameter tuning. These transformer models surpass traditional RNN-based frameworks in capturing syntactic nuances, which aids semantic interpretation. In the concluding phase, explainable AI modeling was applied using Local Interpretable Model-Agnostic Explanations (LIME) and Latent Dirichlet Allocation (LDA) to gain deeper insights into the model's decision-making process.
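To make the hybrid architecture concrete, the sketch below shows one plausible way to pair FastText embeddings with a CNN-LSTM classifier in Keras, assuming gensim and TensorFlow. The toy corpus, layer sizes, and dropout rate are illustrative placeholders, not the authors' reported configuration.

    # Minimal sketch: FastText embeddings feeding a hybrid CNN-LSTM classifier.
    # All hyperparameters here are illustrative, not the paper's settings.
    import numpy as np
    from gensim.models import FastText
    from tensorflow import keras
    from tensorflow.keras import layers, initializers

    # Toy corpus standing in for WELFake / FakeNewsNet / FakeNewsPrediction text.
    corpus = [["breaking", "news", "headline"],
              ["official", "statement", "released"]]
    ft = FastText(sentences=corpus, vector_size=100, window=3,
                  min_count=1, epochs=5)

    # Build a vocabulary (index 0 reserved for padding) and an embedding matrix.
    vocab = {w: i + 1 for i, w in enumerate(ft.wv.index_to_key)}
    emb_matrix = np.zeros((len(vocab) + 1, 100))
    for word, idx in vocab.items():
        emb_matrix[idx] = ft.wv[word]

    max_len = 200  # articles padded/truncated to a fixed token length
    model = keras.Sequential([
        keras.Input(shape=(max_len,)),
        layers.Embedding(len(vocab) + 1, 100,
                         embeddings_initializer=initializers.Constant(emb_matrix),
                         trainable=False),           # frozen FastText vectors
        layers.Conv1D(128, 5, activation="relu"),    # local n-gram features
        layers.MaxPooling1D(2),
        layers.LSTM(64),                             # long-range dependencies
        layers.Dropout(0.5),                         # regularization vs. overfitting
        layers.Dense(1, activation="sigmoid"),       # fake vs. real
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    model.summary()
    # Training would follow as model.fit(padded_token_ids, labels, ...).

Freezing the embedding layer keeps the subword-aware FastText vectors intact while the convolutional and recurrent layers learn the classification task, which matches the paper's stated goal of combining local feature extraction with sequence modeling.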
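For the explainability step, the following sketch shows how LIME's text explainer is typically invoked, assuming the `lime` package; `predict_proba` here is a hypothetical stand-in for the trained pipeline's prediction function over raw text.

    # Minimal LIME sketch; predict_proba is a placeholder for the fitted model.
    import numpy as np
    from lime.lime_text import LimeTextExplainer

    def predict_proba(texts):
        # Placeholder: return [P(real), P(fake)] per article from the model.
        return np.tile([0.3, 0.7], (len(texts), 1))

    explainer = LimeTextExplainer(class_names=["real", "fake"])
    article = "Officials confirm the viral claim circulating online is fabricated."
    exp = explainer.explain_instance(article, predict_proba, num_features=5)
    print(exp.as_list())  # words weighted by their contribution to the prediction

The word-level weights returned by `as_list()` are what give the per-prediction insight into the model's decision-making process described in the abstract.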

Keywords