Heliyon (Aug 2024)
GBERT: A hybrid deep learning model based on GPT-BERT for fake news detection
Abstract
The digital era, with easy internet access for mobile users, has expanded social exposure and enabled global communication. People can now learn what is happening around the globe with a single click; however, this reach has also given rise to the problem of fake news. Fake news is content that appears authentic but is actually false and is disseminated to deceive. It poses a threat to social harmony, politics, the economy, and public opinion. As a result, fake news detection has become an emerging research domain whose goal is to classify a given piece of text as genuine or fraudulent. In this paper, a new framework called Generative Bidirectional Encoder Representations from Transformers (GBERT) is proposed that combines the Generative Pre-trained Transformer (GPT) and Bidirectional Encoder Representations from Transformers (BERT) to address the fake news classification problem. The framework unites the strengths of both state-of-the-art models, BERT's deep contextual understanding and GPT's generative capabilities, to build a comprehensive representation of a given text. Both GPT and BERT are fine-tuned on two real-world benchmark corpora, and the framework attains 95.30 % accuracy, 95.13 % precision, 97.35 % sensitivity, and a 96.23 % F1 score. The statistical test results confirm the effectiveness of the fine-tuned framework for fake news detection and suggest that it is a promising approach for combating fake news in the digital landscape.
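The fusion idea behind GBERT can be illustrated with a minimal sketch. The abstract does not specify how the two encoders' outputs are combined, so the following assumes pooled sentence embeddings from each model are concatenated and passed to a linear classification head; the 768-dimensional embeddings, the random placeholder vectors standing in for real fine-tuned model outputs, and the concatenation step are all assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder pooled embeddings for a batch of 4 articles.
# In the actual framework these would come from fine-tuned
# BERT and GPT encoders; the 768-dim size is an assumption.
bert_repr = rng.standard_normal((4, 768))
gpt_repr = rng.standard_normal((4, 768))

# Hypothetical fusion: concatenate the two representations.
fused = np.concatenate([bert_repr, gpt_repr], axis=1)  # shape (4, 1536)

# Toy linear head with sigmoid for binary fake/real prediction.
W = rng.standard_normal((1536, 1)) * 0.01
b = np.zeros(1)
probs = 1.0 / (1.0 + np.exp(-(fused @ W + b)))
labels = (probs.ravel() >= 0.5).astype(int)  # 1 = fake, 0 = genuine
print(fused.shape, labels.shape)
```

The sketch only shows the shape of the combined representation; real training would fit the head (and fine-tune both encoders) on labeled news corpora.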