IEEE Access (Jan 2021)

A Deep Learning Model Based on BERT and Sentence Transformer for Semantic Keyphrase Extraction on Big Social Data

  • R. Devika,
  • Subramaniyaswamy Vairavasundaram,
  • C. Sakthi Jay Mahenthar,
  • Vijayakumar Varadarajan,
  • Ketan Kotecha

DOI
https://doi.org/10.1109/ACCESS.2021.3133651
Journal volume & issue
Vol. 9
pp. 165252 – 165261

Abstract

In the evolution of the Internet, social media platforms such as Twitter have allowed public users to share information such as current affairs, events, opinions, news, and experiences. Extracting and analyzing keyphrases from Twitter content is an essential and challenging task. Keyphrases can precisely capture the main contribution of Twitter content, and keyphrase extraction is a vital issue in many Natural Language Processing (NLP) applications. Extracting keyphrases is not only time-consuming but also requires considerable effort. Current approaches are based on graph models or machine learning models, whose performance relies on feature extraction or statistical measures. In recent years, applying deep learning algorithms to Twitter data has gained attention because automatic feature extraction can improve the performance of several tasks. This work aims to extract keyphrases from big social data using a sentence transformer with the Bidirectional Encoder Representations from Transformers (BERT) deep learning model. The BERT representation retains semantic and syntactic connectivity between tweets, enhancing performance on NLP tasks over large data sets, and can automatically extract the most representative phrases in tweets. The proposed Semkey-BERT model shows that BERT with a sentence transformer achieves an accuracy of 86%, higher than the other existing models.
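The core idea the abstract describes can be illustrated with a minimal sketch: embed the tweet and its candidate phrases with a sentence-transformer-style encoder, then rank candidates by cosine similarity to the tweet embedding. The function below is a hypothetical illustration, not the authors' Semkey-BERT implementation; the embedding vectors are stand-ins for what a BERT sentence encoder would produce.

```python
import math

def rank_keyphrases(doc_vec, phrase_vecs, phrases, top_k=3):
    """Rank candidate phrases by cosine similarity to the document embedding.

    doc_vec:     embedding of the whole tweet (list of floats)
    phrase_vecs: embeddings of candidate phrases, one per phrase
    phrases:     the candidate phrase strings
    Returns the top_k phrases closest to the document in embedding space.
    """
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm

    scored = sorted(
        zip(phrases, phrase_vecs),
        key=lambda pv: cosine(doc_vec, pv[1]),
        reverse=True,
    )
    return [phrase for phrase, _ in scored[:top_k]]

# Toy 2-D embeddings standing in for real BERT sentence vectors:
top = rank_keyphrases(
    doc_vec=[1.0, 0.0],
    phrase_vecs=[[1.0, 0.1], [0.0, 1.0], [0.9, 0.2]],
    phrases=["deep learning", "weather", "keyphrase extraction"],
    top_k=2,
)
```

In a real pipeline, `doc_vec` and `phrase_vecs` would come from a pretrained BERT-based sentence encoder applied to the tweet and to candidate n-grams extracted from it.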

Keywords