IEEE Access (Jan 2024)

Assessing Accuracy: A Study of Lexicon and Rule-Based Packages in R and Python for Sentiment Analysis

  • Amin Mahmoudi,
  • Dariusz Jemielniak,
  • Leon Ciechanowski

DOI: https://doi.org/10.1109/ACCESS.2024.3353692
Journal volume & issue: Vol. 12, pp. 20169–20180

Abstract

Sentiment analysis has become a focal point of interdisciplinary research, prompting the use of diverse methodologies and the continual emergence of programming language packages. Notably, Python and R have introduced comprehensive packages in this realm. In this study, we analyze established packages in these languages, focusing on accuracy while also considering time complexity. Across experiments on seven distinct datasets, a key finding emerges: the accuracy of these packages varies significantly depending on the dataset used. Among them, the ‘sentimentr’ package consistently performs well across diverse datasets, while Python libraries generally offer superior processing speed. However, it is essential to note that although these packages classify sentences as positive or negative reasonably well, capturing sentiment intensity proves challenging. Our findings also highlight a prevalent pattern of overfitting, where packages excel on familiar datasets but struggle when faced with unfamiliar ones.
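
For readers unfamiliar with this class of tools, the sketch below illustrates how a lexicon and rule-based package such as ‘sentimentr’ (the package the abstract singles out as consistently strong) scores text. The example sentences are hypothetical, and the snippet is only an illustration of this kind of scoring, not the authors' evaluation pipeline or datasets.

```r
# Minimal sketch of lexicon/rule-based sentiment scoring with 'sentimentr'.
# The input sentences are made up for illustration and are not from the study.
# install.packages("sentimentr")
library(sentimentr)

reviews <- c(
  "The product works really well and I love it.",
  "The delivery was late and the support was not very helpful."
)

# Sentence-level polarity: positive scores indicate positive sentiment,
# negative scores indicate negative sentiment.
sentiment(reviews)

# Average sentiment per input element, useful for document-level comparison.
sentiment_by(reviews)
```

Because such packages return a continuous polarity score rather than a calibrated intensity scale, they tend to be more reliable for binary positive/negative classification than for judging how strongly a sentiment is expressed, which is consistent with the limitation noted in the abstract.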

Keywords