IEEE Access (Jan 2024)

Enhanced Sentiment Intensity Regression Through LoRA Fine-Tuning on Llama 3

  • Diefan Lin,
  • Yi Wen,
  • Weishi Wang,
  • Yan Su

DOI
https://doi.org/10.1109/ACCESS.2024.3438353
Journal volume & issue
Vol. 12
pp. 108072 – 108087

Abstract


Sentiment analysis and emotion detection are critical research areas in natural language processing (NLP), offering benefits to numerous downstream tasks. Despite the widespread application of pre-trained models and large language models (LLMs) in sentiment analysis, most previous works have focused on sentiment polarity or emotion classification, neglecting the finer-grained task of sentiment intensity regression; this prevents the precise capture of sentiment intensity and hinders model performance in complex scenarios and diverse applications. To address this issue, we enhance the RoBERTa model with an efficient additive attention mechanism and an adaptive weighted Huber loss function, notably improving its performance in sentiment intensity regression. Based on the SemEval 2017 and 2018 datasets, we employ prompt engineering to construct fine-tuning datasets, which are further enriched with outputs from the enhanced RoBERTa model. We then fine-tune the Llama 3 model using Low-Rank Adaptation (LoRA) within the Unsloth framework. Experimental results demonstrate that our enhanced RoBERTa model significantly outperforms baseline models. Furthermore, the enriched and LoRA fine-tuned Llama 3-8B model outperforms other LLMs with similar parameter scales. Our method improves MAE by 0.015 and MSE by 0.0054 on the SemEval 2018 dataset, achieving a Pearson correlation coefficient of 0.8441. On the SemEval 2017 dataset, it improves MAE by 0.0416 and MSE by 0.043, with a Pearson correlation coefficient increased to 0.8268, which demonstrates the superior predictive power and robustness of our approach.
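The abstract's adaptive weighted Huber loss can be illustrated with a minimal sketch. The paper does not spell out its weighting scheme here, so the per-sample weighting below (up-weighting examples whose target intensity lies far from the batch mean) is an illustrative assumption, not the authors' exact formulation; the Huber part itself is standard.

```python
import numpy as np

def weighted_huber_loss(y_pred, y_true, delta=1.0, alpha=1.0):
    """Sketch of an adaptive weighted Huber loss for intensity regression.

    Assumption: each sample is weighted by 1 + alpha * |y_true - batch mean|,
    so extreme sentiment intensities contribute more to the loss. This
    weighting is hypothetical; the paper's scheme may differ.
    """
    y_pred = np.asarray(y_pred, dtype=float)
    y_true = np.asarray(y_true, dtype=float)
    err = y_pred - y_true
    abs_err = np.abs(err)
    # Standard Huber: quadratic inside |err| <= delta, linear outside,
    # which dampens the influence of outlier predictions.
    huber = np.where(abs_err <= delta,
                     0.5 * err ** 2,
                     delta * (abs_err - 0.5 * delta))
    # Adaptive per-sample weights emphasising intensity extremes (assumption).
    weights = 1.0 + alpha * np.abs(y_true - y_true.mean())
    return float(np.sum(weights * huber) / np.sum(weights))

# Example: symmetric errors of 0.5 around the targets fall in the
# quadratic regime, giving a loss of 0.5 * 0.5**2 = 0.125.
loss = weighted_huber_loss([0.0, 1.0], [0.5, 0.5])
```

In a training loop this would typically be expressed with framework tensors (e.g. as a drop-in replacement for a mean-squared-error criterion), but the NumPy form above keeps the arithmetic visible.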

Keywords