IEEE Access (Jan 2024)

Enhancing Zero-Shot Crypto Sentiment With Fine-Tuned Language Model and Prompt Engineering

  • Rahman S. M. Wahidur
  • Ishmam Tashdeed
  • Manjit Kaur
  • Heung-No Lee

DOI
https://doi.org/10.1109/ACCESS.2024.3350638
Journal volume & issue
Vol. 12
pp. 10146–10159

Abstract

Blockchain technology has revolutionized the financial landscape, and cryptocurrencies have seen widespread adoption owing to their decentralized and transparent nature. Because sentiments expressed on social media platforms wield substantial influence over cryptocurrency market dynamics, sentiment analysis has emerged as a crucial tool for gauging public opinion and predicting market trends. This paper explores fine-tuning techniques for large language models to enhance sentiment analysis performance. Experimental results demonstrate a significant average zero-shot performance gain of 40% on unseen tasks after fine-tuning, highlighting its potential. Additionally, the impact of instruction-based fine-tuning on models of varying scales is examined, revealing that larger models benefit from instruction tuning, achieving the highest average accuracy score of 75.16%. In contrast, smaller models may generalize less well because instruction tuning consumes their full capacity. To gain deeper insight into instruction effectiveness, the paper presents experimental investigations under different instruction-tuning setups. Results show that the model achieves an average accuracy score of 72.38% with short, simple instructions, outperforming long, complex instructions by over 12%. Finally, the paper explores the relationship between fine-tuning corpus size and model performance, identifying an optimal corpus size of 6,000 data points for the highest performance across different models. Microsoft's MiniLM, a distilled version of BERT, excels in efficient data use and performance optimization, while Google's FLAN-T5 demonstrates consistent and reliable performance across diverse datasets.
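To make the setup concrete, below is a minimal sketch of instruction-style fine-tuning for crypto sentiment using the Hugging Face transformers and datasets libraries. The checkpoint name, the instruction template, and the toy examples are illustrative assumptions, not the authors' exact configuration; the abstract only establishes that the FLAN-T5 family was evaluated, that short, simple instructions outperformed long, complex ones, and that roughly 6,000 training examples were optimal.

```python
# Illustrative sketch only -- checkpoint, prompt wording, and data are assumptions.
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)
from datasets import Dataset

MODEL_NAME = "google/flan-t5-small"  # assumed checkpoint from the FLAN-T5 family

# A short, simple instruction -- the abstract reports these outperform
# long, complex instructions by over 12%.
INSTRUCTION = "Classify the sentiment of this tweet as positive, negative, or neutral."

# Toy corpus; the paper identifies roughly 6,000 examples as optimal.
examples = [
    {"text": "BTC just broke its all-time high!", "label": "positive"},
    {"text": "Another exchange hack, funds are gone.", "label": "negative"},
    {"text": "Ethereum gas fees are unchanged today.", "label": "neutral"},
]

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def preprocess(batch):
    # Prepend the instruction to each tweet; the target is the label word.
    inputs = [f"{INSTRUCTION}\nTweet: {t}" for t in batch["text"]]
    enc = tokenizer(inputs, truncation=True, max_length=128)
    enc["labels"] = tokenizer(
        text_target=batch["label"], truncation=True, max_length=8
    )["input_ids"]
    return enc

dataset = Dataset.from_list(examples).map(
    preprocess, batched=True, remove_columns=["text", "label"]
)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(
        output_dir="flan-t5-crypto-sentiment",
        per_device_train_batch_size=2,
        num_train_epochs=1,
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()

# Zero-shot-style inference with the tuned model on an unseen tweet.
query = tokenizer(
    f"{INSTRUCTION}\nTweet: Dogecoin is pumping again!", return_tensors="pt"
)
print(tokenizer.decode(
    model.generate(**query, max_new_tokens=4)[0], skip_special_tokens=True
))
```

In practice, the toy list would be replaced with a fine-tuning corpus on the order of the 6,000 examples the paper finds optimal, and zero-shot accuracy would be measured on held-out sentiment tasks.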

Keywords