Complex & Intelligent Systems (Jan 2025)

Enhancing zero-shot stance detection via multi-task fine-tuning with debate data and knowledge augmentation

  • Qinlong Fan,
  • Jicang Lu,
  • Yepeng Sun,
  • Qiankun Pi,
  • Shouxin Shang

DOI
https://doi.org/10.1007/s40747-024-01767-8
Journal volume & issue
Vol. 11, no. 2
pp. 1 – 12

Abstract

In the real world, stance detection tasks often involve assessing the stance or attitude of a given text toward new, unseen targets, a task known as zero-shot stance detection. However, zero-shot stance detection often suffers from sparse data annotation and inherent task complexity, which can lead to lower performance. To address these challenges, we propose combining fine-tuning of Large Language Models (LLMs) with knowledge augmentation for zero-shot stance detection. Specifically, we leverage stance detection and related tasks from debate corpora to perform multi-task fine-tuning of LLMs. This approach aims to learn and transfer the capability of zero-shot stance detection and reasoning analysis from relevant data. Additionally, we enhance the model's semantic understanding of the given text and targets by retrieving relevant knowledge from external knowledge bases as context, alleviating the lack of relevant contextual knowledge. Compared to ChatGPT, our model achieves a significant improvement in average F1 score, with an increase of 15.74% on SemEval-2016 Task 6A and 3.55% on the P-Stance dataset. Our model outperforms current state-of-the-art models on these two datasets, demonstrating the superiority of multi-task fine-tuning with debate data and knowledge augmentation.
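The knowledge-augmentation step described above can be sketched as follows. This is an illustrative assumption of how retrieved background knowledge might be assembled into the model's input as context; the function name, prompt wording, and label set (`Favor`/`Against`/`None`, as used by SemEval-2016 Task 6A) are hypothetical, not the authors' exact implementation.

```python
# Hypothetical sketch: prepend retrieved knowledge snippets as context
# before asking the model to judge the stance of a text toward a target.
# Retrieval itself (from an external knowledge base) is assumed to have
# already produced the snippets.

def build_stance_prompt(text: str, target: str, knowledge_snippets: list[str]) -> str:
    """Assemble a stance-detection prompt with retrieved knowledge as context."""
    context = "\n".join(f"- {k}" for k in knowledge_snippets)
    return (
        "Background knowledge:\n"
        f"{context}\n\n"
        f"Text: {text}\n"
        f"Target: {target}\n"
        "Question: What is the stance of the text toward the target? "
        "Answer with Favor, Against, or None."
    )

prompt = build_stance_prompt(
    text="We must act now to cut emissions.",
    target="Climate Change is a Real Concern",
    knowledge_snippets=["Climate change refers to long-term shifts in temperatures and weather patterns."],
)
```

The fine-tuned LLM would then be given `prompt` and trained (or queried zero-shot) to emit one of the stance labels.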

Keywords