Complex & Intelligent Systems (Jul 2024)

TCohPrompt: task-coherent prompt-oriented fine-tuning for relation extraction

  • Jun Long,
  • Zhuoying Yin,
  • Chao Liu,
  • Wenti Huang

DOI
https://doi.org/10.1007/s40747-024-01563-4
Journal volume & issue
Vol. 10, no. 6
pp. 7565–7575

Abstract

Prompt-tuning has emerged as a promising approach for improving the performance of classification tasks by converting them into masked language modeling problems through the insertion of text templates. Despite its considerable success, applying this approach to relation extraction is challenging. The relation, often expressed as a specific word or phrase between two entities, usually requires mapping these terms to an existing lexicon and introducing extra learnable parameters, which can reduce the coherence between the pre-training and fine-tuning tasks. To address this issue, we propose a novel prompt-tuning method for relation extraction that enhances the coherence between fine-tuning and pre-training. Specifically, we avoid the need for a suitable relation word by converting the relation into relational semantic keywords: representative phrases that encapsulate the essence of the relation. Moreover, we employ a composite loss function that optimizes the model at both the token and relation levels. At the token level, our approach combines the masked language modeling (MLM) loss with an entity pair constraint loss on the predicted tokens; at the relation level, it uses both the cross-entropy loss and TransE. Extensive experimental results on four datasets demonstrate that our method significantly improves performance in relation extraction tasks, with an average improvement of approximately 1.6 F1 points over the current state-of-the-art model. Code is released at https://github.com/12138yx/TCohPrompt.
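As a rough illustration of the loss design named in the abstract, the following PyTorch sketch assembles the four terms: token-level MLM loss, an entity pair constraint, relation-level cross-entropy, and a TransE-style translation term. The exact formulations and weights are defined in the paper and the linked repository; the constraint term, function names, and lambda weights here are illustrative assumptions, not the authors' implementation.

```python
# Minimal, hypothetical sketch of a composite loss combining token-level and
# relation-level objectives. All specifics (the entity pair constraint form,
# the lambda weights) are assumptions for illustration only.
import torch
import torch.nn.functional as F

def composite_loss(mlm_logits, mlm_labels,        # token level: MLM prediction
                   head_emb, tail_emb, rel_emb,   # entity / relation embeddings
                   rel_logits, rel_labels,        # relation level: classification
                   lambdas=(1.0, 0.5, 1.0, 0.5)): # assumed weighting
    # 1) Masked-language-modeling loss over predicted tokens
    #    (-100 is the usual ignore index for non-masked positions).
    l_mlm = F.cross_entropy(mlm_logits.view(-1, mlm_logits.size(-1)),
                            mlm_labels.view(-1), ignore_index=-100)

    # 2) Entity pair constraint, sketched here as pulling the relation
    #    embedding toward the head->tail offset (one plausible choice).
    l_pair = F.mse_loss(rel_emb, tail_emb - head_emb)

    # 3) Relation-level cross-entropy over the relation label set.
    l_ce = F.cross_entropy(rel_logits, rel_labels)

    # 4) TransE-style translation term: ||h + r - t|| should be small.
    l_transe = (head_emb + rel_emb - tail_emb).norm(p=2, dim=-1).mean()

    w1, w2, w3, w4 = lambdas
    return w1 * l_mlm + w2 * l_pair + w3 * l_ce + w4 * l_transe

if __name__ == "__main__":
    # Hypothetical shapes: batch 4, sequence 16, vocab 30522, 10 relations, dim 768.
    B, T, V, R, D = 4, 16, 30522, 10, 768
    labels = torch.full((B, T), -100).index_fill_(1, torch.tensor([3]), 42)
    loss = composite_loss(
        mlm_logits=torch.randn(B, T, V), mlm_labels=labels,
        head_emb=torch.randn(B, D), tail_emb=torch.randn(B, D),
        rel_emb=torch.randn(B, D),
        rel_logits=torch.randn(B, R), rel_labels=torch.randint(0, R, (B,)),
    )
    print(loss.item())
```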

Keywords