Complex & Intelligent Systems (Feb 2024)
GenRE: generative multi-turn question answering with contrastive learning for entity–relation extraction
Abstract
Extractive approaches have been the mainstream paradigm for overlapping entity–relation extraction. However, they are limited by inherent methodological flaws and struggle with three issues: hierarchically dependent entity–relations, implicit entity–relations, and entity normalization. Recent advances have proposed an effective solution based on generative language models, which cast entity–relation extraction as a sequence-to-sequence text generation task. Inspired by the observation that humans learn by getting to the bottom of things, we propose a novel framework, GenRE: Generative multi-turn question answering with contrastive learning for entity–relation extraction. Specifically, a template-based question prompt generation module is first designed to produce questions that are answered over different turns. We then formulate entity–relation extraction as a generative question answering task built on a general language model, rather than span-based machine reading comprehension. Meanwhile, a contrastive learning strategy is introduced during fine-tuning, adding negative samples to mitigate the exposure bias inherent in generative models. Our extensive experiments demonstrate that GenRE performs competitively on two public datasets and a custom dataset, highlighting its superiority in entity normalization and implicit entity–relation extraction. (The code is available at https://github.com/lovelyllwang/GenRE.)
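To make the multi-turn formulation concrete, the following is a minimal sketch (not the authors' released code) of how entity–relation extraction can be cast as generative question answering over template-based question prompts. The template wording, the two-turn ordering, the semicolon-separated answer format, and the `generate` stub are illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch: template-based question prompts answered over multiple turns.
# Turn 1 asks a generative model for entities; turn 2 asks, per entity, for its relations.
from typing import Callable, Dict, List

TURN_TEMPLATES: Dict[int, str] = {
    1: "Context: {context}\nQuestion: Which entities are mentioned in the context?\nAnswer:",
    2: "Context: {context}\nQuestion: What relations does the entity \"{entity}\" have, and with which entities?\nAnswer:",
}

def multi_turn_extract(context: str, generate: Callable[[str], str]) -> List[str]:
    """Run both question turns and collect the generated answers."""
    answers: List[str] = []

    # Turn 1: generate the entity list from the first template.
    prompt_1 = TURN_TEMPLATES[1].format(context=context)
    entity_answer = generate(prompt_1)
    answers.append(entity_answer)

    # Turn 2: for each entity produced in turn 1, ask for its relations.
    for entity in (e.strip() for e in entity_answer.split(";") if e.strip()):
        prompt_2 = TURN_TEMPLATES[2].format(context=context, entity=entity)
        answers.append(generate(prompt_2))

    return answers

if __name__ == "__main__":
    # Stub standing in for a fine-tuned generative language model.
    fake_model = lambda p: "Aspirin; headache" if "Which entities" in p else "treats: headache"
    print(multi_turn_extract("Aspirin is commonly used to treat headache.", fake_model))
```

In the paper's framework the `generate` call would be a fine-tuned generative language model, and contrastive learning with negative samples during fine-tuning is what addresses the exposure bias of free-form generation; neither of those components is shown in this sketch.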
Keywords