Jisuanji Kexue [Computer Science] (Jan 2023)

Utilizing Heterogeneous Graph Neural Network to Extract Emotion-Cause Pairs Effectively

  • PU Jinyao, BU Lingmei, LU Yongmei, YE Ziming, CHEN Li, YU Zhonghua

DOI
https://doi.org/10.11896/jsjkx.211100265
Journal volume & issue
Vol. 50, no. 1
pp. 205 – 212

Abstract

As an emerging task in text sentiment analysis, emotion-cause pair extraction aims to identify, at the clause level, the emotion expressions in raw text without any annotation, and to identify the causes of the corresponding emotions so as to form emotion-cause pairs. The crux of the task is to effectively capture the relationship between emotions and causes and the relationships among different emotion-cause pairs. To overcome the shortcomings of existing work in capturing these associations, such as overly coarse granularity and an inability to distinguish how the causal relations of different pairs influence one another, this paper proposes an emotion-cause pair extraction method based on a heterogeneous graph neural network. First, a heterogeneous graph is constructed with clauses and clause pairs as vertices, in which different types of edges between clauses and clause pairs, and between different clause pairs, capture various fine-grained associations. Then, a heterogeneous graph neural network with an attention mechanism iteratively updates the vertex embeddings of the clauses and clause pairs. Finally, the updated embeddings are fed to a binary classifier, which judges whether the corresponding pair stands in an emotion-cause relationship. To evaluate the effectiveness of the proposed model, a series of experiments is conducted on a benchmark dataset for the emotion-cause pair extraction task. The results show that the proposed heterogeneous-graph-based method yields a consistent improvement, with an F1 score 0.85% higher than the state-of-the-art baselines. When the bottom encoder (which produces the initial embeddings of clauses and clause pairs) is replaced with BERT, the F1 score reaches 73.12%, and the model again outperforms the state-of-the-art algorithms.
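To make the described architecture concrete, the following is a minimal PyTorch-style sketch, not the authors' released code: clause and clause-pair vertices carry embeddings, attention over typed edges aggregates messages from clauses to pairs and between pairs, and a binary classifier scores each pair embedding. All class and variable names here (HeteroGATLayer, PairClassifier, adj_cp, adj_pp, and so on) are illustrative assumptions rather than names from the paper.

```python
# Minimal sketch of a heterogeneous graph attention update over clause and
# clause-pair vertices, followed by a binary emotion-cause classifier.
# Assumptions: PyTorch, dense 0/1 adjacency masks, single attention head.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HeteroGATLayer(nn.Module):
    """One attention-based update of pair vertices from two edge types:
    clause -> pair edges and pair -> pair edges."""
    def __init__(self, dim):
        super().__init__()
        self.w_cp = nn.Linear(dim, dim, bias=False)   # projection for clause->pair edges
        self.w_pp = nn.Linear(dim, dim, bias=False)   # projection for pair->pair edges
        self.attn = nn.Linear(2 * dim, 1, bias=False) # attention scorer over (target, source)

    def aggregate(self, targets, sources, proj, adj):
        # adj[i, j] = 1 if source j is a neighbour of target i
        src = proj(sources)                                        # (S, d)
        pair_feats = torch.cat([
            targets.unsqueeze(1).expand(-1, src.size(0), -1),
            src.unsqueeze(0).expand(targets.size(0), -1, -1)], dim=-1)
        scores = self.attn(pair_feats).squeeze(-1)                 # (T, S)
        scores = scores.masked_fill(adj == 0, float("-inf"))
        alpha = torch.nan_to_num(torch.softmax(scores, dim=-1))    # zero rows with no neighbours
        return alpha @ src                                         # attention-weighted sum

    def forward(self, clause_emb, pair_emb, adj_cp, adj_pp):
        # update pair vertices from their member clauses and neighbouring pairs
        from_clauses = self.aggregate(pair_emb, clause_emb, self.w_cp, adj_cp)
        from_pairs = self.aggregate(pair_emb, pair_emb, self.w_pp, adj_pp)
        return F.relu(pair_emb + from_clauses + from_pairs)

class PairClassifier(nn.Module):
    """Stacks a few heterogeneous attention layers, then scores each clause pair
    with a binary classifier for the emotion-cause relationship."""
    def __init__(self, dim, num_layers=2):
        super().__init__()
        self.layers = nn.ModuleList([HeteroGATLayer(dim) for _ in range(num_layers)])
        self.out = nn.Linear(dim, 1)

    def forward(self, clause_emb, pair_emb, adj_cp, adj_pp):
        for layer in self.layers:
            pair_emb = layer(clause_emb, pair_emb, adj_cp, adj_pp)
        return torch.sigmoid(self.out(pair_emb)).squeeze(-1)       # P(pair is emotion-cause)
```

In this sketch only the pair vertices are updated for brevity; a fuller model of the kind described in the abstract would also update the clause vertices and could use separate attention parameters per edge type and multiple heads.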

Keywords