IEEE Access (Jan 2022)

Gated Relational Encoder-Decoder Model for Target-Oriented Opinion Word Extraction

  • Taegwan Kang,
  • Segwang Kim,
  • Hyeongu Yun,
  • Hwanhee Lee,
  • Kyomin Jung

DOI
https://doi.org/10.1109/ACCESS.2022.3228835
Journal volume & issue
Vol. 10
pp. 130507–130517

Abstract


Target-Oriented Opinion Word Extraction (TOWE) is a challenging information extraction task that aims to find the opinion words corresponding to given opinion targets in text. To solve TOWE, it is important to consider not only the opinion targets but also the words surrounding the opinion words. Although most existing works capture the opinion target with Deep Neural Networks (DNNs), they cannot effectively utilize the local context, i.e., the relationships among the words surrounding the opinion words. In this work, we propose a novel and powerful model for TOWE, the Gated Relational target-aware Encoder and local context-aware Decoder (GRED), which dynamically leverages information from both the opinion target and the local context. Intuitively, the target-aware encoder captures the opinion target information, and the local context-aware decoder obtains the local context information from the relationships among surrounding words. GRED then employs a gate mechanism to dynamically aggregate the outputs of the encoder and the decoder. In addition, we adopt a pretrained language model, the Bidirectional and Auto-Regressive Transformer (BART), as the backbone of GRED to exploit its implicit language knowledge. Extensive experiments on four benchmark datasets show that GRED surpasses all baseline models and achieves state-of-the-art performance. Furthermore, our in-depth analysis demonstrates that GRED properly leverages the opinion target and local context information when extracting opinion words.
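As a rough illustration of the gated aggregation described in the abstract, the sketch below mixes the token-level outputs of a BART encoder and decoder with a learned sigmoid gate and feeds the fused representation to a per-token classifier. This is a minimal sketch under stated assumptions: the gate formulation, the BIO tagging head, the `facebook/bart-base` checkpoint, and all layer names are illustrative choices, not the paper's exact target-aware encoder or local context-aware decoder.

```python
import torch
import torch.nn as nn
from transformers import BartModel, BartTokenizer


class GatedAggregation(nn.Module):
    """Gate that dynamically mixes encoder (target-aware) and decoder
    (local context-aware) token representations, in the spirit of GRED.
    Layer sizes and the BIO label set are illustrative assumptions."""

    def __init__(self, hidden_size: int, num_labels: int = 3):  # B/I/O tags
        super().__init__()
        self.gate = nn.Linear(2 * hidden_size, hidden_size)
        self.classifier = nn.Linear(hidden_size, num_labels)

    def forward(self, enc_hidden: torch.Tensor, dec_hidden: torch.Tensor) -> torch.Tensor:
        # enc_hidden, dec_hidden: (batch, seq_len, hidden)
        g = torch.sigmoid(self.gate(torch.cat([enc_hidden, dec_hidden], dim=-1)))
        fused = g * enc_hidden + (1.0 - g) * dec_hidden
        return self.classifier(fused)  # per-token logits over the BIO labels


# Illustrative usage with a BART backbone (checkpoint choice is an assumption).
tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
bart = BartModel.from_pretrained("facebook/bart-base")
head = GatedAggregation(bart.config.d_model)

sentence = "The sushi was fresh but the service was slow ."
inputs = tokenizer(sentence, return_tensors="pt")
outputs = bart(**inputs)  # decoder inputs default to the right-shifted input ids
logits = head(outputs.encoder_last_hidden_state, outputs.last_hidden_state)
print(logits.shape)  # (1, seq_len, 3)
```

In this sketch the gate value `g` decides, per token and per dimension, how much of the encoder-side (target-aware) signal versus the decoder-side (local-context) signal enters the final representation, which is one straightforward way to realize the dynamic aggregation the abstract describes.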

Keywords