IEEE Access (Jan 2024)

Leveraging Multi-Level Semantic Understanding in a Unified NER Model

  • Yuqian Zhao,
  • Jiuchun Ren

DOI
https://doi.org/10.1109/ACCESS.2024.3424653
Journal volume & issue
Vol. 12
pp. 184275–184284

Abstract

Named Entity Recognition (NER) comprises three subtasks: flat, nested, and discontinuous NER. These are usually handled separately, and most traditional approaches rely on token- or span-based representations, which limits model flexibility. Although Seq2Seq-based models have recently been proposed to handle the three subtasks in a unified way, they tend to verify entity spans independently while ignoring label dependencies and fine-grained word-word relations during modeling. To this end, we propose MLSU (Multi-Level Semantic Understanding), a model for unified NER. Its encoder-decoder architecture integrates sequence-level and grid-level semantic blocks, strengthening the model's ability to process complex text structures. At the sequence level, the model combines entity-label attention with pre-trained GloVe embeddings to optimize encoding and improve context comprehension. At the grid level, semantic extraction employs Conditional Layer Normalization (CLN) and dilated CNNs. This design allows MLSU to capture interactions between text and labels effectively and to handle complex text. In experiments on six popular NER datasets, MLSU achieves performance at or near the state of the art, validating its effectiveness and advantages across the various NER tasks.
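As a rough illustration of the grid-level component described in the abstract, the following PyTorch sketch shows conditional layer normalization building a word-pair grid that is then refined by dilated 2D convolutions. It is only a plausible reading of the description: the class names, hidden sizes, dilation rates, and wiring are assumptions, not the authors' actual implementation.

```python
# Illustrative sketch only; module names and hyperparameters are assumptions.
import torch
import torch.nn as nn


class ConditionalLayerNorm(nn.Module):
    """Layer norm whose scale and shift are generated from a conditioning vector (CLN)."""
    def __init__(self, hidden, cond_dim, eps=1e-6):
        super().__init__()
        self.eps = eps
        self.gamma = nn.Linear(cond_dim, hidden)  # condition-dependent scale
        self.beta = nn.Linear(cond_dim, hidden)   # condition-dependent shift

    def forward(self, x, cond):
        # x: (..., hidden); cond: (..., cond_dim) with matching leading dims
        mean = x.mean(-1, keepdim=True)
        std = x.std(-1, keepdim=True, unbiased=False)
        x_norm = (x - mean) / (std + self.eps)
        return self.gamma(cond) * x_norm + self.beta(cond)


class GridSemanticBlock(nn.Module):
    """Builds a word-word grid with CLN, then extracts features with dilated CNNs."""
    def __init__(self, hidden, channels=64, dilations=(1, 2, 3)):
        super().__init__()
        self.cln = ConditionalLayerNorm(hidden, hidden)
        self.proj = nn.Conv2d(hidden, channels, kernel_size=1)
        self.dilated = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        ])

    def forward(self, seq):                       # seq: (B, L, H) encoder output
        B, L, H = seq.shape
        # Grid cell (i, j): representation of word j conditioned on word i.
        grid = self.cln(seq.unsqueeze(1).expand(B, L, L, H),
                        seq.unsqueeze(2).expand(B, L, L, H))
        grid = self.proj(grid.permute(0, 3, 1, 2))            # (B, C, L, L)
        outs = [torch.relu(conv(grid)) for conv in self.dilated]
        return torch.cat(outs, dim=1)                          # multi-dilation grid features


if __name__ == "__main__":
    block = GridSemanticBlock(hidden=128)
    feats = block(torch.randn(2, 10, 128))
    print(feats.shape)  # torch.Size([2, 192, 10, 10])
```

In such a setup, the concatenated multi-dilation features would typically feed a classifier over word pairs, which is how grid-based unified NER models decode flat, nested, and discontinuous entities alike.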

Keywords