Jisuanji kexue yu tansuo (Mar 2024)

Named Entity Recognition Model Based on k-best Viterbi Decoupling Knowledge Distillation

  • ZHAO Honglei, TANG Huanling, ZHANG Yu, SUN Xueyuan, LU Mingyu

DOI
https://doi.org/10.3778/j.issn.1673-9418.2211052
Journal volume & issue
Vol. 18, no. 3
pp. 780–794

Abstract

Knowledge distillation is a general approach to improving the performance of named entity recognition (NER) models. However, the classical knowledge distillation loss function is highly coupled, which limits the effectiveness of logit distillation. To decouple the loss terms and improve logit distillation, this paper proposes k-best Viterbi decoupling knowledge distillation (kvDKD), which incorporates k-best Viterbi decoding to improve computational efficiency while effectively raising model performance. In addition, deep-learning-based NER easily introduces noise during data augmentation. Therefore, a data augmentation method combining data filtering with an entity rebalancing algorithm is proposed, aiming to reduce noise introduced from the original dataset and to mitigate the problem of mislabeled data, thereby improving data quality and reducing overfitting. Building on these methods, a novel named entity recognition model, NER-kvDKD (named entity recognition model based on k-best Viterbi decoupling knowledge distillation), is proposed. Comparative experiments on the MSRA, Resume, Weibo, CLUENER and CoNLL-2003 datasets show that the proposed method improves the generalization ability of the model and also effectively improves student model performance.
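To make the decoupling idea concrete, the following PyTorch sketch splits the classical logit-distillation loss into a target-class term (TCKD) and a non-target-class term (NCKD) in the style of decoupled knowledge distillation, so the two terms can be weighted independently. This is a minimal illustration, not the paper's exact formulation: the temperature T and the weights alpha and beta are assumed hyperparameters, and kvDKD's use of the teacher's k-best Viterbi label sequences is omitted here.

```python
import torch
import torch.nn.functional as F

def decoupled_kd_loss(student_logits, teacher_logits, target,
                      T=2.0, alpha=1.0, beta=1.0):
    """Decoupled KD sketch: split the KD loss into a target-class
    term (TCKD) and a non-target-class term (NCKD).

    student_logits, teacher_logits: (batch, num_labels)
    target: (batch,) gold label indices
    T, alpha, beta: assumed hyperparameters for illustration
    """
    # Soften both distributions with temperature T.
    p_s = F.softmax(student_logits / T, dim=-1)
    p_t = F.softmax(teacher_logits / T, dim=-1)

    # Collapse each distribution to (target, non-target) probabilities.
    gold = F.one_hot(target, p_s.size(-1)).bool()
    pt_s = p_s[gold]                       # student prob. of gold label
    pt_t = p_t[gold]                       # teacher prob. of gold label
    b_s = torch.stack([pt_s, 1 - pt_s], dim=-1).clamp_min(1e-8)
    b_t = torch.stack([pt_t, 1 - pt_t], dim=-1).clamp_min(1e-8)

    # TCKD: KL divergence between the binary (target vs. rest) views.
    tckd = F.kl_div(b_s.log(), b_t, reduction="batchmean") * T * T

    # NCKD: KL divergence over the non-target classes only, obtained by
    # masking the gold label out before renormalizing.
    nt_s = F.log_softmax(student_logits / T - 1000.0 * gold, dim=-1)
    nt_t = F.softmax(teacher_logits / T - 1000.0 * gold, dim=-1)
    nckd = F.kl_div(nt_s, nt_t, reduction="batchmean") * T * T

    # Weighting the two terms separately is what "decouples" the loss.
    return alpha * tckd + beta * nckd
```

In the coupled classical formulation the NCKD term is implicitly scaled by the teacher's confidence on the target class, which suppresses it exactly when the teacher is confident; weighting TCKD and NCKD with independent coefficients removes that coupling.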

Keywords