Tongxin xuebao (Journal on Communications) (Apr 2024)

Research on self-training neural machine translation based on monolingual priority sampling

  • ZHANG Xiaoyan,
  • PANG Lei,
  • DU Xiaofeng,
  • LU Tianbo,
  • XIA Yamei

Journal volume & issue
Vol. 45
pp. 65–72

Abstract

To enhance the performance of neural machine translation (NMT) and mitigate the detrimental impact of high-uncertainty monolingual data during self-training, a self-training NMT model based on priority sampling was proposed. First, syntactic dependency trees were constructed and the importance of monolingual tokens was assessed through dependency analysis. Next, a monolingual lexicon was built, and priority was defined in terms of token importance and uncertainty. Finally, priorities were computed for the monolingual sentences, which were sampled according to these priorities to generate a synthetic parallel dataset for training the student NMT model. Experimental results on a large-scale subset of the WMT English-to-German dataset show that the proposed model effectively improves translation performance and reduces the impact of high uncertainty on the model.
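The abstract describes a pipeline of scoring monolingual sentences by token importance and uncertainty, then sampling by priority. The sketch below illustrates that idea only; the paper's exact definitions are not given on this page, so the head-count importance proxy, the negative-log-probability uncertainty measure, the importance-weighted priority, and the softmax sampling rule are all assumptions for illustration, as are the function names.

```python
import math
import random

def head_counts(heads):
    # Hypothetical importance proxy: a token that governs more direct
    # dependents in the dependency tree is treated as more important.
    counts = [0] * len(heads)
    for head in heads:
        if head >= 0:  # -1 marks the root token
            counts[head] += 1
    return counts

def priority(heads, token_probs):
    # Assumed priority: importance-weighted confidence of the teacher model.
    # A sentence ranks high when the teacher is confident (low negative
    # log-probability) on its important tokens.
    imp = head_counts(heads)
    total = sum(imp) or 1
    weights = [c / total for c in imp]
    nll = [-math.log(p) for p in token_probs]  # probabilities assumed > 0
    return -sum(w * u for w, u in zip(weights, nll))

def sample_monolingual(corpus, k, seed=0):
    # corpus: list of (dependency_heads, teacher_token_probs) per sentence.
    # Draw k sentence indices with probability proportional to a softmax
    # over priorities, favoring important, low-uncertainty sentences.
    rng = random.Random(seed)
    scores = [priority(h, p) for h, p in corpus]
    m = max(scores)
    weights = [math.exp(s - m) for s in scores]
    return rng.choices(range(len(corpus)), weights=weights, k=k)
```

In the method as summarized, the sampled sentences would then be translated by the teacher model to form the synthetic parallel corpus for training the student; only the scoring and sampling step is sketched here.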

Keywords