IEEE Access (Jan 2022)
A Non-Autoregressive Neural Machine Translation Model With Iterative Length Update of Target Sentence
Abstract
Non-autoregressive decoders in neural machine translation have received increasing attention because they decode faster than autoregressive decoders. However, their main drawback is low translation quality, which originates mainly from incorrect prediction of the target sentence length. To address this problem, this paper proposes a novel machine translation model with a new non-autoregressive decoder named Iterative and Length-Adjustive Non-Autoregressive Decoder (ILAND). This decoder adopts a masked language model to avoid generating low-confidence tokens and iteratively adjusts the length of the target sentence toward an optimal length. To achieve these goals, ILAND consists of three complementary sub-modules: a token masker, a length adjuster, and a token generator. The token masker and the token generator implement the masked language model, and the length adjuster optimizes the target sentence length. The sequence-to-sequence training of the translation model is also introduced. In this training, the length adjuster and the token generator are jointly trained since they share a similar structure. The effectiveness of the translation model is demonstrated empirically by showing that it outperforms other models with various non-autoregressive decoders. A thorough analysis suggests that the performance gain comes from target sentence length adaptation and joint learning. In addition, ILAND is shown to be faster than other iterative non-autoregressive decoders while remaining robust against the multi-modality problem.
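To make the iterative refinement described above concrete, the following is a minimal conceptual sketch of an ILAND-style decoding loop. The function names, interfaces, and the toy stand-in modules are hypothetical illustrations, not the paper's actual neural sub-modules; the sketch only shows how a token masker, a length adjuster, and a token generator could interact over several iterations.

```python
# Hypothetical sketch of an ILAND-style iterative decoding loop.
# The three sub-modules are passed in as callables; in the paper they
# are neural networks, here they are simple stand-ins for illustration.

from typing import Callable, List

MASK = "<mask>"

def iland_decode(
    initial_tokens: List[str],
    mask_low_confidence: Callable[[List[str]], List[str]],  # token masker
    adjust_length: Callable[[List[str]], List[str]],         # length adjuster
    fill_masks: Callable[[List[str]], List[str]],            # token generator
    max_iterations: int = 10,
) -> List[str]:
    """Iteratively refine a target sentence, updating its length each pass."""
    tokens = initial_tokens
    for _ in range(max_iterations):
        masked = mask_low_confidence(tokens)   # hide low-confidence tokens
        resized = adjust_length(masked)        # grow or shrink the sentence
        refined = fill_masks(resized)          # re-predict the masked positions
        if refined == tokens:                  # converged: stop early
            break
        tokens = refined
    return tokens

# Toy usage with dummy stand-ins for the three sub-modules.
if __name__ == "__main__":
    draft = ["ein", MASK, MASK, "haus"]
    result = iland_decode(
        draft,
        mask_low_confidence=lambda t: t,   # toy masker: keep existing masks
        adjust_length=lambda t: t,         # toy adjuster: keep current length
        fill_masks=lambda t: [w if w != MASK else "wort" for w in t],
    )
    print(result)
```

In this sketch the loop terminates either when an iteration leaves the sentence unchanged or after a fixed iteration budget, which mirrors the general pattern of iterative non-autoregressive decoders; the details of how the paper's modules score confidence and decide length changes are described in the body of the article.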
Keywords