IEEE Access (Jan 2023)

Performance Improvement on Traditional Chinese Task-Oriented Dialogue Systems With Reinforcement Learning and Regularized Dropout Technique

  • Jeng-Shin Sheu,
  • Siang-Ru Wu,
  • Wen-Hung Wu

DOI
https://doi.org/10.1109/ACCESS.2023.3248796
Journal volume & issue
Vol. 11
pp. 19849–19862

Abstract

The development of conversational voice assistant applications is in full swing around the world. This paper aims to develop traditional Chinese multi-domain task-oriented dialogue (TOD) systems. Such systems are typically implemented with a pipeline approach, in which submodules are optimized independently, resulting in inconsistencies among them. Instead, this paper implements end-to-end multi-domain TOD models using pre-trained deep neural networks (DNNs), integrating all the submodules into a single DNN model to resolve these inconsistencies. Data shortages are common in conversational natural language processing (NLP) tasks that use DNN models, and dropout regularization has been widely used to mitigate the overfitting caused by insufficient training data. However, the randomness dropout introduces leads to non-negligible discrepancies between training and inference. Pre-trained language models, for their part, have provided effective regularization for NLP tasks, but fine-tuning them suffers from exposure bias and a loss-evaluation mismatch. To this end, we propose a reinforcement learning (RL) approach to address both issues. Furthermore, we adopt a method called regularized dropout (R-Drop) to reduce the inconsistency introduced by the dropout layers of DNNs. Experimental results show that the proposed RL approach and the R-Drop technique significantly improve the joint goal accuracy (JGA) score and the combined score of the traditional Chinese TOD system on the dialogue state tracking (DST) and end-to-end sentence prediction tasks, respectively.
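The R-Drop technique mentioned in the abstract works by feeding the same input through the network twice (each pass sampling a different dropout mask) and adding a symmetric KL-divergence term that pulls the two output distributions together, reducing the training/inference gap that dropout introduces. The following is a toy numpy sketch of that loss under our own assumptions (a single dropout + linear + softmax classifier, hypothetical names `forward` and `r_drop_loss`), not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def forward(x, W, p_drop):
    # One stochastic pass: inverted dropout on the input features,
    # then a linear layer followed by softmax.
    mask = (rng.random(x.shape) >= p_drop) / (1.0 - p_drop)
    return softmax((x * mask) @ W)

def kl(p, q, eps=1e-12):
    # Per-example KL divergence KL(p || q) between two categorical distributions.
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

def r_drop_loss(x, y, W, p_drop=0.1, alpha=1.0):
    # Two stochastic forward passes of the SAME input through the SAME weights;
    # each pass samples an independent dropout mask.
    p1 = forward(x, W, p_drop)
    p2 = forward(x, W, p_drop)
    idx = np.arange(len(y))
    # Standard negative log-likelihood, averaged over the two passes.
    nll = -0.5 * (np.log(p1[idx, y] + 1e-12) + np.log(p2[idx, y] + 1e-12)).mean()
    # Symmetric KL term pulls the two dropout-perturbed distributions together.
    consistency = 0.5 * (kl(p1, p2) + kl(p2, p1)).mean()
    return nll + alpha * consistency
```

With `p_drop=0` the two passes coincide and the consistency term vanishes, recovering the plain cross-entropy loss; the weight `alpha` trades off task loss against consistency.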

Keywords