IEEE Access (Jan 2020)

Policy Reuse for Dialog Management Using Action-Relation Probability

  • Tung T. Nguyen
  • Koichiro Yoshino
  • Sakriani Sakti
  • Satoshi Nakamura

DOI
https://doi.org/10.1109/ACCESS.2020.3017780
Journal volume & issue
Vol. 8
pp. 159639 – 159649

Abstract

We study the problem of policy adaptation for reinforcement-learning-based dialog management. Policy adaptation is a commonly used technique for alleviating data sparsity when training a goal-oriented dialog system for a new task (the target task): it reuses knowledge acquired while learning policies for an existing task (the source task). Current approaches to dialog policy adaptation require considerable time and effort because they rely on reinforcement learning algorithms to train a new policy for the target task from scratch. In this paper, we show that a dialog policy can be learned without reinforcement-learning training in the target task. In contrast to existing works, our proposed method learns the relation between the action sets of the source and target tasks in the form of a probability distribution. We can thus immediately derive a policy for the target task, which significantly reduces adaptation time. Our experiments show that the proposed method learns a new policy for the target task much more quickly. Moreover, the learned policy outperforms policies obtained by fine-tuning when the amount of available data on the target task is limited.
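To make the derivation step concrete, below is a minimal Python sketch of the policy-reuse idea described in the abstract, assuming the target policy is obtained by marginalizing over source actions: P(a_t | s) = sum over a_s of P(a_t | a_s) * pi_source(a_s | s). The action names, the relation values, and the helper derive_target_policy are all hypothetical illustrations, not taken from the paper; in the paper the action-relation distribution is learned from data rather than hand-specified.

    import numpy as np

    # Hypothetical action sets for a source task (e.g., restaurant search)
    # and a target task (e.g., hotel booking); names are illustrative only.
    SOURCE_ACTIONS = ["request_area", "request_price", "inform_restaurant"]
    TARGET_ACTIONS = ["request_location", "request_budget",
                      "inform_hotel", "request_dates"]

    # Assumed action-relation matrix: relation[i, j] ~ P(target action j | source action i).
    # These values are made up for illustration; each row sums to 1.
    relation = np.array([
        [0.85, 0.05, 0.05, 0.05],
        [0.05, 0.80, 0.05, 0.10],
        [0.05, 0.05, 0.80, 0.10],
    ])

    def derive_target_policy(source_action_probs: np.ndarray) -> np.ndarray:
        """Map the source policy's action distribution for the current state
        to a target-task action distribution by marginalizing over source
        actions: P(a_t | s) = sum_{a_s} P(a_t | a_s) * pi_source(a_s | s)."""
        return source_action_probs @ relation

    # Example: the source policy strongly prefers "request_area" in this state.
    pi_source = np.array([0.7, 0.2, 0.1])
    pi_target = derive_target_policy(pi_source)
    for action, prob in zip(TARGET_ACTIONS, pi_target):
        print(f"{action}: {prob:.3f}")

Because the target policy is a fixed linear map of the source policy's output, no reinforcement-learning training loop is needed in the target task once the relation distribution has been estimated, which is the source of the adaptation speed-up the abstract claims.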

Keywords