IEEE Access (Jan 2023)
Enhancing Conversational Model With Deep Reinforcement Learning and Adversarial Learning
Abstract
This paper develops a chatbot conversational model aimed at two goals: 1) utilizing contextual information to generate accurate and relevant responses, and 2) implementing strategies that make conversations human-like. We propose a supervised learning approach for model development and train the model on a dataset of multi-turn conversations. In particular, we first develop a module based on deep reinforcement learning that maximizes the use of contextual information, ensuring accurate response generation. We then incorporate the response generation process into an adversarial learning framework to make the generated responses more human-like. Combining these two phases yields a unified model that generates semantically appropriate responses expressed as naturally as human-generated ones in conversation. We conducted various experiments and obtained significant improvements over the baseline and other related studies.
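To illustrate the kind of training loop the abstract describes, the sketch below shows a toy adversarial-reinforcement setup: a generator (here a categorical policy over a few candidate responses) is updated by REINFORCE, using a discriminator's "human-likeness" score as the reward. All names, the fixed response set, and the stand-in discriminator are illustrative assumptions, not the paper's actual architecture.

```python
import math
import random

random.seed(0)

# Illustrative candidate responses; a real generator would decode tokens.
RESPONSES = ["ok", "I see, tell me more", "yes"]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def discriminator(response):
    # Stand-in for a learned discriminator: it scores longer, more
    # engaged replies as more "human-like" (purely for illustration).
    return min(1.0, len(response.split()) / 5.0)

def reinforce_step(logits, lr=0.5):
    # Sample a response from the policy, query the discriminator for a
    # reward, and push the sampled response's log-probability up in
    # proportion to that reward (vanilla policy gradient, no baseline).
    probs = softmax(logits)
    i = random.choices(range(len(RESPONSES)), weights=probs)[0]
    reward = discriminator(RESPONSES[i])
    for j in range(len(logits)):
        grad = (1.0 if j == i else 0.0) - probs[j]
        logits[j] += lr * reward * grad
    return logits

logits = [0.0, 0.0, 0.0]
for _ in range(200):
    logits = reinforce_step(logits)

# The policy should come to favour the response the discriminator
# rewards most highly.
best = max(range(len(RESPONSES)), key=lambda j: logits[j])
```

In the full framework the discriminator is itself trained to separate human from generated responses, so the reward signal sharpens as the generator improves; this toy version freezes the discriminator to keep the policy-gradient step visible.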
Keywords