Complex & Intelligent Systems (Dec 2022)

DM-DQN: Dueling Munchausen deep Q network for robot path planning

  • Yuwan Gu,
  • Zhitao Zhu,
  • Jidong Lv,
  • Lin Shi,
  • Zhenjie Hou,
  • Shoukun Xu

DOI
https://doi.org/10.1007/s40747-022-00948-7
Journal volume & issue
Vol. 9, no. 4
pp. 4287 – 4300

Abstract


In order to achieve collision-free path planning in complex environments, the Munchausen deep Q-network (M-DQN) is applied to a mobile robot to learn the best decision. Building on Soft-DQN, M-DQN adds the scaled log-policy to the immediate reward, which allows the agent to explore more. However, M-DQN converges slowly. This paper proposes an improved algorithm, DM-DQN, to address that problem. First, the network structure is improved on the basis of M-DQN by decomposing it into a value function and an advantage function, thereby decoupling action selection from action evaluation; this speeds up convergence, improves generalization, and lets the network learn the best decision faster. Second, to keep the robot's trajectory from passing too close to obstacle edges, a reward function based on an artificial potential field is proposed to drive the trajectory away from the vicinity of obstacles. Simulation results show that the method learns more efficiently and converges faster than DQN, Dueling DQN and M-DQN in both static and dynamic environments, and plans collision-free paths that keep clear of obstacles.
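The two ingredients the abstract combines can be sketched briefly. A minimal illustration is given below, assuming the standard dueling aggregation Q(s,a) = V(s) + A(s,a) - mean(A) and the standard Munchausen reward augmentation r + α·τ·clip(log π(a|s), l₀, 0); the function names, and the hyperparameter values α = 0.9, τ = 0.03, l₀ = −1 (the defaults from the original Munchausen RL work), are illustrative assumptions, not taken from this paper.

```python
import numpy as np

def dueling_q(value, advantages):
    """Dueling aggregation: Q(s,a) = V(s) + A(s,a) - mean_a' A(s,a').

    Subtracting the mean advantage makes the V/A decomposition
    identifiable, decoupling state-value estimation from the
    relative ranking of actions."""
    advantages = np.asarray(advantages, dtype=float)
    return value + advantages - advantages.mean()

def munchausen_reward(reward, log_pi_a, alpha=0.9, tau=0.03, l0=-1.0):
    """Munchausen-augmented reward: r + alpha * tau * log pi(a|s).

    The log-policy term is clipped to [l0, 0] for numerical
    stability, since log pi can diverge to -inf for rarely
    chosen actions."""
    return reward + alpha * tau * np.clip(log_pi_a, l0, 0.0)
```

For example, with V(s) = 1 and advantages [1, 2, 3], `dueling_q` yields Q-values [0, 1, 2]: the mean advantage is removed, but the ordering of actions (which drives action selection) is preserved.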

Keywords