International Journal of Advanced Robotic Systems (Apr 2020)

Active collaboration in relative observation for multi-agent visual simultaneous localization and mapping based on Deep Q Network

  • Zhaoyi Pei,
  • Songhao Piao,
  • Meixiang Quan,
  • Muhammad Zuhair Qadir,
  • Guo Li

DOI
https://doi.org/10.1177/1729881420920216
Journal volume & issue
Vol. 17

Abstract


This article proposes a unique active relative localization mechanism for multi-agent simultaneous localization and mapping, in which an agent to be observed is treated as a task, and the other agents that wish to assist it perform that task through relative observation. A task allocation algorithm based on deep reinforcement learning is proposed for this mechanism: each agent chooses, on its own initiative, whether to localize other agents or to continue its own independent simultaneous localization and mapping. In this way, the simultaneous localization and mapping process of each agent is shaped by the collaboration. First, a unique observation function that models the whole multi-agent system is derived based on ORB-SLAM. Second, a novel type of Deep Q Network, the multi-agent system Deep Q Network (MAS-DQN), is deployed to learn the correspondence between Q values and state–action pairs, and an abstract representation of the agents in the multi-agent system is learned during their collaboration. Finally, each agent acts with a certain degree of freedom according to MAS-DQN. The results of comparative simulation experiments show that this mechanism improves the efficiency of cooperation in multi-agent simultaneous localization and mapping.
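
To make the decision step described in the abstract concrete, the sketch below shows how an agent might use a Q-network to choose between continuing its own SLAM (action 0) and performing a relative observation of another agent (actions 1..N-1). This is a minimal illustration assuming a plain feedforward Q-network with epsilon-greedy selection; the class name MASDQNSketch, the state dimension, and the layer sizes are illustrative assumptions, not the architecture reported in the paper.

    import torch
    import torch.nn as nn

    class MASDQNSketch(nn.Module):
        """Hypothetical Q-network: maps the multi-agent state vector to one
        Q-value per action, where action 0 = continue independent SLAM and
        action j >= 1 = perform relative observation of agent j."""
        def __init__(self, state_dim: int, num_other_agents: int, hidden: int = 128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(state_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 1 + num_other_agents),  # Q(s, a) for each action
            )

        def forward(self, state: torch.Tensor) -> torch.Tensor:
            return self.net(state)

    def select_action(q_net: MASDQNSketch, state: torch.Tensor, epsilon: float) -> int:
        """Epsilon-greedy choice between continuing own SLAM and observing another agent."""
        num_actions = q_net.net[-1].out_features
        if torch.rand(1).item() < epsilon:
            return int(torch.randint(num_actions, (1,)).item())
        with torch.no_grad():
            return int(q_net(state).argmax().item())

    # Example: a 4-agent system with an assumed 32-dimensional state per decision step.
    q_net = MASDQNSketch(state_dim=32, num_other_agents=3)
    action = select_action(q_net, torch.randn(32), epsilon=0.1)
    print("chosen action:", action)  # 0 = keep mapping independently, 1..3 = observe that agent

In a full system, the chosen action would feed back into the ORB-SLAM-based observation function, and the resulting reward would be used to update the Q-network, which is what allows the collaboration itself to shape each agent's mapping process.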