IEEE Access (Jan 2021)

A Cooperative Multi-Agent Reinforcement Learning Method Based on Coordination Degree

  • Haoyan Cui,
  • Zhen Zhang

DOI
https://doi.org/10.1109/ACCESS.2021.3110255
Journal volume & issue
Vol. 9
pp. 123805 – 123814

Abstract

Multi-agent reinforcement learning (MARL) has become a prevalent method for solving cooperative problems owing to its tractable implementation and task distribution. The goal of MARL algorithms for fully cooperative scenarios is to obtain the optimal joint strategy that maximizes the expected common cumulative reward of all agents. However, to date, the analysis of MARL dynamics has focused on repeated games with few agents and actions. To address this, we propose a cooperative MARL algorithm based on the coordination degree (CMARL-CD) and analyze its dynamics in more general cases, namely repeated games with more agents and actions. Theoretical analysis shows that if the component action of every optimal joint action is unique, all optimal joint actions are asymptotically stable critical points. The CMARL-CD algorithm realizes coordination among agents without the need to estimate the global Q-value function. Each agent estimates the coordination degree of each of its own actions, which represents that action's potential to be a component of the optimal joint action. The efficacy of the CMARL-CD algorithm is studied through repeated games and stochastic games.
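The abstract does not specify the CMARL-CD update rule, but the core idea (each agent scores only its own actions, with no global Q-function over joint actions) can be illustrated with a stand-in technique. The sketch below uses optimistic (max-based) independent learners in the spirit of distributed Q-learning on a climbing-game-style payoff matrix, a common cooperative MARL testbed; the per-action score here is an assumed proxy for the coordination degree, not the paper's actual estimator.

```python
import random

# Common reward for (action of agent 0, action of agent 1).
# A climbing-game-style matrix: the optimal joint action (0, 0) is
# surrounded by heavy miscoordination penalties.
PAYOFF = [
    [11, -30, 0],
    [-30, 7, 6],
    [0, 0, 5],
]

N_ACTIONS = 3
ALPHA = 0.1    # learning rate
EPSILON = 0.2  # exploration rate

def train(episodes=5000, seed=0):
    rng = random.Random(seed)
    # Each agent keeps a score over its OWN actions only -- no global
    # Q-value over joint actions is ever estimated.
    q = [[0.0] * N_ACTIONS for _ in range(2)]
    for _ in range(episodes):
        acts = [
            rng.randrange(N_ACTIONS) if rng.random() < EPSILON
            else max(range(N_ACTIONS), key=lambda a: q[i][a])
            for i in range(2)
        ]
        r = PAYOFF[acts[0]][acts[1]]
        for i, a in enumerate(acts):
            # Optimistic update: the estimate only moves up toward high
            # rewards, so it tracks the BEST outcome an action can reach
            # under good coordination rather than its average payoff.
            q[i][a] += ALPHA * max(0.0, r - q[i][a])
    # Greedy joint action after training.
    return [max(range(N_ACTIONS), key=lambda a: q[i][a]) for i in range(2)]

print(train())
```

Under this optimistic rule each score converges toward the highest reward the action can attain with a coordinated partner, so both agents end up preferring action 0 despite the -30 miscoordination penalties, which an average-based independent learner typically avoids.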

Keywords