IEEE Access (Jan 2024)

Scaling Up Multi-Agent Reinforcement Learning: An Extensive Survey on Scalability Issues

  • Dingbang Liu,
  • Fenghui Ren,
  • Jun Yan,
  • Guoxin Su,
  • Wen Gu,
  • Shohei Kato

DOI: https://doi.org/10.1109/ACCESS.2024.3410318
Journal volume & issue: Vol. 12, pp. 94610–94631

Abstract

Multi-agent learning has made significant strides in recent years. Benefiting from deep learning, multi-agent deep reinforcement learning (MADRL) has transcended the traditional limitations of tabular tasks, attracting tremendous research interest. However, compared with other challenges in MADRL, scalability remains underemphasized, impeding the application of MADRL in complex scenarios. Scalability is a foundational attribute of a multi-agent system (MAS) and offers a potent lens for understanding and improving collective learning among agents. It encompasses the capacity to handle a growing state-action space, which arises not only from a large number of agents but also from other factors related to the agents and the environment. In contrast to prior surveys, this work provides a comprehensive exposition of scalability concerns in MADRL. We first introduce foundational knowledge about deep reinforcement learning and MADRL to underscore the distinctiveness of scalability issues in this domain. We then examine the problems posed by scalability, covering agent complexity, environment complexity, and robustness against perturbations, and elaborate on the methods that mark the evolution of scalable algorithms. To conclude this survey, we discuss challenges, identify trends, and outline possible directions for future work on scalability issues. We hope this survey deepens researchers' understanding of the field and provides a valuable resource for in-depth exploration.

Keywords