Mathematics (Aug 2022)

Noise-Regularized Advantage Value for Multi-Agent Reinforcement Learning

  • Siying Wang,
  • Wenyu Chen,
  • Jian Hu,
  • Siyue Hu,
  • Liwei Huang

DOI
https://doi.org/10.3390/math10152728
Journal volume & issue
Vol. 10, no. 15
p. 2728

Abstract

Leveraging global state information to enhance policy optimization is a common approach in multi-agent reinforcement learning (MARL). Even with the supplement of state information, the agents still suffer from insufficient exploration during training. Moreover, training with batch-sampled examples from the replay buffer induces a policy overfitting problem: multi-agent proximal policy optimization (MAPPO) may not perform as well as independent PPO (IPPO), even with additional information in the centralized critic. In this paper, we propose a novel noise-injection method to regularize the policies of agents and mitigate the overfitting issue. We analyze the cause of policy overfitting in actor–critic MARL, and design two specific patterns of noise injection that apply random Gaussian noise to the advantage function to stabilize training and enhance performance. The experimental results on the Matrix Game and StarCraft II show the higher training efficiency and superior performance of our method, and the ablation studies indicate that our method keeps the entropy of agents' policies higher during training, which leads to more exploration.
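The abstract describes injecting random Gaussian noise into the advantage function as a policy regularizer. Below is a minimal, illustrative sketch of two plausible injection patterns (additive and multiplicative) on a batch of advantage estimates; the function names, the `sigma` parameter, and the exact pattern definitions are assumptions for illustration, not the paper's precise formulation.

```python
import numpy as np

def additive_noise_advantage(advantages, sigma=0.1, rng=None):
    """Pattern 1 (assumed): add zero-mean Gaussian noise, A' = A + eps."""
    rng = rng or np.random.default_rng()
    advantages = np.asarray(advantages, dtype=float)
    eps = rng.normal(loc=0.0, scale=sigma, size=advantages.shape)
    return advantages + eps

def multiplicative_noise_advantage(advantages, sigma=0.1, rng=None):
    """Pattern 2 (assumed): scale by Gaussian noise around 1, A' = A * (1 + eps)."""
    rng = rng or np.random.default_rng()
    advantages = np.asarray(advantages, dtype=float)
    eps = rng.normal(loc=0.0, scale=sigma, size=advantages.shape)
    return advantages * (1.0 + eps)

# The perturbed advantages would replace the raw estimates in the
# PPO policy-gradient loss, slightly randomizing each agent's update
# direction and thereby discouraging overfitting to replayed batches.
```

Because the noise is zero-mean, the perturbed advantages are unbiased in expectation, so the regularization comes from the variance of the updates rather than from shifting the gradient direction.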

Keywords