PLoS ONE (Jan 2017)

Multiagent cooperation and competition with deep reinforcement learning.

  • Ardi Tampuu,
  • Tambet Matiisen,
  • Dorian Kodelja,
  • Ilya Kuzovkin,
  • Kristjan Korjus,
  • Juhan Aru,
  • Jaan Aru,
  • Raul Vicente

DOI
https://doi.org/10.1371/journal.pone.0172395
Journal volume & issue
Vol. 12, no. 4
p. e0172395

Abstract

Cooperation and competition can evolve when multiple adaptive agents share a biological, social, or technological niche. In the present work we study how cooperation and competition emerge between autonomous agents that learn by reinforcement while using only their raw visual input as the state representation. In particular, we extend the Deep Q-Learning framework to multiagent environments to investigate the interaction between two learning agents in the well-known video game Pong. By manipulating the classical rewarding scheme of Pong, we show how competitive and collaborative behaviors emerge. We also describe the progression from competitive to collaborative behavior as the incentive to cooperate is increased. Finally, we show that learning by playing against another adaptive agent, instead of against a hard-wired algorithm, results in more robust strategies. The present work shows that Deep Q-Networks can become a useful tool for studying decentralized learning in multiagent systems coping with high-dimensional environments.
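The manipulated rewarding scheme described in the abstract can be sketched as a single interpolation parameter. The snippet below is a minimal illustration, not the paper's exact implementation: the function name `pong_rewards` and the parameter `rho` are hypothetical, and the assumed convention is that the player who concedes the ball always receives -1, while the scoring player's reward varies from +1 (fully competitive) down to -1 (fully cooperative, where losing the ball punishes both players).

```python
def pong_rewards(scorer, rho):
    """Return the (left, right) reward pair for a point in two-player Pong.

    Assumed convention (illustrative, not the paper's exact scheme):
    the conceding player always receives -1; the scoring player receives
    `rho`, which interpolates from fully competitive (rho = +1, zero-sum)
    to fully cooperative (rho = -1, both players punished when the ball
    is lost and thus incentivized to keep it in play).
    """
    if scorer not in ("left", "right"):
        raise ValueError("scorer must be 'left' or 'right'")
    if scorer == "left":
        return (rho, -1.0)
    return (-1.0, rho)


# Fully competitive: scoring is rewarded, conceding is punished.
print(pong_rewards("left", 1.0))    # (1.0, -1.0)

# Fully cooperative: every lost ball hurts both players.
print(pong_rewards("left", -1.0))   # (-1.0, -1.0)
```

With decentralized learning, each agent runs its own Deep Q-Network and treats the other agent as part of the environment; sweeping `rho` between the two extremes is what produces the progression from competitive to collaborative behavior.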