Frontiers in Computer Science (May 2023)

Reinforcement learning for communication load balancing: approaches and challenges

  • Di Wu,
  • Jimmy Li,
  • Amal Ferini,
  • Yi Tian Xu,
  • Michael Jenkin,
  • Seowoo Jang,
  • Xue Liu,
  • Gregory Dudek

DOI: https://doi.org/10.3389/fcomp.2023.1156064
Journal volume & issue: Vol. 5

Abstract

The amount of cellular communication network traffic has increased dramatically in recent years, and this increase has led to a demand for enhanced network performance. Communication load balancing aims to balance the load across available network resources and thus improve the quality of service for network users. Most existing load balancing algorithms are manually designed and tuned rule-based methods for which near-optimality is almost impossible to achieve. Furthermore, rule-based methods are difficult to adapt to quickly changing traffic patterns in real-world environments. Reinforcement learning (RL) algorithms, especially deep reinforcement learning algorithms, have achieved impressive successes in many application domains and offer the potential for good adaptability to dynamic changes in network load patterns. This survey presents a systematic overview of RL-based communication load-balancing methods and discusses related challenges and opportunities. We first provide an introduction to the load balancing problem and to RL, from fundamental concepts to advanced models. Then, we review RL approaches that address emerging communication load balancing issues important to next-generation networks, including 5G and beyond. Finally, we highlight important challenges, open issues, and future research directions for applying RL to communication load balancing.
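To make the framing concrete, the sketch below shows one way load balancing can be cast as an RL problem, in the spirit the abstract describes but not taken from the survey itself: a tabular Q-learning agent adjusts a handover offset between two cells, observing a coarse load-difference state and receiving a reward that penalizes load imbalance. The two-cell environment, state/action definitions, and reward are hypothetical simplifications chosen for illustration only.

```python
import random
from collections import defaultdict

# Toy illustration of load balancing as an RL problem (hypothetical setup,
# not the formulation used in the survey). Two cells share N_USERS users;
# the action shifts a handover offset that moves users between the cells,
# and the reward penalizes load imbalance.

N_USERS = 20
ACTIONS = [-1, 0, +1]  # decrease / keep / increase the handover offset


def step(offset, action):
    """Apply the action, re-distribute users, return (new_offset, loads, reward)."""
    offset = max(-5, min(5, offset + action))
    p0 = 0.5 + 0.05 * offset  # toy load model: offset shifts users toward cell 0
    load0 = sum(random.random() < p0 for _ in range(N_USERS))
    load1 = N_USERS - load0
    reward = -abs(load0 - load1)  # perfectly balanced load gives reward 0
    return offset, (load0, load1), reward


def discretize(loads):
    """Coarse state: which cell is more loaded and roughly by how much."""
    diff = loads[0] - loads[1]
    return max(-3, min(3, diff // 4))


# Tabular Q-learning over the coarse state.
Q = defaultdict(lambda: [0.0] * len(ACTIONS))
alpha, gamma, eps = 0.1, 0.9, 0.2

offset, loads = 0, (N_USERS // 2, N_USERS // 2)
state = discretize(loads)
for t in range(5000):
    a = (random.randrange(len(ACTIONS)) if random.random() < eps
         else max(range(len(ACTIONS)), key=lambda i: Q[state][i]))
    offset, loads, r = step(offset, ACTIONS[a])
    nxt = discretize(loads)
    Q[state][a] += alpha * (r + gamma * max(Q[nxt]) - Q[state][a])
    state = nxt

print("learned offset action per state:",
      {s: ACTIONS[max(range(len(ACTIONS)), key=lambda i: q[i])]
       for s, q in Q.items()})
```

In this toy setting the agent learns to push the offset toward the less-loaded cell whenever the load difference grows; the rule-based methods the abstract contrasts with would instead hard-code such thresholds by hand.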

Keywords