IEEE Access (Jan 2023)

Joint Beamforming, Power Control, and Interference Coordination: A Reinforcement Learning Approach Replacing Rewards With Examples

  • Jeng-Shin Sheu
  • Cheng-Kuei Huang
  • Chun-Lung Tsai

DOI
https://doi.org/10.1109/ACCESS.2023.3306518
Journal volume & issue
Vol. 11
pp. 88854–88868

Abstract


In this paper, we consider the problem of multi-cell interference coordination through joint beamforming and power control. Recent efforts have explored reinforcement learning (RL) methods to tackle this complex optimization problem. Typically, a decentralized multi-agent framework is adopted, wherein each base station operates as an independent RL agent. This distributed coordination has gained attention because designing a reward function that effectively captures the condition of the entire cellular network is challenging for single-agent RL models. However, the distributed approach introduces unique challenges, particularly the non-stationarity of the multi-agent environment, as agents continually adapt their policies in response to one another. This non-stationarity necessitates information exchange among agents, because each agent's local observations are insufficient to capture the true state of the environment. Unfortunately, this information exchange incurs significant overhead, thereby limiting data transmission capacity. To address these challenges, we propose a novel single-agent RL approach that eliminates both the need for information exchange and the conventional reward function. Instead, we leverage success examples to guide the learning process. Simulation results show that the proposed approach outperforms the existing multi-agent method and a theoretical algorithm in terms of sum rate. Additionally, our approach ensures a uniform quality of service while maximizing the overall sum rate.
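To make the example-driven idea concrete, below is a minimal, self-contained sketch (in Python, and not the authors' implementation) of reward-free learning from success examples applied to joint power control. A logistic classifier is trained to separate "success examples" (channel/power configurations meeting a sum-rate target) from ordinary ones, and its score stands in for a hand-designed reward when selecting transmit powers. Every name and number here (N_CELLS, POWER_LEVELS, TARGET, the exponential channel-gain model) is an illustrative assumption, and the classifier is applied myopically to a one-shot decision rather than inside a full RL training loop.

```python
# Sketch: replace the reward function with a classifier trained on success examples.
# All constants and the channel model below are illustrative assumptions.
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
N_CELLS = 3                                # co-channel base-station/user links (assumption)
POWER_LEVELS = np.array([0.1, 0.5, 1.0])   # discrete transmit power levels (assumption)
NOISE = 1e-2                               # receiver noise power (assumption)
TARGET = 3.0                               # sum rate (bit/s/Hz) defining "success" (assumption)

def sum_rate(gains, powers):
    """Shannon sum rate: diagonal gains carry the desired signal, off-diagonal the interference."""
    desired = np.diag(gains) * powers
    interference = gains @ powers - desired
    return np.sum(np.log2(1.0 + desired / (interference + NOISE)))

def features(gains, powers):
    """Per-link desired-signal and interference terms, the quantities the SINR depends on."""
    desired = np.diag(gains) * powers
    interference = gains @ powers - desired
    return np.concatenate([desired, interference])

# Collect success examples (states meeting TARGET) and ordinary negatives.
pos, neg = [], []
for _ in range(20000):
    gains = rng.exponential(1.0, (N_CELLS, N_CELLS))   # Rayleigh-fading power gains
    powers = rng.choice(POWER_LEVELS, size=N_CELLS)
    (pos if sum_rate(gains, powers) >= TARGET else neg).append(features(gains, powers))
    if len(pos) >= 200 and len(neg) >= 200:
        break
X = np.vstack([pos[:200], neg[:200]])
y = np.concatenate([np.ones(len(pos[:200])), np.zeros(len(neg[:200]))])

# Train a logistic "success" classifier by plain gradient descent.
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * np.mean(p - y)

def success_score(gains, powers):
    """Classifier probability that this state resembles a success example (reward stand-in)."""
    z = features(gains, powers) @ w + b
    return 1.0 / (1.0 + np.exp(-z))

# Pick joint transmit powers for a new channel by maximizing the learned score.
gains = rng.exponential(1.0, (N_CELLS, N_CELLS))
best = max(product(POWER_LEVELS, repeat=N_CELLS),
           key=lambda pw: success_score(gains, np.array(pw)))
print("chosen powers:", best, "-> sum rate:", round(sum_rate(gains, np.array(best)), 2))
```

In the paper's setting, the success examples would come from network states already known to satisfy the desired quality of service, and the beamformer choice would join the transmit power in the action space; extending the sketch that way would only enlarge the discrete set searched in the final step.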

Keywords