Jisuanji Kexue (Computer Science), Nov 2021

Reinforcement Learning Based Dynamic Basestation Orchestration for High Energy Efficiency

  • ZENG De-ze, LI Yue-peng, ZHAO Yu-yang, GU Lin

DOI
https://doi.org/10.11896/jsjkx.201000008
Journal volume & issue
Vol. 48, no. 11
pp. 363–371

Abstract

The mutual promotion of mobile communication technology and the mobile communication industry has brought unprecedented prosperity in the mobile Internet era. The explosion of mobile devices, the expansion of network scale, and rising service requirements are driving the next technological revolution in wireless networks. 5G meets the demand for a thousand-fold improvement in service performance through dense network deployment, but co-channel interference and bursty requests make the energy consumption of this solution enormous. To support energy-efficient and high-performance 5G services, it is imperative to upgrade the management scheme of mobile networks. In this paper, we use a short-cycle management framework with cache queues to achieve agile and smooth handling of request bursts, avoiding drastic fluctuations in service quality. We use deep reinforcement learning to learn the user distribution and communication demands, infer the load-change patterns of base stations, and thereby realize pre-scheduling and pre-allocation of energy while guaranteeing quality of service and improving energy efficiency. Compared with the classic DQN algorithm, the two-buffer DQN algorithm proposed in this paper converges nearly 20% faster. In terms of decision performance, it saves 4.8% energy consumption compared with the widely used keep-on strategy.
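The abstract names a two-buffer DQN but gives no implementation details. The sketch below is only one plausible reading of the idea: a replay memory split into a small "recent" buffer and a large "long-term" buffer, with training batches drawn from both so the agent tracks bursty load changes without forgetting older behaviour. All names and parameters (TwoBufferReplay, recent_frac, buffer sizes) are illustrative assumptions, not the authors' code.

```python
import random
from collections import deque

class TwoBufferReplay:
    """Illustrative two-buffer replay memory (assumed design, not the
    paper's): a small buffer of the newest transitions plus a large
    buffer of older ones, sampled together for each training batch."""

    def __init__(self, recent_cap=2_000, longterm_cap=50_000, recent_frac=0.5):
        self.recent = deque(maxlen=recent_cap)      # newest transitions
        self.longterm = deque(maxlen=longterm_cap)  # older transitions
        self.recent_frac = recent_frac              # share of batch drawn from 'recent'

    def push(self, state, action, reward, next_state, done):
        # When 'recent' is full, deque(maxlen=...) would silently drop
        # the oldest item, so migrate it to the long-term buffer first.
        if len(self.recent) == self.recent.maxlen:
            self.longterm.append(self.recent[0])
        self.recent.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Mix fresh and old experience in one mini-batch.
        n_recent = min(int(batch_size * self.recent_frac), len(self.recent))
        n_long = min(batch_size - n_recent, len(self.longterm))
        batch = random.sample(self.recent, n_recent)
        batch += random.sample(self.longterm, n_long)
        random.shuffle(batch)
        return batch

    def __len__(self):
        return len(self.recent) + len(self.longterm)

# Usage sketch: fill with dummy transitions and draw a training batch.
buf = TwoBufferReplay()
for t in range(5_000):
    buf.push(t, 0, 0.0, t + 1, False)
batch = buf.sample(64)
```

Under this assumed split, fresh burst transitions appear in batches immediately instead of being diluted by a single large buffer, which is one mechanism that could plausibly yield the faster convergence the abstract reports; the actual algorithm may differ.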

Keywords