IEEE Access (Jan 2020)

Power Consumption Optimization Using Gradient Boosting Aided Deep Q-Network in C-RANs

  • Yifan Luo,
  • Jiawei Yang,
  • Wei Xu,
  • Kezhi Wang,
  • Marco Di Renzo

DOI
https://doi.org/10.1109/ACCESS.2020.2978935
Journal volume & issue
Vol. 8
pp. 46811–46823

Abstract


Cloud Radio Access Networks (C-RANs) have the potential to support the growing data traffic in 5G networks. However, owing to their complex state space, resource allocation in C-RANs is time-consuming and computationally expensive, making it challenging to meet the demands of energy efficiency and low latency in real-time wireless applications. In this paper, we propose a gradient boosting decision tree (GBDT)-based deep Q-network (DQN) framework for solving the dynamic resource allocation (DRA) problem in a real-time C-RAN, where the heavy computation required to solve second-order cone programming (SOCP) problems is cut down and significant power consumption can be saved. First, we apply GBDT to a regression task that approximates the solutions of the SOCP problem formulated from the beamforming design, which consumes heavy computing resources when solved by traditional algorithms. Then, we design a DQN, a standard deep reinforcement learning model, to autonomously generate a robust policy that controls the status of the remote radio heads (RRHs) and saves power consumption in the long term. The DQN deploys deep neural networks (DNNs) to handle the innumerable states of the real-time C-RAN system and generates the policy by observing the state and the reward produced by the GBDT. The generated policy is error-tolerant, since the gradient boosting regression may not strictly satisfy the constraints of the original problem. Simulation results validate the advantages of the proposed framework in terms of power-saving performance and computational complexity compared with existing methods.
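To make the two-stage pipeline concrete, below is a minimal sketch (not the authors' code) of the idea the abstract describes: a GBDT regressor stands in for the offline SOCP beamforming solver, and a small Q-network chooses RRH on/off actions using the GBDT's power estimate as part of the reward. The feature layout, reward form, network sizes, and training data are all assumptions for illustration; a full DQN as used in the paper would also include experience replay and a target network, which this sketch omits.

```python
# Illustrative sketch of a GBDT-aided DQN (assumptions noted inline).
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# --- Step 1: GBDT approximates the SOCP beamforming solution -------------
# Training pairs would come from an offline SOCP solver; here we fabricate
# placeholder data. Assumed features: user demands + RRH on/off mask;
# target: minimum transmit power returned by the solver.
N_RRH, N_USER = 4, 6
X_train = rng.random((2000, N_USER + N_RRH))                      # hypothetical features
y_train = X_train.sum(axis=1) + 0.1 * rng.standard_normal(2000)   # stand-in for SOCP power
gbdt = GradientBoostingRegressor(n_estimators=200, max_depth=3)
gbdt.fit(X_train, y_train)

# --- Step 2: Q-network controls RRH active/sleep states ------------------
state_dim = N_USER + N_RRH   # user demands + current RRH mask (assumed state)
n_actions = N_RRH + 1        # toggle one RRH, or leave the mask unchanged

q_net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                      nn.Linear(64, n_actions))
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma, eps = 0.9, 0.1

def step_env(state, action):
    """Toy transition: flip one RRH (or none), then query the GBDT for the
    transmit power; reward is the negative total power (transmit power plus
    an assumed static power of 1.0 per active RRH)."""
    nxt = state.copy()
    if action < N_RRH:
        nxt[N_USER + action] = 1.0 - nxt[N_USER + action]
    tx_power = gbdt.predict(nxt.reshape(1, -1))[0]
    reward = -(tx_power + 1.0 * nxt[N_USER:].sum())
    return nxt, reward

# Epsilon-greedy Q-learning loop driven by GBDT-generated rewards.
state = rng.random(state_dim)
for t in range(500):
    s = torch.tensor(state, dtype=torch.float32)
    a = int(rng.integers(n_actions)) if rng.random() < eps else int(q_net(s).argmax())
    nxt, r = step_env(state, a)
    with torch.no_grad():
        target = r + gamma * q_net(torch.tensor(nxt, dtype=torch.float32)).max()
    loss = (q_net(s)[a] - target) ** 2
    opt.zero_grad(); loss.backward(); opt.step()
    state = nxt
```

The key point the sketch captures is the one the abstract makes: each DQN training step calls the cheap GBDT prediction instead of re-solving an SOCP, which is where the claimed savings in computation come from.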

Keywords