IEEE Access (Jan 2021)

Autonomous Mobility Management for 5G Ultra-Dense HetNets via Reinforcement Learning With Tile Coding Function Approximation

  • Qianyu Liu,
  • Chiew Foong Kwong,
  • Sijia Zhou,
  • Tianhao Ye,
  • Lincan Li,
  • Saeid Pourroostaei Ardakani

DOI
https://doi.org/10.1109/ACCESS.2021.3095555
Journal volume & issue
Vol. 9
pp. 97942 – 97952

Abstract


Mobility management is an important feature in modern wireless networks that provides seamless and ubiquitous connectivity to mobile users. Due to the dense deployment of small cells and heterogeneous network topologies, traditional handover control methods can lead to various mobility-related problems, such as frequent handovers and handover failures. Moreover, the maintenance and operation cost of mobility management also increases with node density. In this paper, an autonomous mobility management control approach is proposed to improve the mobility robustness of user equipment (UE) and minimize the operational cost of mobility management. The proposed method is based on reinforcement learning, which autonomously learns an optimal handover control policy by interacting with the environment. A function approximation approach is adopted to allow reinforcement learning to handle a large state and action space, with a linear function approximator used to approximate the state-action value function. Finally, the semi-gradient state-action-reward-state-action (Sarsa) method is implemented to update the approximated state-action value function and learn the optimal handover control policy. Simulation results show that the proposed method effectively improves the mobility robustness of UE across different speed ranges. Compared with the conventional reference signal received power (RSRP) based approach, the proposed approach reduces unnecessary handovers by about 20% and latency by 58%, achieves a near-zero handover failure rate, and increases throughput by 12%.
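
The learning loop described in the abstract (a tile-coded linear approximation of the state-action value function, updated by semi-gradient Sarsa) can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the handover environment interface (`reset`/`step`), the choice of state features (e.g. serving/target RSRP and UE speed), the reward shaping, and all hyperparameters are placeholders.

```python
import numpy as np

# Minimal sketch: semi-gradient Sarsa with tile-coding linear function approximation.
# The environment, state bounds, reward, and hyperparameters below are assumptions
# for illustration only; they are not taken from the paper.

class TileCoder:
    """Maps a continuous state vector to the indices of active tiles, one per tiling."""
    def __init__(self, n_tilings=8, tiles_per_dim=8, state_low=None, state_high=None):
        self.n_tilings = n_tilings
        self.tiles_per_dim = tiles_per_dim
        self.low = np.asarray(state_low, dtype=float)
        self.high = np.asarray(state_high, dtype=float)
        self.dims = len(self.low)
        # Each tiling's grid is shifted by a different offset.
        self.offsets = np.linspace(0.0, 1.0, n_tilings, endpoint=False)
        self.tiles_per_tiling = tiles_per_dim ** self.dims
        self.n_features = n_tilings * self.tiles_per_tiling

    def features(self, state):
        scaled = (np.asarray(state, dtype=float) - self.low) / (self.high - self.low)
        active = []
        for t in range(self.n_tilings):
            shifted = np.clip(scaled * self.tiles_per_dim + self.offsets[t],
                              0, self.tiles_per_dim - 1e-9)
            idx = np.floor(shifted).astype(int)
            flat = np.ravel_multi_index(idx, (self.tiles_per_dim,) * self.dims)
            active.append(t * self.tiles_per_tiling + flat)
        return active

# Linear Q: one weight vector per discrete action
# (e.g. action 0 = stay on serving cell, action 1 = hand over to a target cell).
def q_value(w, active_tiles, action):
    return w[action, active_tiles].sum()

def epsilon_greedy(w, active_tiles, n_actions, eps, rng):
    if rng.random() < eps:
        return int(rng.integers(n_actions))
    return int(np.argmax([q_value(w, active_tiles, a) for a in range(n_actions)]))

def semi_gradient_sarsa(env, coder, n_actions=2, episodes=500,
                        alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    rng = np.random.default_rng(seed)
    # Step size is conventionally divided by the number of tilings.
    alpha = alpha / coder.n_tilings
    w = np.zeros((n_actions, coder.n_features))
    for _ in range(episodes):
        state = env.reset()                      # assumed env interface
        tiles = coder.features(state)
        action = epsilon_greedy(w, tiles, n_actions, eps, rng)
        done = False
        while not done:
            next_state, reward, done = env.step(action)
            target = reward
            if not done:
                next_tiles = coder.features(next_state)
                next_action = epsilon_greedy(w, next_tiles, n_actions, eps, rng)
                target += gamma * q_value(w, next_tiles, next_action)
            # Semi-gradient update: the gradient of the linear Q is 1 on active tiles.
            td_error = target - q_value(w, tiles, action)
            w[action, tiles] += alpha * td_error
            if not done:
                tiles, action = next_tiles, next_action
    return w
```

The coder would be instantiated with assumed state bounds, e.g. `TileCoder(state_low=[-120, -120, 0], state_high=[-60, -60, 30])` for serving RSRP, target RSRP, and UE speed; the paper's actual state design may differ.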

Keywords