IEEE Access (Jan 2024)
Judgement-Based Deep Q-Learning Framework for Interference Management in Small Cell Networks
Abstract
Small cell technology for future 6G networks allows network operators to increase network capacity by reducing the distance between Base Stations (BSs) and users, thereby increasing wireless channel gains. However, it also introduces significant computational complexity when optimally mitigating inter-cell and/or inter-beam interference through dynamic management of beamforming, transmit power, and user scheduling. In this paper, we formulate an optimization problem that maximizes the sum utility of users, where the decision variables are beam pattern selection, user scheduling, and transmit power allocation in small cell networks. We then address opportunities for performance enhancement and reduced computational complexity that existing studies have overlooked by proposing i) a novel Deep Q-Network (DQN) decision-making process that jointly learns all decision variables in a single Deep Reinforcement Learning (DRL) model without suffering from the curse of dimensionality, achieved by assigning a user-specific state to each agent together with a distributed interference approximation in which the interference to all users in all neighboring BSs is abstracted as that to a single user, and ii) a novel reward design in which the reward is judged against the result of a practical optimization-based solution. Finally, we demonstrate via simulations the superiority of the proposed Deep Q-Learning (DQL) algorithm over existing interference management algorithms, and we provide insights for network operators who will leverage DQL in future small cell networks through an in-depth performance analysis against a conventional DQL algorithm and practical optimization algorithms.
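To make the two contributions concrete, the following is a minimal sketch, not the paper's implementation, of how a single DQN agent could jointly select beam pattern, scheduled user, and power level from a flattened discrete action space, with a judgement-based reward that scores the agent's decision against a baseline optimization result. All sizes, names (`QNetwork`, `judged_reward`), and the +1/-1 reward shape are illustrative assumptions.

```python
import itertools
import torch
import torch.nn as nn

# Hypothetical problem sizes; the paper's action space covers beam pattern
# selection, user scheduling, and transmit power allocation per small cell.
N_BEAMS, N_USERS, N_POWER_LEVELS = 4, 3, 5
STATE_DIM = 8  # user-specific state under the distributed interference approximation

# Flatten the joint (beam, user, power) choice into one discrete action set so a
# single DQN learns all decision variables instead of one model per variable.
ACTIONS = list(itertools.product(range(N_BEAMS), range(N_USERS), range(N_POWER_LEVELS)))

class QNetwork(nn.Module):
    """Small MLP mapping a user-specific state to Q-values over joint actions."""
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def judged_reward(dqn_utility: float, baseline_utility: float) -> float:
    # Judgement-based reward (assumed shape): score the DQN decision against a
    # practical optimization-based solution rather than using raw utility alone.
    return 1.0 if dqn_utility >= baseline_utility else -1.0

q_net = QNetwork(STATE_DIM, len(ACTIONS))
state = torch.randn(1, STATE_DIM)              # one user-specific observation
action_idx = q_net(state).argmax(dim=1).item() # greedy joint action
beam, user, power = ACTIONS[action_idx]
```

Flattening keeps the output layer at N_BEAMS x N_USERS x N_POWER_LEVELS entries per agent, which stays tractable because each agent only observes its own user-specific state.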
Keywords