IEEE Access (Jan 2023)
DRL-Based Resource Allocation for NOMA-Enabled D2D Communications Underlay Cellular Networks
Abstract
Since the emergence of device-to-device (D2D) communications, there has been continuous demand for low-complexity resource allocation (RA) schemes suited to highly variable network environments. As a solution, we propose an RA scheme based on deep reinforcement learning (DRL) for D2D communications exploiting a cluster-wise non-orthogonal multiple access (NOMA) protocol underlaying cellular networks. The goal of RA is to allocate transmit power and channel spectrum to D2D links so as to maximize a benefit function. We analyze and formulate the outage probability of NOMA-enabled D2D links and investigate the relevant performance measures. To alleviate system overhead and computational complexity while maintaining a high benefit, we propose a sub-optimal RA scheme under a centralized multi-agent DRL framework. Each agent, corresponding to one D2D cluster, trains its own artificial neural networks in a cyclic manner with a timing offset. The proposed DRL-based RA scheme enables prompt allocation of resources to D2D links based on observations of the time-varying environment. The proposed RA scheme outperforms other schemes in terms of benefit, energy efficiency, fairness, and coordination of D2D users, and the performance gain becomes significant when the mutual interference among user equipments is severe. In a cell with a 100-meter radius and target rates for D2D and cellular links of 2 and 8 bits/s/Hz, respectively, the proposed RA scheme improves the normalized benefit, energy efficiency, fairness, and coordination of D2D users by 18%, 23%, 75%, and 80%, respectively, over a greedy scheme. The improvements in these performance measures over a random RA scheme are 152%, 164%, 87%, and 77%, respectively.
Keywords