IEEE Access (Jan 2022)
A Deep Learning-Based Transmission Scheme Using Reduced Feedback for D2D Networks
Abstract
In this study, we investigate frequency division duplex (FDD)-based overlay device-to-device (D2D) communication networks. In overlay D2D networks, D2D communication uses a dedicated radio resource to eliminate cross-interference with cellular communication, and multiple D2D devices share the dedicated radio resource to resolve the scarcity of radio spectrum, thereby causing co-channel interference, one of the most challenging problems in D2D communication networks. Many radio resource management problems for D2D communication networks cannot be solved by conventional optimization methods because they are formulated as non-convex optimization problems. Recently, various studies have relied on deep reinforcement learning (DRL) as an alternative method to maximize the performance of D2D communication networks in the presence of co-channel interference. These studies showed that DRL-based radio resource management schemes can achieve nearly optimal performance and even outperform state-of-the-art schemes based on non-convex optimization. Most DRL-based transmission schemes inevitably require feedback information from D2D receivers to build input states, especially in FDD networks where channel reciprocity between uplink and downlink does not hold. However, the effect of feedback overhead has not been well investigated in previous studies using DRL, and no study has reported on reducing the feedback overhead of DRL-based transmission schemes for FDD-based D2D networks. In this study, we propose a DRL-based transmission scheme for FDD-based D2D networks in which input states are built using reduced feedback information, thereby lowering the feedback overhead. The proposed scheme achieves the same average sum-rate as the scheme using full feedback while reducing the feedback overhead significantly.
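To illustrate the general idea of building a DRL input state from reduced receiver feedback, the sketch below shows one possible construction for a single D2D transmitter. It is a hypothetical example under assumed parameters (number of D2D pairs, feedback bit width, quantization rule, and policy-network sizes are all illustrative choices, not the design specified in the paper).

```python
# Hypothetical sketch (not the paper's exact design): a DRL input state built from
# reduced receiver feedback instead of full per-link channel-gain feedback.
import numpy as np

rng = np.random.default_rng(0)

N_PAIRS = 8          # D2D pairs sharing the dedicated overlay resource (assumed)
N_POWER_LEVELS = 4   # discrete transmit-power actions (assumed)
FEEDBACK_BITS = 3    # bits per quantized feedback report (assumed)

# Full feedback would require reporting every link gain seen at the receiver.
full_gains = rng.rayleigh(scale=1.0, size=N_PAIRS)   # index 0: desired link, rest: interferers

def reduced_feedback(gains, bits=FEEDBACK_BITS):
    """Report only a coarsely quantized SINR-like scalar instead of all link gains."""
    desired, interference = gains[0], gains[1:].sum()
    sinr = desired / (interference + 1e-3)            # 1e-3 stands in for noise power (assumed)
    levels = 2 ** bits
    return np.clip(np.round(np.log2(1.0 + sinr) * levels / 8.0), 0, levels - 1) / levels

def build_state(fb_value, last_action):
    """DRL input state: reduced feedback plus local information at the transmitter."""
    return np.array([fb_value, last_action / (N_POWER_LEVELS - 1)])

# Tiny policy network with random weights (in the actual scheme these are trained by DRL).
W1 = rng.normal(size=(2, 16))
W2 = rng.normal(size=(16, N_POWER_LEVELS))

def choose_power(state):
    hidden = np.maximum(state @ W1, 0.0)              # ReLU hidden layer
    return int(np.argmax(hidden @ W2))                # greedy action = power-level index

state = build_state(reduced_feedback(full_gains), last_action=0)
print("state:", state, "-> power level", choose_power(state))
# Feedback cost per report: FEEDBACK_BITS bits, versus N_PAIRS quantized gains for full feedback.
```

Under these assumptions, the feedback link carries a single quantized scalar per decision interval rather than the full vector of link gains, which is the kind of overhead reduction the abstract refers to.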
Keywords