IEEE Access (Jan 2024)
Enhancing Energy-Efficient Power Allocation for IoT URLLC Through Action Space Reduction in Energy Harvesting Devices
Abstract
With the rapid growth of the Internet of Things (IoT), Beyond-fifth-generation (B5G) networks must support access by a massive number of devices under strict Ultra-Reliable and Low-Latency Communication (URLLC) requirements, which impose tight delay constraints. This article focuses on minimizing co-channel interference, allocating transmit power efficiently, satisfying Quality-of-Service (QoS) requirements, and meeting the radio-frequency energy-harvesting requirement through a Power Splitting Ratio (PSR), with the goal of maximizing energy efficiency. Because the joint power-allocation and PSR problem is nonconvex, this article proposes an adaptive algorithm that reduces energy consumption while achieving strict URLLC, a QoS-aware immediate reward, and massive connectivity. The proposed Deep Reinforcement Learning with Reduced Action Space (DRL-RAS) scheme enables massive access by intelligently meeting the rate requirements of IoT devices, improving learning efficiency by maximizing the immediate reward function while guaranteeing strict URLLC. Simulation results demonstrate that DRL-RAS decreases co-channel interference through its action selection, thereby satisfying strict URLLC; the achieved energy efficiency and QoS both depend on meeting this strict URLLC level. The proposed DRL-RAS training strategy outperforms the baselines, achieving QoS satisfaction gains of 17.21% and 8.67% when the required QoS rises from 0.88 bits/s/Hz to 1 bit/s/Hz. Moreover, DRL-RAS reduces prediction error during training by updating the network weights to minimize the loss function and maximize energy efficiency.
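The action-space-reduction idea behind DRL-RAS can be illustrated with a minimal sketch: discretize the candidate (transmit power, PSR) pairs, prune every pair that violates the QoS rate constraint, and then optimize the energy-efficiency reward only over the surviving actions. All numeric parameters below (channel gain, noise power, harvesting efficiency, QoS threshold) are hypothetical values chosen for illustration and do not come from the paper; a greedy selection stands in for the learned DRL policy.

```python
import math

# Hypothetical system parameters (illustrative assumptions, not from the paper)
H = 0.8          # channel power gain
N0 = 0.05        # noise power
ETA = 0.6        # energy-harvesting conversion efficiency
QOS_MIN = 1.0    # minimum required rate (bits/s/Hz)

POWERS = [0.2, 0.4, 0.6, 0.8, 1.0]   # candidate transmit powers (W)
PSRS = [0.2, 0.4, 0.6, 0.8]          # candidate power-splitting ratios

def rate(p, rho):
    """Achievable rate when a fraction rho of received power feeds the decoder."""
    return math.log2(1.0 + rho * p * H / N0)

def energy_efficiency(p, rho):
    """Reward: rate per unit of net consumed power; harvested energy offsets cost."""
    harvested = ETA * (1.0 - rho) * p * H
    return rate(p, rho) / max(p - harvested, 1e-9)

# Action-space reduction: discard (power, PSR) pairs that miss the QoS constraint,
# so the agent never explores actions that cannot satisfy URLLC requirements.
full_space = [(p, rho) for p in POWERS for rho in PSRS]
reduced_space = [(p, rho) for (p, rho) in full_space if rate(p, rho) >= QOS_MIN]

# Greedy reward maximization over the reduced space stands in for the DRL policy.
best = max(reduced_space, key=lambda a: energy_efficiency(*a))
```

Pruning infeasible actions before learning shrinks the search space the agent must explore, which is the source of the learning-efficiency gain the abstract attributes to DRL-RAS.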
Keywords