IEEE Access (Jan 2024)
A Reinforcement Learning Congestion Control Algorithm for Smart Grid Networks
Abstract
Modern electrical systems are evolving with data communication networks, ushering in upgraded electrical infrastructures and enabling bidirectional communication between utility grids and consumers. The selection of communication technologies is crucial; wireless communications have emerged as one of the main enabling technologies due to their cost-effectiveness, scalability, and ease of deployment. Ensuring the streamlined transmission of diverse applications between residential users and utility control centers is essential for effective data delivery in smart grids. This paper proposes a congestion control mechanism tailored to smart grid applications that use unreliable transport protocols such as UDP, which, unlike TCP, lacks inherent congestion control and therefore poses a significant challenge to network performance. We exploit a reinforcement learning (RL) approach, specifically a deep Q-network (DQN), to manage congestion control in a UDP environment. The DQN model learns directly from interactions with the environment, without the need to generate a dataset, making it well suited to the complex and dynamic scenarios found in smart grid communications. Our evaluation covers two scenarios: 1) a grid-like topology, and 2) urban scenarios considering the deployment of smart meters in the cities of Montreal, Berlin, and Beijing. These evaluations provide a comprehensive examination of the proposed DQN-based congestion control approach under different conditions, demonstrating its effectiveness and adaptability. A comprehensive performance assessment in both scenarios shows improvements in packet delivery ratio, network throughput, fairness between traffic sources, packet transit time, and QoS provisioning.
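As a rough illustration of the learning loop the abstract describes, the sketch below shows a minimal DQN agent that adjusts a UDP sender's rate via epsilon-greedy action selection, a replay buffer, and a target network. The state features (loss rate, queue occupancy, throughput ratio) and the three-action set are hypothetical choices for illustration; the paper's actual state, action, and reward design may differ.

```python
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

# Hypothetical state: (loss_rate, queue_occupancy, throughput_ratio).
# Hypothetical actions: 0 = decrease UDP sending rate, 1 = hold, 2 = increase.
STATE_DIM, N_ACTIONS = 3, 3

class QNet(nn.Module):
    """Small MLP approximating Q(s, a) for each of the rate actions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, x):
        return self.net(x)

policy, target = QNet(), QNet()
target.load_state_dict(policy.state_dict())   # target net starts as a copy
opt = optim.Adam(policy.parameters(), lr=1e-3)
buffer = deque(maxlen=10_000)                 # replay buffer of (s, a, r, s') tuples
gamma, eps = 0.99, 0.1

def select_action(state):
    """Epsilon-greedy: explore randomly, otherwise pick the highest-Q action."""
    if random.random() < eps:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return policy(torch.tensor(state)).argmax().item()

def train_step(batch_size=64):
    """One gradient step on the Bellman error over a sampled minibatch."""
    if len(buffer) < batch_size:
        return
    batch = random.sample(buffer, batch_size)
    s, a, r, s2 = map(torch.tensor, zip(*batch))
    q = policy(s.float()).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        q2 = target(s2.float()).max(1).values  # bootstrap from the target network
    loss = nn.functional.mse_loss(q, r.float() + gamma * q2)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In use, the agent would observe the network state after each monitoring interval, apply the chosen rate action to the UDP sender, store the resulting transition in `buffer`, call `train_step()`, and periodically copy `policy`'s weights into `target`. Learning online in this way is what removes the need for a pre-generated dataset.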
Keywords