IEEE Access (Jan 2024)
Using Graph Neural Networks in Reinforcement Learning With Application to Monte Carlo Simulations in Power System Reliability Analysis
Abstract
This paper presents a novel method for power system reliability studies that combines graph neural networks with reinforcement learning. Monte Carlo methods are the backbone of probabilistic power system reliability analyses. The authors' recent work indicates that optimal power flow solvers could potentially be replaced by the policies of deep reinforcement learning agents, yielding significant speedups of Monte Carlo simulations while retaining close to optimal accuracy. However, a limitation of that reinforcement learning approach was that the training of the agent was tightly coupled to the specific case being analyzed, so the agent could not be used as-is on new, unseen cases. In this paper, we seek to overcome this limitation by representing the states and actions of the power system reliability environment as features on a graph, whose adjacency matrix can vary from time step to time step. By combining this representation with a message-passing graph neural network-based reinforcement learning agent, we are able to train an agent whose model is independent of the power system grid structure. For the actor part of this architecture, we have implemented both a deterministic agent based on a variant of the Twin Delayed DDPG (TD3) algorithm and a stochastic agent with similarities to the Soft Actor-Critic (SAC) algorithm. We show that the agent can solve small extensions of a test case without having seen the new parts of the power system during training. In all of our reliability Monte Carlo simulations using this graph neural network agent, the simulation time is competitive with that of the optimal power flow-based approach, while still retaining close to optimal accuracy.
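The grid-structure independence described above rests on a standard property of message passing: the layer's parameters depend only on feature dimensions, not on the number of buses, so the same trained weights can be applied to a grid with a different topology. The following minimal sketch illustrates this property with a single NumPy message-passing layer; the class, names, and dimensions are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

class MessagePassingLayer:
    """One message-passing layer with weights shared across all nodes.

    Illustrative sketch: parameter shapes depend only on the feature
    dimensions (d_in, d_out), never on the number of nodes, so the same
    layer evaluates grids of any size or topology.
    """

    def __init__(self, d_in, d_out):
        self.W_self = rng.normal(0.0, 0.1, (d_in, d_out))  # transform of a node's own features
        self.W_msg = rng.normal(0.0, 0.1, (d_in, d_out))   # transform of aggregated neighbor messages

    def __call__(self, X, A):
        # X: (n_nodes, d_in) node features; A: (n_nodes, n_nodes) adjacency,
        # which may change from one time step to the next.
        deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)  # avoid divide-by-zero for isolated nodes
        msgs = (A @ X) / deg                                 # mean of neighbor features
        return np.tanh(X @ self.W_self + msgs @ self.W_msg)

layer = MessagePassingLayer(d_in=4, d_out=8)

# The same parameters evaluate grids with different numbers of buses.
X5 = rng.normal(size=(5, 4))
A5 = (rng.random((5, 5)) < 0.4).astype(float)
X9 = rng.normal(size=(9, 4))
A9 = (rng.random((9, 9)) < 0.4).astype(float)

print(layer(X5, A5).shape)  # (5, 8)
print(layer(X9, A9).shape)  # (9, 8)
```

In a full actor-critic agent along the lines sketched in the abstract, several such layers would be stacked and followed by per-node or pooled readouts to produce actions and value estimates, with the adjacency matrix supplied fresh at every time step.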
Keywords