Results in Engineering (Jun 2024)
Self-healing radial distribution network reconfiguration based on deep reinforcement learning
Abstract
Distribution network reconfiguration (DNR) has long been used to minimize losses under normal conditions and to restore out-of-service areas under fault conditions. However, the integration of renewable resources into power systems has made operating strategies more challenging, requiring fast and responsive approaches to handle abrupt interruptions. Traditional DNR methods rely on mathematical programming or on heuristic and meta-heuristic algorithms that cannot generalize to unseen generation and load profiles and may require significant computational time to converge, posing an overrun risk for real-time execution. Recently, deep reinforcement learning (DRL) has been shown to optimize DNR effectively under normal conditions, with calculation times of only a few milliseconds. However, previously proposed dynamic DNR approaches cannot self-heal and restore service under fault conditions, where timely restoration is critical for customer satisfaction. To address this gap, we propose a restorative dynamic distribution network reconfiguration (RDDNR) framework that allows continuous reconfiguration after faults, enabling fast restoration of out-of-service areas. The proposed framework was tested on two balanced test feeders and demonstrated competitive performance regardless of the operating state. By leveraging RDDNR, distribution system operators can restore service in a timely manner, improving overall grid resilience and customer satisfaction.
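To make the problem setting concrete, the sketch below illustrates the kind of radiality constraint and loss/restoration reward that a DRL formulation of reconfiguration typically encodes. The 6-bus toy feeder, the crude loss proxy, the reward weights, and the brute-force search over switch states (standing in for a learned policy) are all illustrative assumptions, not the test feeders, reward function, or agent described in the paper.

```python
# Minimal, hypothetical sketch of the decision problem behind DRL-based
# restorative reconfiguration: choose switch states that keep the network
# radial and energized, under normal and post-fault conditions.
from itertools import product

# Toy feeder: buses 0..5, bus 0 is the substation. Each line has a switch.
# (from_bus, to_bus, resistance_pu) -- illustrative values only.
LINES = [(0, 1, 0.01), (1, 2, 0.02), (2, 3, 0.02),
         (0, 4, 0.01), (4, 5, 0.02), (3, 5, 0.03), (1, 4, 0.03)]

def is_radial(closed, n_bus=6, faulted=frozenset()):
    """A configuration is radial if the closed, non-faulted lines form a
    spanning tree of the buses (all buses connected, no loops)."""
    parent = list(range(n_bus))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    edges = 0
    for i, (f, t, _) in enumerate(LINES):
        if closed[i] and i not in faulted:
            rf, rt = find(f), find(t)
            if rf == rt:
                return False          # closing this switch would create a loop
            parent[rf] = rt
            edges += 1
    return edges == n_bus - 1         # spanning tree => every bus is energized

def reward(closed, faulted):
    """Negative cost: penalize de-energized/non-radial states heavily,
    otherwise penalize a crude proxy for I^2R losses."""
    if not is_radial(closed, faulted=faulted):
        return -10.0
    losses = sum(r for i, (_, _, r) in enumerate(LINES)
                 if closed[i] and i not in faulted)
    return -losses

def best_switching(faulted):
    """Brute-force 'oracle' over the 2^7 switch states; a DRL agent would
    instead learn a policy mapping (fault location, loads) -> switch actions
    so the decision takes milliseconds online."""
    best, best_r = None, float("-inf")
    for closed in product([0, 1], repeat=len(LINES)):
        r = reward(closed, faulted)
        if r > best_r:
            best, best_r = closed, r
    return best, best_r

if __name__ == "__main__":
    normal, r_n = best_switching(faulted=frozenset())
    post_fault, r_f = best_switching(faulted=frozenset({0}))  # line 0 trips
    print("normal configuration :", normal, "reward", round(r_n, 3))
    print("restored configuration:", post_fault, "reward", round(r_f, 3))
```

In this framing, the fault set and switch states are the environment state, the switching decision is the action, and the reward trades off losses against unserved load; the RDDNR framework in the paper replaces the exhaustive search above with a trained DRL policy so that restoration decisions remain fast at execution time.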