International Journal of Electrical Power & Energy Systems (Jun 2025)

Efficient optimal power flow learning: A deep reinforcement learning with physics-driven critic model

  • Ahmed Sayed,
  • Khaled Al Jaafari,
  • Xian Zhang,
  • Hatem Zeineldin,
  • Ahmed Al-Durra,
  • Guibin Wang,
  • Ehab Elsaadany

DOI: https://doi.org/10.1016/j.ijepes.2025.110621
Journal volume & issue: Vol. 167, p. 110621

Abstract

The transition to decarbonized energy systems presents significant operational challenges due to increased uncertainties and complex dynamics. Deep reinforcement learning (DRL) has emerged as a powerful tool for optimizing power system operations. However, most existing DRL approaches rely on approximate, data-driven critic networks, which require numerous risky interactions to explore the environment and are prone to estimation errors. To address these limitations, this paper proposes an efficient DRL algorithm with a physics-driven critic model, namely a differentiable holomorphic embedding load flow model (D-HELM). This approach enables accurate policy gradient computation through a differentiable loss function evaluated on the system states induced by realized uncertainties, simplifying both the replay buffer and the learning process. By leveraging continuation power flow principles, D-HELM ensures operable, feasible solutions while accelerating gradient steps through simple matrix operations. Simulation results across various test systems demonstrate the computational superiority of the proposed approach, which outperforms state-of-the-art DRL algorithms during training and model-based solvers in online operation. This work represents a potential breakthrough in real-time energy system operations, with extensions to security-constrained decision-making, voltage control, unit commitment, and multi-energy systems.
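The core idea lends itself to a short sketch: because the critic is a differentiable physics model rather than a learned value network, the policy gradient can be obtained by backpropagating an OPF-style loss directly through that model. The following is a minimal, illustrative PyTorch sketch under that assumption; the toy quadratic-cost-plus-power-balance surrogate, the network sizes, and all names are hypothetical choices for exposition and do not reproduce the paper's D-HELM model.

```python
import torch
import torch.nn as nn

class Policy(nn.Module):
    """Maps observed uncertainty realizations (e.g., loads) to generator set-points."""
    def __init__(self, n_obs, n_gen):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_obs, 64), nn.Tanh(),
            nn.Linear(64, n_gen), nn.Sigmoid(),  # normalized dispatch in [0, 1]
        )

    def forward(self, obs):
        return self.net(obs)

def physics_critic_loss(dispatch, demand, cost_coeff, penalty=100.0):
    """Differentiable surrogate for the OPF objective: quadratic generation cost
    plus a penalty on the power-balance residual. A full implementation would
    instead evaluate network states via a differentiable load-flow model."""
    gen_cost = (cost_coeff * dispatch ** 2).sum(dim=-1)
    balance_residual = dispatch.sum(dim=-1) - demand.sum(dim=-1)
    return (gen_cost + penalty * balance_residual ** 2).mean()

n_obs, n_gen = 8, 4
policy = Policy(n_obs, n_gen)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
cost_coeff = torch.rand(n_gen)  # hypothetical per-generator cost coefficients

for step in range(200):
    demand = torch.rand(32, n_obs)       # sampled uncertainty realizations
    dispatch = policy(demand) * 2.0      # scale to a toy plant capacity
    loss = physics_critic_loss(dispatch, demand, cost_coeff)
    opt.zero_grad()
    loss.backward()                      # policy gradient via the physics loss
    opt.step()
```

Note the structural simplification the abstract alludes to: since the loss is computed directly on the states implied by realized uncertainties, no learned critic targets or risky exploratory interactions are needed, and the replay buffer reduces to stored uncertainty samples.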

Keywords