International Journal of Electrical Power & Energy Systems (Mar 2025)
Reactive power optimization via deep transfer reinforcement learning for efficient adaptation to multiple scenarios
Abstract
Fast reactive power optimization policy-making across diverse operating scenarios is an important part of power system dispatch. Existing reinforcement learning algorithms reduce the computational burden of optimization but require inefficient model retraining for each new operating scenario. To address these problems, this paper proposes a data-efficient transfer reinforcement learning framework for reactive power optimization. The proposed framework transfers knowledge in two phases: generic state representation learning in the original scenario and specific dynamics learning in multiple target scenarios. A Q-network structure that separately extracts state and action dynamics is designed to learn generalizable state representations and enable generic knowledge transfer. Supervised learning is applied in the specific dynamics learning phase to extract scenario-unique dynamics from offline data, which improves data efficiency and speeds up knowledge transfer. Finally, the proposed framework is tested on the IEEE 39-bus system and the realistic Guangdong provincial power grid, demonstrating its effectiveness and reliability.
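To make the described Q-network structure concrete, the sketch below shows one plausible reading of "separately extracts state and action dynamics": a shared state encoder (the transferable representation) feeding two heads, a state-value head and a per-action head, combined dueling-style. All layer names, sizes, and the dueling combination are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_qnet(state_dim, action_dim, hidden=32):
    """Initialise a two-branch Q-network (illustrative, not the paper's
    exact design): a shared state encoder that can be transferred across
    scenarios, plus separate state-value and action heads."""
    return {
        "W_enc": rng.standard_normal((state_dim, hidden)) * 0.1,   # generic state encoder
        "W_v":   rng.standard_normal((hidden, 1)) * 0.1,           # state-value head
        "W_a":   rng.standard_normal((hidden, action_dim)) * 0.1,  # per-action head
    }

def q_values(net, s):
    """Dueling-style combination: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a).
    Only the encoder output is scenario-generic; the heads capture
    scenario-specific dynamics and can be refit from offline data."""
    h = np.tanh(s @ net["W_enc"])   # generalizable state representation
    v = h @ net["W_v"]              # scalar state value
    a = h @ net["W_a"]              # per-action advantages
    return v + a - a.mean(axis=-1, keepdims=True)

net = init_qnet(state_dim=4, action_dim=3)
s = rng.standard_normal(4)
q = q_values(net, s)
print(q.shape)  # one Q-value per discrete reactive power action
```

Under this split, transferring to a target scenario would amount to freezing (or lightly fine-tuning) `W_enc` while refitting the heads on offline target-scenario data, which is consistent with the abstract's two-phase description.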