International Journal of Electrical Power & Energy Systems (Oct 2024)

Automatic voltage control considering demand response: Approximatively completed observed Markov decision process-based reinforcement learning scheme

  • Yaru Gu,
  • Xueliang Huang

Journal volume & issue
Vol. 161
Art. no. 110156

Abstract


To fully utilize the voltage regulation capacity of flexible loads and distributed generations (DGs), we propose a novel Approximatively Completed Observed Markov Decision Process-based (ACOMDP-based) Reinforcement Learning (RL) scheme, termed ACMRL, for a multi-objective Automatic Voltage Control (AVC) problem considering Differential Increment Incentive Mechanism (DIIM)-based Incentive-Based Demand Response (IBDR). First, we propose a DIIM to motivate high-flexibility consumers to realize their maximum potential in real-time voltage control while ensuring the best economy. Second, we characterize the multi-objective AVC problem as an ACOMDP model, transformed from a Partially Observable Markov Decision Process (POMDP) model by introducing a novel hidden system state vector that incorporates the belief state and a high-confidence probability vector. The belief state and the high-confidence probability vector describe the probability distribution extracted from historical observed states, capturing both the precise state and the uncertainty in the state-update process. The ACOMDP block is then fed into the RL block, which adopts an Asynchronous Advantage Actor-Critic algorithm with a modified underlying network architecture (MA3C), embedded with a Shared Modular Policies (SMP) module. The MA3C-based RL block, characterized by enhanced communication efficiency, enables expedited generation of optimal control actions even in the face of substantial uncertainty. Case studies are conducted on a practical district in Suzhou, China, and simulation results validate the superior performance of the proposed methodology.
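The abstract's transformation from a POMDP to a belief-state representation rests on the standard discrete Bayes filter, in which the belief is a probability distribution over hidden states recursively updated from actions and observations. The sketch below illustrates that generic update (not the authors' ACOMDP construction, whose details are in the full paper); the function name, the toy transition/observation matrices, and the use of the belief maximum as a simple confidence proxy are all illustrative assumptions.

```python
import numpy as np

def belief_update(belief, T, O, action, obs):
    """Discrete Bayes filter: b'(s') ∝ O[obs, s'] * Σ_s T[action, s, s'] * b(s).

    belief : (S,)     prior distribution over hidden states
    T      : (A, S, S) transition probabilities T[a, s, s']
    O      : (Z, S)   observation likelihoods O[z, s']
    """
    predicted = belief @ T[action]      # prediction step: Σ_s b(s) T[a, s, s']
    posterior = O[obs] * predicted      # correction step: weight by likelihood
    return posterior / posterior.sum()  # normalize to a valid distribution

# Toy two-state example with one action and two observations (hypothetical numbers).
T = np.array([[[0.9, 0.1],
               [0.2, 0.8]]])
O = np.array([[0.8, 0.3],
              [0.2, 0.7]])
b = np.array([0.5, 0.5])                # uninformative prior
b = belief_update(b, T, O, action=0, obs=0)
confidence = b.max()                    # a simple proxy for state confidence
```

After one update the belief concentrates on the state most consistent with the observation; a scalar such as `confidence` is one plain way to quantify how sharply the filter has resolved the hidden state, in the spirit of the high-confidence probability vector described above.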
