Frontiers in Energy Research (Sep 2023)

Risk-averse stochastic dynamic power dispatch based on deep reinforcement learning with risk-oriented Graph-GAN sampling

  • Wenqi Huang,
  • Zhen Dai,
  • Jiaxuan Hou,
  • Lingyu Liang,
  • Yiping Chen,
  • Zhiwei Chen,
  • Zhenning Pan

DOI
https://doi.org/10.3389/fenrg.2023.1272216
Journal volume & issue
Vol. 11

Abstract

The increasing penetration of renewable energy sources (RES) brings volatile stochasticity, which significantly challenges the optimal dispatch of power systems. This paper aims to develop a cost-effective and robust policy for the stochastic dynamic optimization of power systems, one that improves economy while avoiding the risk of high costs in critical low-probability scenarios. However, it is hard for existing risk-neutral methods to incorporate a risk measure, since most samples are normal. To this end, a novel risk-averse policy learning approach based on deep reinforcement learning with risk-oriented sampling is proposed. First, a generative adversarial network (GAN) with a graph convolutional neural network (GCN) is proposed to learn from historical data and achieve risk-oriented sampling. Specifically, the system state is modelled as graph data, and the GCN is employed to capture the underlying correlation of the uncertainty corresponding to the system topology. Risk knowledge is then embedded to encourage the sampling of more critical scenarios while remaining aligned with the historical data distribution. Second, a modified deep reinforcement learning (DRL) method with a risk measure under the soft actor-critic framework is proposed to learn the optimal dispatch policy from the sampled data. Compared with traditional deep reinforcement learning, which is risk-neutral, the proposed method is more robust and adaptable to uncertainties. Comparative simulations verify the effectiveness of the proposed method.
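The abstract gives no implementation details, so the two sketches below are only rough illustrations of the kind of components it describes, not the authors' code. Both are written in PyTorch purely as an assumed toolkit, and every name in them (GCNLayer, GraphGenerator, generator_loss, risk_score, lam) is hypothetical. The first sketch shows a topology-aware GAN generator in the spirit of the paper: noise attached to each bus is propagated through GCN layers built on the grid adjacency, and the generator loss trades off realism (the adversarial term) against a risk bonus that rewards scenarios a dispatch model scores as costly.

import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    # One propagation step h' = relu(a_hat @ h @ W), where a_hat is the
    # normalized adjacency matrix of the grid topology.
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, a_hat, h):
        return torch.relu(a_hat @ self.lin(h))

class GraphGenerator(nn.Module):
    # Maps per-bus noise to a per-bus uncertainty scenario (e.g. RES output
    # and load), so generated samples respect the system topology.
    def __init__(self, noise_dim, hidden_dim, out_dim):
        super().__init__()
        self.g1 = GCNLayer(noise_dim, hidden_dim)
        self.g2 = GCNLayer(hidden_dim, hidden_dim)
        self.head = nn.Linear(hidden_dim, out_dim)

    def forward(self, a_hat, z):
        return self.head(self.g2(a_hat, self.g1(a_hat, z)))

def generator_loss(disc_logits, risk_score, lam=0.1):
    # Non-saturating GAN loss plus a hypothetical risk bonus: lam weights
    # how strongly the generator is pushed toward high-cost scenarios,
    # while the adversarial term keeps samples near the historical data.
    adv = -torch.log(torch.sigmoid(disc_logits) + 1e-8).mean()
    return adv - lam * risk_score.mean()

The abstract also does not name the risk measure used in the DRL stage; conditional value-at-risk (CVaR) is a common choice in risk-averse reinforcement learning and is assumed here for illustration only. Continuing with the imports above, and with again hypothetical names (QuantileCritic, cvar, policy_loss), the sketch below uses a quantile critic to estimate the return distribution, and a soft actor-critic-style policy loss that maximizes the entropy-regularized CVaR of the worst quantiles instead of the mean Q-value, which is what makes the learned dispatch policy averse to rare high-cost scenarios.

class QuantileCritic(nn.Module):
    # Estimates n_quantiles quantiles of the (negative-cost) return
    # distribution for a state-action pair.
    def __init__(self, state_dim, act_dim, n_quantiles=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + act_dim, 256), nn.ReLU(),
            nn.Linear(256, n_quantiles),
        )

    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

def cvar(quantiles, alpha=0.2):
    # CVaR_alpha: mean of the worst alpha-fraction of quantile estimates
    # (the lowest returns, i.e. the highest dispatch costs).
    k = max(1, int(alpha * quantiles.shape[-1]))
    worst, _ = torch.topk(quantiles, k, dim=-1, largest=False)
    return worst.mean(dim=-1)

def policy_loss(critic, s, a_pi, log_pi, ent_coef=0.2, alpha=0.2):
    # SAC-style actor loss with the mean Q replaced by CVaR; a_pi and
    # log_pi come from the reparameterized policy as in standard SAC.
    q_risk = cvar(critic(s, a_pi), alpha)
    return (ent_coef * log_pi - q_risk).mean()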

Keywords