IEEE Access (Jan 2020)

Optimizing the Post-Disaster Control of Islanded Microgrid: A Multi-Agent Deep Reinforcement Learning Approach

  • Huanhuan Nie,
  • Ying Chen,
  • Yue Xia,
  • Shaowei Huang,
  • Bingqian Liu

DOI
https://doi.org/10.1109/ACCESS.2020.3018142
Journal volume & issue
Vol. 8
pp. 153455–153469

Abstract

Extreme disasters may interrupt the power supply to the distribution system (DS), forcing it to operate in island mode as an islanded microgrid (MG). To improve the post-disaster resilience of the DS and to supply as many loads as possible, for as long as possible, with limited generation resources, this paper proposes a multi-agent deep reinforcement learning (DRL) method that realizes dual control of the source and load sides of the MG. The resilience-improvement problem is converted into a sequential decision-making problem whose objective is to maximize the cumulative MG utility value over the power outage duration. A multi-agent DRL model is proposed to solve this sequential decision-making problem, and a dual control policy, comprising energy storage management and a load-shedding strategy, is put forward to maximize the utility value of the MG. A reinforcement learning (RL) environment for the islanded MG, built on OpenAI Gym and OpenDSS, is constructed as the simulator; it has a general interface compatible with, and publishable to, OpenAI Gym. Numerical simulations are performed for an MG equipped with wind turbines, diesel generators, and storage devices to validate the effectiveness of the proposed method. The influences of available generation resources and power outage duration on the control policy are discussed, demonstrating the strong adaptability of the proposed method under different conditions.
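To make the abstract's setup concrete, the sketch below shows what an islanded-MG environment following the OpenAI Gym step()/reset() convention might look like. This is a hypothetical simplification, not the paper's OpenDSS-backed code: the state (storage state of charge, wind output, remaining outage time), the joint source/load action (storage dispatch plus a load-shedding fraction), and the utility-style reward weights are all illustrative assumptions.

```python
import numpy as np

class IslandedMGEnv:
    """Hypothetical islanded-microgrid simulator following the OpenAI Gym
    step()/reset() interface convention (a sketch of the setup described
    in the abstract, not the paper's actual OpenDSS environment).

    Observation: [state of charge, wind output, normalized remaining time].
    Action:      [storage dispatch in -1..1, load-shed fraction in 0..1],
                 i.e. dual control of the source and load sides.
    Reward:      utility of served load minus a shedding penalty (assumed form).
    """

    def __init__(self, outage_steps=24, storage_capacity=1.0, seed=0):
        self.outage_steps = outage_steps
        self.capacity = storage_capacity
        self.rng = np.random.default_rng(seed)
        self.reset()

    def reset(self):
        self.t = 0
        self.soc = 0.5 * self.capacity   # start at half charge
        self.wind = 0.3                  # normalized wind output
        return self._obs()

    def _obs(self):
        remaining = 1.0 - self.t / self.outage_steps
        return np.array([self.soc / self.capacity, self.wind, remaining],
                        dtype=np.float32)

    def step(self, action):
        dispatch = float(np.clip(action[0], -1.0, 1.0))  # >0 discharges storage
        shed = float(np.clip(action[1], 0.0, 1.0))       # fraction of load shed
        # Storage update, clipped to physical limits (assumed rate constant).
        self.soc = float(np.clip(self.soc - 0.1 * dispatch, 0.0, self.capacity))
        # Utility-style reward: reward served load, penalize shedding.
        reward = (1.0 - shed) - 0.5 * shed
        # Stochastic wind variation between steps.
        self.wind = float(np.clip(self.wind + self.rng.uniform(-0.05, 0.05),
                                  0.0, 1.0))
        self.t += 1
        done = self.t >= self.outage_steps  # episode ends when outage ends
        return self._obs(), reward, done, {}
```

In the paper's actual framework, the power-flow feasibility of each action would be checked by OpenDSS rather than by the toy storage update above, and the multi-agent DRL controllers would learn the dispatch and shedding policy over this interface.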

Keywords