IET Smart Grid (Feb 2021)

The challenge of controlling microgrids in the presence of rare events with deep reinforcement learning

  • Tanguy Levent,
  • Philippe Preux,
  • Gonzague Henri,
  • Réda Alami,
  • Philippe Cordier,
  • Yvan Bonnassieux

DOI
https://doi.org/10.1049/stg2.12003
Journal volume & issue
Vol. 4, no. 1
pp. 15 – 28

Abstract


The increased penetration of renewable energies and the need to decarbonise the grid pose many challenges. Microgrids, power grids that can operate independently from the main system, are seen as a promising solution. They range from a small building to a neighbourhood or a village. As they co-locate generation, storage and consumption, microgrids are often built with renewable energies. At the same time, because they can be disconnected from the main grid, they can be more resilient and less dependent on central generation. Due to their diversity and distributed nature, advanced metering and control will be necessary to maximise their potential. This paper presents a reinforcement learning algorithm to tackle the energy management of an off-grid microgrid, represented as a Markov Decision Process. The main objective of the proposed algorithm is to minimise the global operating cost. By nature, rare events occur in physical systems, and one of the main contributions of this paper is to demonstrate how to train agents in their presence: merging the combined experience replay method with a novel method called 'Memory Counter' prevents the agent from getting stuck during its learning phase. Compared to baselines, an extended version of the double deep Q-network that incorporates a priority list of actions into its decision-making strategy significantly lowers the operating cost. Experiments are conducted using two years of real-world data from École Polytechnique in France.
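The combined experience replay (CER) method mentioned in the abstract guarantees that every sampled mini-batch contains the most recent transition, so fresh experience (including a just-observed rare event) is never drowned out by old data. The sketch below illustrates the general CER idea only; the class name, buffer size, and transition format are illustrative assumptions, not the paper's actual implementation.

```python
import random
from collections import deque


class CombinedReplayBuffer:
    """Minimal sketch of combined experience replay (CER).

    Each sampled mini-batch always includes the newest transition,
    alongside uniformly sampled older ones. All names and defaults
    here are hypothetical, for illustration only.
    """

    def __init__(self, capacity=10_000):
        # Oldest transitions are discarded once capacity is reached.
        self.buffer = deque(maxlen=capacity)

    def push(self, transition):
        # transition: (state, action, reward, next_state, done)
        self.buffer.append(transition)

    def sample(self, batch_size):
        # Draw batch_size - 1 transitions uniformly from the older
        # experience, then append the most recent transition so it
        # is guaranteed to be part of every update.
        older = list(self.buffer)[:-1]
        k = min(batch_size - 1, len(older))
        batch = random.sample(older, k)
        batch.append(self.buffer[-1])
        return batch
```

A usage example: after pushing transitions during an episode, `sample(8)` returns a batch of eight transitions whose last element is always the one just observed.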

Keywords