Energies (Dec 2019)

Multi-Agent Reinforcement Learning Approach for Residential Microgrid Energy Scheduling

  • Xiaohan Fang,
  • Jinkuan Wang,
  • Guanru Song,
  • Yinghua Han,
  • Qiang Zhao,
  • Zhiao Cao

DOI
https://doi.org/10.3390/en13010123
Journal volume & issue
Vol. 13, no. 1
p. 123

Abstract

The residential microgrid is widely considered a new paradigm for home energy management systems. The complexity of Microgrid Energy Scheduling (MES) is increasing with the integration of Electric Vehicles (EVs) and Renewable Generations (RGs). Moreover, it is challenging to determine optimal scheduling strategies that guarantee the efficiency of the microgrid market while balancing the benefits of all market participants. In this paper, a Multi-Agent Reinforcement Learning (MARL) approach for residential MES is proposed to promote the autonomy and fairness of microgrid market operation. First, a multi-agent based residential microgrid model including Vehicle-to-Grid (V2G) and RGs is constructed and an auction-based microgrid market is built. Then, in contrast to Single-Agent Reinforcement Learning (SARL), MARL enables distributed autonomous learning for each agent and an equilibrium among all agents’ benefits; accordingly, we formulate an equilibrium-based MARL framework according to each participant’s market orientation. Finally, to guarantee the fairness and privacy of the MARL process, we propose an improved optimal Equilibrium Selection-MARL (ES-MARL) algorithm based on two mechanisms, private negotiation and maximum average reward. Simulation results demonstrate that the overall performance and efficiency of the proposed MARL approach are superior to those of SARL. In addition, it is verified that the improved ES-MARL achieves a higher average profit while balancing all agents.
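
As a rough illustration of the equilibrium-based MARL idea summarized above (not the authors’ actual ES-MARL implementation), the sketch below shows independent Q-learning agents that, at each step, jointly select the action profile maximizing the average of their Q-values, a simplified stand-in for the maximum-average-reward equilibrium selection mentioned in the abstract. The toy market environment, the number of agents, the action set, and all parameters are hypothetical assumptions for illustration only.

```python
# Minimal sketch: multi-agent Q-learning with "maximum average reward"
# equilibrium selection over joint actions. Toy environment; not the paper's model.
import itertools
import random
from collections import defaultdict

N_AGENTS = 3                      # e.g., household, EV owner, RG owner (illustrative)
ACTIONS = [0, 1, 2]               # abstract bid levels per agent (hypothetical)
JOINT_ACTIONS = list(itertools.product(ACTIONS, repeat=N_AGENTS))
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

# Each agent keeps its own Q-table over (state, joint_action).
Q = [defaultdict(float) for _ in range(N_AGENTS)]

def toy_market_step(state, joint_action):
    """Hypothetical auction step: rewards favor bids that keep the market balanced."""
    total = sum(joint_action)
    rewards = [a - 0.1 * (total - N_AGENTS) ** 2 for a in joint_action]
    next_state = total % 4        # coarse stand-in for the market state
    return next_state, rewards

def greedy_joint_action(state):
    """Equilibrium selection: joint action with the highest average Q across agents."""
    return max(JOINT_ACTIONS,
               key=lambda ja: sum(Q[i][(state, ja)] for i in range(N_AGENTS)) / N_AGENTS)

def select_joint_action(state):
    """Epsilon-greedy exploration around the selected equilibrium."""
    if random.random() < EPSILON:
        return random.choice(JOINT_ACTIONS)
    return greedy_joint_action(state)

state = 0
for _ in range(5000):
    joint_action = select_joint_action(state)
    next_state, rewards = toy_market_step(state, joint_action)
    best_next = greedy_joint_action(next_state)
    for i in range(N_AGENTS):
        target = rewards[i] + GAMMA * Q[i][(next_state, best_next)]
        Q[i][(state, joint_action)] += ALPHA * (target - Q[i][(state, joint_action)])
    state = next_state

print("Selected joint action in state 0 after training:", greedy_joint_action(0))
```

In this simplified form, all agents share the selected joint action, whereas the paper’s private-negotiation mechanism is intended to reach such an equilibrium without agents disclosing their full reward information.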

Keywords