IEEE Access (Jan 2024)

Predictive Energy Management for Microgrid Using Multi-Agent Deep Deterministic Policy Gradient With Random Sampling

  • Niphon Kaewdornhan,
  • Rongrit Chatthaworn

DOI
https://doi.org/10.1109/ACCESS.2024.3416706
Journal volume & issue
Vol. 12
pp. 95071–95090

Abstract

In a MicroGrid (MG) equipped with a Battery Energy Storage System (BESS), the Energy Management System (EMS) plays a crucial role in predictively controlling BESS operations for optimal power flow under uncertainties from renewable energy resources and heavy loads, such as solar photovoltaic systems and electric vehicles, respectively. State-of-the-art EMS designs integrate Deep Reinforcement Learning (DRL) for EMS development with Probabilistic Power Flow (PPF) to prevent violations of power system constraints while accounting for all uncertainties. However, using PPF to handle uncertainties while training a single-agent DRL yields a solution that is optimal across all uncertain scenarios on average, but not the best solution for each individual scenario. Moreover, a single-agent DRL performs poorly in the predictive control of BESS operations. To address these challenges, a multi-agent DRL based on the Deep Deterministic Policy Gradient (DDPG) is proposed. This method divides the roles among agents so that each predicts 24-hour-ahead actions for BESS control based on the MG behavior, which changes every hour over the 24-hour horizon. Furthermore, MG parameters are randomly sampled to retain MG uncertainties instead of relying on PPF for uncertainty mitigation. Consequently, the multi-agent DDPG with random sampling can learn directly from the MG environment and provide the best solution for each scenario. Simulation results demonstrate that the proposed method reduces training computational time by 92.84%, increases the summed mean of the 24-hour-ahead reward by 1.50% to 28.37%, and achieves a 9.22% lower mean daily total related cost compared with the state-of-the-art method.
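The random-sampling idea in the abstract can be illustrated with a minimal sketch (not the authors' code): instead of collapsing uncertainty with PPF, each training episode draws a fresh random sample of the uncertain MG parameters, so the per-hour agents learn directly from varied scenarios. The distributions, scales, and parameter names below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mg_scenario(hours=24):
    """Draw one 24-hour scenario of uncertain MG parameters.

    The daylight-shaped PV profile and the Gaussian EV load are
    hypothetical stand-ins for the paper's uncertainty models.
    """
    daylight = np.clip(np.sin(np.linspace(0.0, np.pi, hours)), 0.0, None)
    pv_kw = daylight * rng.uniform(0.6, 1.0) * 100.0        # PV output (kW), randomly scaled
    ev_load_kw = rng.normal(30.0, 8.0, size=hours).clip(min=0.0)  # EV charging load (kW)
    return pv_kw, ev_load_kw

# Multi-agent role split: one agent per hour of the day, each responsible for
# predicting the 24-hour-ahead BESS action from its own starting hour, so the
# hourly-changing MG behavior is covered without retraining a single agent.
n_agents = 24
episode_scenarios = [sample_mg_scenario() for _ in range(n_agents)]
```

Each DDPG agent would then be trained on scenarios drawn this way, so the ensemble sees the spread of uncertainty directly rather than a PPF-aggregated summary of it.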

Keywords