IEEE Access (Jan 2025)

Energy Management in Microgrids Using Model-Free Deep Reinforcement Learning Approach

  • Odia A. Talab
  • Isa Avci

DOI
https://doi.org/10.1109/ACCESS.2025.3525843
Journal volume & issue
Vol. 13
pp. 5871–5891

Abstract

Electric power systems are undergoing rapid modernization driven by advances in smart-grid technologies, and microgrids (MGs) play a crucial role in integrating renewable energy sources (RESs), such as wind and solar energy, into existing grids. MGs offer a flexible and efficient framework for accommodating dispersed energy resources. However, the intermittent nature of renewable sources, coupled with the rising demand from electric vehicles (EVs) and fast-charging stations (FCSs), poses significant challenges to the stability and efficiency of MG operations. These challenges stem from uncertainty in both energy generation and demand patterns, making efficient energy management in MGs a complex task. This study introduces a novel model-free strategy for real-time energy management in MGs, aimed at addressing uncertainty without the need for traditional uncertainty modeling techniques. Unlike conventional methods, the proposed approach enhances MG performance by minimizing power losses and operational costs. The problem is formulated as a Markov Decision Process (MDP) with well-defined objectives. To optimize decision-making, an actor-critic-based Deep Deterministic Policy Gradient (DDPG) algorithm is developed, leveraging reinforcement learning (RL) to adapt dynamically to changing system conditions. Comprehensive numerical simulations demonstrate the effectiveness of the proposed strategy. The results show a total cost of 51.8770 €ct/kWh, a reduction of 3.19% compared to the Dueling Deep Q Network (Dueling DQN) and 4% compared to the Deep Q Network (DQN). This highlights the robustness and scalability of the proposed model-free approach for modern MG energy management.
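To make the actor-critic DDPG formulation concrete, the following is a minimal PyTorch sketch of a generic DDPG update of the kind the abstract describes. It is an illustrative reconstruction, not the authors' implementation: the state variables (load, PV, wind, price, storage state of charge), network sizes, learning rates, and the ddpg_update/soft_update helpers are all assumptions made for illustration.

import copy
import torch
import torch.nn as nn

class Actor(nn.Module):
    # Assumed policy network: maps an MG state (e.g., load, PV, wind,
    # price, battery SoC) to a continuous dispatch action in [-1, 1].
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, action_dim), nn.Tanh(),
        )

    def forward(self, state):
        return self.net(state)

class Critic(nn.Module):
    # Assumed value network: estimates Q(s, a) for a state-action pair.
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def soft_update(target, source, tau=0.005):
    # Polyak-average the online parameters into the target network,
    # as is standard in DDPG for stable bootstrapping targets.
    for t, s in zip(target.parameters(), source.parameters()):
        t.data.mul_(1 - tau).add_(tau * s.data)

# Hypothetical dimensions and hyperparameters (not from the paper).
state_dim, action_dim, gamma = 5, 1, 0.99
actor, critic = Actor(state_dim, action_dim), Critic(state_dim, action_dim)
actor_target, critic_target = copy.deepcopy(actor), copy.deepcopy(critic)
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def ddpg_update(batch):
    # One gradient step on a replay-buffer batch (s, a, r, s', done).
    s, a, r, s2, done = batch
    with torch.no_grad():
        # Bootstrapped target: reward plus discounted target-critic value
        # of the target-actor's action in the next state.
        q_target = r + gamma * (1 - done) * critic_target(s2, actor_target(s2))
    critic_loss = nn.functional.mse_loss(critic(s, a), q_target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Deterministic policy gradient: ascend the critic's Q estimate.
    actor_loss = -critic(s, actor(s)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    soft_update(actor_target, actor)
    soft_update(critic_target, critic)

# Example update with a random batch of 32 transitions (illustrative only).
batch = (torch.randn(32, state_dim), torch.rand(32, action_dim) * 2 - 1,
         torch.randn(32, 1), torch.randn(32, state_dim), torch.zeros(32, 1))
ddpg_update(batch)

In an MG setting of the kind the abstract describes, the reward would typically encode the negative of operating cost and power losses at each step; being model-free, the agent needs only such sampled transitions, with no explicit uncertainty model of RES output or demand.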

Keywords