IEEE Access (Jan 2025)

A Custom Reinforcement Learning Environment for Hybrid Renewable Energy Systems: Design and Implementation

  • Dalton F. Guedes Filho,
  • Marcelo A. Moret,
  • Erick G. Sperandio Nascimento

DOI
https://doi.org/10.1109/access.2025.3593064
Journal volume & issue
Vol. 13
pp. 133984 – 133993

Abstract


We present HybridEnergyEnv, an open-source, Gym-style simulation environment designed for reinforcement learning (RL) research in hybrid renewable energy systems (HRES) combining wind, solar, and battery storage. The environment incorporates realistic component models, including intermittent renewable generation profiles, a synthetic electricity price signal inversely correlated with renewable availability, and a detailed Battery Energy Storage System (BESS) model accounting for state-of-charge (SoC) dynamics, self-discharge, efficiency losses, thermal derating, and rainflow-based capacity degradation. To validate the framework, we evaluate three dispatch strategies implemented with algorithms available in the Stable-Baselines3 (SB3) library: Proximal Policy Optimization (PPO), Advantage Actor-Critic (A2C), and Double Deep Q-Network (DDQN). Results show that the deep RL (DRL)-based policies increase operational revenue by up to 10.05% and reduce curtailment by up to 84.60% compared to the no-storage baseline. Additionally, DDQN achieves the longest episode durations and highest rewards during training, indicating greater stability under strict curtailment constraints. We describe the environment architecture, component models, and API, demonstrating the potential of HybridEnergyEnv as a high-fidelity, extensible platform for the development of intelligent, degradation-aware dispatch strategies in modern power systems.
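Because the environment exposes a Gym-style interface, SB3 agents can interact with it directly. The sketch below is only an illustration of that workflow, not the paper's actual API: the class name HybridEnergyEnvSketch, the four-element observation (wind, solar, price, SoC), and the single charge/discharge action are placeholder assumptions used to show how such an environment plugs into SB3 training.

    # Minimal, hypothetical Gym-style HRES environment trained with SB3 PPO.
    # The observation/action layout and reward are illustrative placeholders.
    import gymnasium as gym
    import numpy as np
    from stable_baselines3 import PPO

    class HybridEnergyEnvSketch(gym.Env):
        """Illustrative stand-in for a Gym-style HRES dispatch environment."""

        def __init__(self, episode_length=24):
            super().__init__()
            self.episode_length = episode_length
            # Assumed observation: [wind power, solar power, price, battery SoC], normalized
            self.observation_space = gym.spaces.Box(low=0.0, high=1.0, shape=(4,), dtype=np.float32)
            # Assumed action: battery setpoint from -1 (charge) to +1 (discharge)
            self.action_space = gym.spaces.Box(low=-1.0, high=1.0, shape=(1,), dtype=np.float32)
            self._t = 0

        def reset(self, seed=None, options=None):
            super().reset(seed=seed)
            self._t = 0
            return self.observation_space.sample(), {}

        def step(self, action):
            self._t += 1
            obs = self.observation_space.sample()
            reward = float(obs[2] * action[0])  # placeholder revenue-like signal (price x discharge)
            terminated = self._t >= self.episode_length
            return obs, reward, terminated, False, {}

    env = HybridEnergyEnvSketch()
    model = PPO("MlpPolicy", env, verbose=0)
    model.learn(total_timesteps=1_000)

The actual HybridEnergyEnv additionally models the BESS dynamics described above (SoC evolution, self-discharge, efficiency losses, thermal derating, and rainflow-based degradation) inside the environment's transition logic, which this sketch omits.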

Keywords