IET Renewable Power Generation (Feb 2024)

Reinforcement learning based two‐timescale energy management for energy hub

  • Jinfan Chen,
  • Chengxiong Mao,
  • Guanglin Sha,
  • Wanxing Sheng,
  • Hua Fan,
  • Dan Wang,
  • Shushan Qiu,
  • Yunzhao Wu,
  • Yao Zhang

DOI
https://doi.org/10.1049/rpg2.12911
Journal volume & issue
Vol. 18, no. 3
pp. 476 – 488

Abstract

Maintaining energy balance and economical operation is essential for the energy hub (EH), which serves as the central component of an integrated energy system. Real‐time regulation of heating and cooling equipment within the EH is challenging because these devices respond slowly to stochastic fluctuations in renewable energy sources and demands, whereas electric energy storage equipment (EST) responds quickly; a conventional single‐timescale energy management strategy therefore cannot account for the operating characteristics of all equipment. Motivated by this, this study proposes a deep reinforcement learning based two‐timescale energy management strategy for the EH, which controls heating and cooling equipment on a long timescale of 1 h and the EST on a short timescale of 15 min. The actions of the EST are modelled as discrete to reduce the action space, and a discrete‐continuous hybrid action sequential TD3 model is proposed to handle both discrete and continuous actions in the long‐timescale policy. A joint training approach based on a centralized training framework is proposed to learn the multiple levels of policies in parallel. Case studies demonstrate that, compared with the single‐timescale strategy, the proposed strategy reduces economic cost by 1% and carbon emissions by 0.5%.
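The two‐timescale scheme described above can be sketched as a nested control loop: the long‐timescale policy acts once per hour, and the short‐timescale EST policy acts every 15 min within that hour. The sketch below is a minimal illustration under stated assumptions (1 h = 4 × 15 min steps; the policy functions and state variables are hypothetical placeholders, not the authors' TD3 implementation):

```python
# Minimal sketch of a two-timescale energy management loop.
# Assumptions (not from the paper's implementation): toy policies,
# a toy state, and a trivial state transition for illustration only.
STEPS_PER_LONG = 4  # one 1 h long step spans four 15 min short steps


def long_policy(state):
    # Hypothetical long-timescale policy: sets heating & cooling
    # operating points (continuous) plus a discrete mode choice,
    # mirroring the hybrid discrete-continuous action described.
    return {"heat_setpoint": 0.6, "cool_setpoint": 0.4, "mode": 1}


def short_policy(state, long_action):
    # Hypothetical short-timescale policy: a discrete EST action
    # (-1 = discharge, +1 = charge), updated every 15 min.
    return -1 if state["net_load"] > 0 else +1


def run_episode(horizon_hours=24):
    state = {"net_load": 0.2, "soc": 0.5}
    long_action = None
    for step in range(horizon_hours * STEPS_PER_LONG):
        if step % STEPS_PER_LONG == 0:          # long timescale: 1 h
            long_action = long_policy(state)
        est_action = short_policy(state, long_action)  # 15 min
        # Toy transition: charging raises state of charge, clipped to [0, 1].
        state["soc"] = min(1.0, max(0.0, state["soc"] + 0.05 * est_action))
        state["net_load"] = -state["net_load"]  # toy fluctuation
    return state


final_state = run_episode()
```

In a learning setting, `long_policy` and `short_policy` would be replaced by the trained networks (here, the hybrid‐action TD3 actor and the EST policy), but the timing structure of the loop is the same.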

Keywords