Energy Reports (Nov 2022)
A double-deck deep reinforcement learning-based energy dispatch strategy for an integrated electricity and district heating system embedded with thermal inertia and operational flexibility
Abstract
With the high penetration of wind power in integrated electricity and district heating systems (IEDHSs), wind power curtailment still occurs under traditional IEDHS dispatch. Exploiting the flexibility of the IEDHS is considered a beneficial way to further promote wind power integration, and in the district heating network the thermal inertia can be utilized to enhance this flexibility. Therefore, an IEDHS dispatch model considering the thermal inertia of the district heating network and the operational flexibility of generators is proposed in this paper. In addition, to avoid the tendency of traditional reinforcement learning (RL) to fall into local optima when solving high-dimensional problems, a double-deck deep RL (D3RL) framework is proposed, which combines a deep deterministic policy gradient (DDPG) agent in the upper level with a conventional optimization solver in the lower level to simplify the action and reward design. In the simulation, the proposed model, which accounts for the transmission time-delay characteristics of the district heating network and the operational flexibility of generators, is verified in four scheduling scenarios, and the superiority of the proposed D3RL method is further validated on a larger IEDHS. Numerical results show that the proposed scheduling model can exploit the heat storage capability of heating pipelines, reduce operating costs, improve operational flexibility, and encourage wind power utilization. Compared with traditional RL, the proposed method achieves faster training and better convergence performance.
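To illustrate the double-deck idea described above, the following is a minimal sketch of the upper/lower-level interaction loop, with hypothetical names and a toy problem; it is not the paper's implementation. The upper level would normally be a DDPG actor, which is replaced here by a random placeholder policy, and the lower level is a single-period economic dispatch solved as a small LP with SciPy, whose cost and curtailment form the reward fed back to the agent.

```python
# Hypothetical sketch of a double-deck dispatch loop (not the paper's code).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

def upper_level_action(obs):
    """Placeholder for the DDPG actor: maps the observed state to a
    continuous action, e.g. a flexibility setpoint released by the
    thermal inertia of the heating network (assumed bounds [0, 1])."""
    return rng.uniform(0.0, 1.0, size=1)

def lower_level_dispatch(action, demand, wind_avail):
    """Conventional solver (here an LP): given the upper-level setpoint,
    dispatch a CHP unit and wind power to meet the electric demand."""
    flex = action[0]                      # flexibility freed by thermal inertia
    c = [30.0, 0.0]                       # $/MWh for CHP; wind assumed free
    A_eq = [[1.0, 1.0]]                   # power balance: p_chp + p_wind = demand
    b_eq = [demand]
    bounds = [(20.0 * (1 - 0.5 * flex), 100.0),  # CHP limits relaxed by flexibility
              (0.0, wind_avail)]
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    cost = res.fun
    curtailment = wind_avail - res.x[1]
    return cost, curtailment

for step in range(5):
    demand, wind = 80.0, rng.uniform(20.0, 60.0)
    obs = np.array([demand, wind])
    a = upper_level_action(obs)
    cost, curtailed = lower_level_dispatch(a, demand, wind)
    reward = -(cost + 50.0 * curtailed)   # reward returned to the upper-level agent
    print(f"step {step}: action={a[0]:.2f}, cost={cost:.1f}, "
          f"curtailed={curtailed:.1f}, reward={reward:.1f}")
```

In this sketch the lower-level solver absorbs the hard operational constraints, so the upper-level agent only needs to output a low-dimensional continuous setpoint and receive a scalar reward, which is the simplification of action and reward design the abstract refers to.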