E3S Web of Conferences (Jan 2019)

Mixing Loop Control using Reinforcement Learning

  • Anders Overgaard,
  • Carsten Skovmose Kallesøe,
  • Jan Dimon Bendtsen,
  • Brian Kongsgaard Nielsen

DOI: https://doi.org/10.1051/e3sconf/201911105013
Journal volume & issue: Vol. 111, p. 05013

Abstract

In hydronic heating systems, a mixing loop is used to control the temperature and pressure. The task of the mixing loop is to provide sufficient heating power for comfort while minimizing the cost of heating the building. Control strategies for mixing loops are often limited by the fact that they are installed in a wide range of buildings and locations without being properly tuned. To address this problem, the reinforcement learning method known as Q-learning is investigated. To improve the convergence rate, this paper introduces a Gaussian kernel backup method and a generic model for pre-simulation. The method is tested via high-fidelity simulation of different types of residential buildings located in Copenhagen. It is shown that the proposed method performs better than well-tuned industrial controllers.
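The abstract does not detail the algorithm, but the following minimal Python sketch illustrates one plausible reading of Q-learning with a Gaussian kernel backup: each temporal-difference update is spread over neighbouring discrete states with Gaussian weights rather than applied to a single table entry. The state and action definitions, reward, kernel placement, and all parameters are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch (not the paper's implementation): tabular Q-learning where
# each TD update is shared among nearby discrete states via Gaussian kernel
# weights. State is a single scalar index (e.g. an outdoor-temperature bin);
# actions stand in for candidate supply-temperature setpoints. The environment
# and reward below are toy placeholders.

import numpy as np

n_states = 50          # discretised scalar state (assumed)
n_actions = 5          # candidate setpoints (assumed)
alpha, gamma, eps = 0.1, 0.95, 0.1
sigma = 2.0            # kernel width in state-index units (assumed)

Q = np.zeros((n_states, n_actions))
state_idx = np.arange(n_states)

def kernel_weights(s):
    """Gaussian weights over all discrete states, centred on state s."""
    w = np.exp(-0.5 * ((state_idx - s) / sigma) ** 2)
    return w / w.sum()

def step(s, a):
    """Toy stand-in for the building simulation: random-walk state, dummy reward."""
    s_next = np.clip(s + np.random.randint(-2, 3), 0, n_states - 1)
    reward = -abs(a - s_next / (n_states / n_actions))  # placeholder penalty
    return s_next, reward

s = np.random.randint(n_states)
for _ in range(10_000):
    a = np.random.randint(n_actions) if np.random.rand() < eps else int(Q[s].argmax())
    s_next, r = step(s, a)
    td_error = r + gamma * Q[s_next].max() - Q[s, a]
    # Kernel backup: the update is distributed to nearby states, not only (s, a),
    # which can speed up convergence when neighbouring states behave similarly.
    Q[:, a] += alpha * kernel_weights(s) * td_error
    s = s_next
```

In this reading, the kernel backup acts as a simple form of generalisation across the discretised state space, which is consistent with the abstract's stated goal of improving the convergence rate.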