Frontiers of Architectural Research (Apr 2023)

Energy-efficient virtual sensor-based deep reinforcement learning control of indoor CO2 in a kindergarten

  • Patrick Nzivugira Duhirwe,
  • Jack Ngarambe,
  • Geun Young Yun

Journal volume & issue
Vol. 12, no. 2
pp. 394 – 409

Abstract


High concentrations of indoor CO2 pose severe health risks to building occupants. Often, mechanical equipment is used to provide sufficient ventilation as a remedy to high indoor CO2 concentrations. However, such equipment consumes large amounts of energy, substantially increasing building energy consumption. The issue thus becomes an optimization problem: maintaining CO2 levels below a certain threshold while using the minimum amount of energy possible. To that end, we propose an intelligent approach in which a supervised learning-based virtual sensor interacts with a deep reinforcement learning (DRL)-based controller to efficiently regulate indoor CO2 at minimal energy cost. The data used to train and test the DRL agent are based on a 3-month field experiment conducted at a kindergarten equipped with a heat recovery ventilator. The results show that, unlike the manual control initially employed at the kindergarten, the DRL agent consistently maintained CO2 concentrations below acceptable levels. Furthermore, the DRL control was estimated to reduce the ventilator's energy consumption by 58% compared to the manual control. The demonstrated approach illustrates the potential of leveraging Internet of Things technologies and machine learning algorithms to create comfortable and healthy indoor environments with minimal energy requirements.
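The abstract frames the problem as a trade-off between keeping CO2 below a threshold and minimizing ventilation energy. The paper does not state its reward function here, but a minimal illustrative sketch of such a DRL reward, assuming a hypothetical 1000 ppm CO2 limit and hypothetical weighting coefficients, might look like:

```python
def reward(co2_ppm: float, fan_energy_kwh: float,
           co2_limit: float = 1000.0,
           energy_weight: float = 1.0,
           co2_weight: float = 0.01) -> float:
    """Hypothetical reward for a ventilation-control DRL agent.

    Penalizes the energy consumed by the ventilator in the current
    control step, and additionally penalizes CO2 only when it exceeds
    the assumed limit (e.g. 1000 ppm). All weights and the limit are
    illustrative assumptions, not values from the paper.
    """
    co2_excess = max(0.0, co2_ppm - co2_limit)
    return -energy_weight * fan_energy_kwh - co2_weight * co2_excess
```

With this shape, an agent earns the highest reward by running the ventilator only as much as needed to keep CO2 at or below the limit, which mirrors the optimization objective described in the abstract.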

Keywords