Energies (Feb 2024)

A Future Direction of Machine Learning for Building Energy Management: Interpretable Models

  • Luca Gugliermetti,
  • Fabrizio Cumo,
  • Sofia Agostinelli

DOI
https://doi.org/10.3390/en17030700
Journal volume & issue
Vol. 17, no. 3
p. 700

Abstract

Machine learning (ML) algorithms are now part of everyday life, as many technological devices rely on them. The spectrum of uses is wide, and it is evident that ML represents a revolution that may change almost every human activity. However, as with all innovations, it comes with challenges. One of the most critical is providing users with an understanding of how a model’s output relates to its input data. This property is called “interpretability”, and it focuses on explaining which features influence a model’s output. Some algorithms have a simple, easy-to-understand relationship between input and output, while other models are “black boxes” that return an output without telling the user what influenced it. The lack of this knowledge creates a trust issue when the output is inspected by a human, especially when the operator is not a data scientist. The Building and Construction sector is beginning to face this innovation, and its scientific community is working to define best practices and models. This work develops a deep analysis of how interpretable ML models could be among the most promising future technologies for energy management in built environments.

Keywords