Energies (Feb 2023)

Local Interpretable Explanations of Energy System Designs

  • Jonas Hülsmann
  • Julia Barbosa
  • Florian Steinke

DOI
https://doi.org/10.3390/en16052161

Journal volume & issue
Vol. 16, no. 5, p. 2161

Abstract

Optimization-based design tools for energy systems often require a large set of parameter assumptions, e.g., about technology efficiencies and costs or the temporal availability of variable renewable energies. Understanding the influence of all these parameters on the computed energy system design via direct sensitivity analysis is not easy for human decision-makers, since they may become overloaded by the multitude of possible results. We thus propose transferring an approach from explaining complex neural networks, so-called local interpretable model-agnostic explanations (LIME), to this related problem. Specifically, we use variations of a small number of interpretable, high-level parameter features and sparse linear regression to obtain the most important local explanations for a selected design quantity. For a small bottom-up optimization model of a grid-connected building with photovoltaics, we derive intuitive explanations for the optimal battery capacity in terms of different cloud characteristics. For a larger application, namely a national model of the German energy transition until 2050, we relate path dependencies of the electrification of the heating and transport sectors to correlation measures between renewables and thermal loads. Compared to direct sensitivity analysis, the derived explanations are more compact and robust and thus more interpretable for human decision-makers.
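The workflow sketched in the abstract (perturb a few interpretable, high-level parameter features, re-evaluate the design model, and fit a sparse linear surrogate locally) can be illustrated as follows. This is a minimal sketch, not the authors' implementation: the design function optimal_battery_capacity, the feature names, the perturbation scale, and the Lasso penalty are all illustrative assumptions.

```python
# Minimal LIME-style sketch: explain a design quantity (here, an assumed
# optimal battery capacity) locally in terms of a few high-level features.
import numpy as np
from sklearn.linear_model import Lasso

def optimal_battery_capacity(features: np.ndarray) -> float:
    """Placeholder for a bottom-up energy system optimization (hypothetical)."""
    cloud_cover, cloud_duration, pv_cost = features
    return max(0.0, 5.0 + 8.0 * cloud_duration - 2.0 * pv_cost + 1.0 * cloud_cover)

feature_names = ["cloud_cover", "cloud_duration", "pv_cost"]
x0 = np.array([0.4, 3.0, 1.2])            # reference parameter setting
rng = np.random.default_rng(0)

# 1) Sample local variations of the interpretable, high-level features.
X = x0 + rng.normal(scale=0.1 * np.abs(x0), size=(200, len(x0)))

# 2) Evaluate the design model for every perturbed parameter set.
y = np.array([optimal_battery_capacity(x) for x in X])

# 3) Fit a sparse linear surrogate (Lasso) around x0; the few nonzero
#    coefficients act as the local explanation of the design quantity.
surrogate = Lasso(alpha=0.05).fit(X - x0, y - optimal_battery_capacity(x0))
for name, coef in zip(feature_names, surrogate.coef_):
    if abs(coef) > 1e-6:
        print(f"{name}: {coef:+.2f}")
```

The sparsity penalty controls how compact the explanation is: a larger alpha suppresses more coefficients and keeps only the strongest local drivers of the selected design quantity.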

Keywords