Risk Sciences (Jan 2026)

Interpretability in deep learning for finance: A case study for the Heston model

  • Damiano Brigo,
  • Xiaoshan Huang,
  • Andrea Pallavicini,
  • Haitz Sáez de Ocáriz Borde

DOI: https://doi.org/10.1016/j.risk.2025.100030
Journal volume & issue: Vol. 2, p. 100030

Abstract

Deep learning is a powerful tool whose applications in quantitative finance are growing every day. Yet artificial neural networks behave as black boxes, which introduces risks and hinders validation and accountability processes. Being able to interpret the inner functioning and the input–output relationship of these networks has become key to the acceptance of such tools and to reducing the risks inherent in their use. In this study, we focused on the calibration of a stochastic volatility model, a subject recently tackled by deep-learning algorithms. We analyzed the Heston model in particular, as its properties are well known, making it an ideal benchmark case. We investigated the capability of local and global strategies derived from cooperative game theory to explain the trained neural networks, and we found that global strategies, such as Shapley values, can be used effectively in practice. Our analysis also highlighted that Shapley values may help in choosing the network architecture: we found that fully connected neural networks perform better than convolutional neural networks at both predicting and interpreting the mapping from Heston model prices to parameters.
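
For readers unfamiliar with the benchmark, the standard Heston dynamics are as follows (textbook notation; the paper's own conventions may differ):

\begin{aligned}
dS_t &= \mu\, S_t\, dt + \sqrt{v_t}\, S_t\, dW_t^{S}, \\
dv_t &= \kappa(\theta - v_t)\, dt + \xi \sqrt{v_t}\, dW_t^{v}, \\
d\langle W^{S}, W^{v} \rangle_t &= \rho\, dt,
\end{aligned}

where S_t is the asset price, v_t the instantaneous variance, \kappa the mean-reversion speed, \theta the long-run variance level, \xi the volatility of volatility, and \rho the correlation between the two Brownian motions. Calibration amounts to recovering (\kappa, \theta, \xi, \rho, v_0) from observed option prices.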
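To make the interpretability strategy concrete, below is a minimal sketch of exact Shapley-value attribution for a scalar model with few inputs, computed by enumerating feature coalitions against a baseline point. The price_model stand-in, the chosen parameter values, and the baseline are illustrative assumptions, not the paper's actual network or data; the brute-force enumeration is only tractable because the Heston setting has five parameters.

from itertools import combinations
from math import factorial

import numpy as np


def shapley_values(f, x, baseline):
    """Exact Shapley values for a scalar model f at point x.

    Features outside a coalition S are replaced by the baseline,
    so the coalition value is v(S) = f(x restricted to S, baseline elsewhere).
    """
    d = len(x)
    phi = np.zeros(d)
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for k in range(d):
            for S in combinations(others, k):
                # Shapley weight |S|! (d - |S| - 1)! / d!
                weight = factorial(k) * factorial(d - k - 1) / factorial(d)
                z_with = baseline.copy()     # will represent S ∪ {i}
                z_without = baseline.copy()  # will represent S
                for j in S:
                    z_with[j] = x[j]
                    z_without[j] = x[j]
                z_with[i] = x[i]
                phi[i] += weight * (f(z_with) - f(z_without))
    return phi


if __name__ == "__main__":
    # Hypothetical stand-in for a calibrated pricing network:
    # a smooth nonlinear function of (kappa, theta, xi, rho, v0).
    def price_model(p):
        kappa, theta, xi, rho, v0 = p
        return theta + v0 + 0.1 * xi * rho - 0.05 * np.tanh(kappa)

    x = np.array([1.5, 0.04, 0.3, -0.7, 0.05])        # query parameters
    baseline = np.array([1.0, 0.05, 0.2, -0.5, 0.04])  # reference point
    phi = shapley_values(price_model, x, baseline)
    print("Shapley attributions:", phi)
    # Efficiency property: attributions sum to f(x) - f(baseline).
    print("Sum vs. difference:", phi.sum(), price_model(x) - price_model(baseline))

For a real trained network, f would be its forward pass and the baseline a reference input (e.g., an average over the training set); with many inputs, sampled approximations such as kernel-based Shapley estimators replace the exact enumeration.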

Keywords