IEEE Access (Jan 2025)

Balancing Explainability and Privacy in Bank Failure Prediction: A Differentially Private Glass-Box Approach

  • Junyoung Byun
  • Jaewook Lee
  • Hyeongyeong Lee
  • Bumho Son

DOI: https://doi.org/10.1109/ACCESS.2024.3523967
Journal volume & issue: Vol. 13, pp. 1546–1565

Abstract

Predicting bank failures is a critical task that requires balancing the need for model explainability against the necessity of preserving data privacy. Traditional machine learning models often lack transparency, which poses challenges for stakeholders who need to understand the factors behind predictions. In this study, we employ differentially private glass-box models, namely the Explainable Boosting Machine (EBM) and Neural Additive Models (NAMs), to address these issues. We analyzed data from 21,243 American banks spanning 1969 to 2021, focusing on key financial ratios. By applying Differential Privacy (DP) to these models, we aimed to protect sensitive financial data while evaluating the trade-offs among privacy, accuracy, and explainability. Our main findings are as follows: 1) In the absence of privacy constraints, the models consistently identified the Asset Turnover, Total Debt / Invested Capital, and ROE ratios as the most influential factors in predicting bank failure, in that order; 2) When the privacy budget $\epsilon \leq 1$, only the EBM maintained significant performance; 3) The reduction in explainability due to privacy protection was more pronounced for variables with initially lower explanatory power, whereas Asset Turnover retained its explanatory power even at $\epsilon = 0.01$. These findings provide valuable insights for banks, policymakers, and investors, suggesting that glass-box models offer a promising path to reliable and explainable bank failure prediction under privacy constraints.
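For context on the privacy budget: a randomized mechanism $M$ satisfies $(\epsilon, \delta)$-differential privacy if, for all neighboring datasets $D, D'$ and all output sets $S$, $\Pr[M(D) \in S] \leq e^{\epsilon}\Pr[M(D') \in S] + \delta$; smaller $\epsilon$ means stronger privacy. As a minimal, illustrative sketch (not the authors' code), the snippet below trains and inspects a DP glass-box model of the kind the abstract describes, assuming InterpretML's DPExplainableBoostingClassifier. The synthetic data, the $\epsilon$/$\delta$ values, and the feature stand-ins are assumptions, and constructor argument names may differ across interpret versions.

    # Illustrative sketch (not the paper's code): a DP-EBM via InterpretML.
    # Assumption: synthetic features stand in for the paper's financial ratios.
    import numpy as np
    from interpret.privacy import DPExplainableBoostingClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 3))                 # stand-in financial ratios
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # stand-in failure label

    # epsilon = 1.0 mirrors the strongest budget at which the abstract reports
    # the EBM still performing well; delta = 1e-5 is a common default choice.
    dp_ebm = DPExplainableBoostingClassifier(epsilon=1.0, delta=1e-5)
    dp_ebm.fit(X, y)

    # Global explanation: overall per-feature importance scores, the kind of
    # output used to rank factors such as Asset Turnover, Total Debt /
    # Invested Capital, and ROE.
    overall = dp_ebm.explain_global().data()
    for name, score in zip(overall["names"], overall["scores"]):
        print(name, round(float(score), 4))

Re-running such a sketch while sweeping $\epsilon$ (e.g., 0.01, 0.1, 1) and comparing both accuracy and the resulting importance rankings would trace the privacy-explainability trade-off that the study quantifies.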

Keywords