Risks (Oct 2024)

Credit Risk Assessment and Financial Decision Support Using Explainable Artificial Intelligence

  • M. K. Nallakaruppan,
  • Himakshi Chaturvedi,
  • Veena Grover,
  • Balamurugan Balusamy,
  • Praveen Jaraut,
  • Jitendra Bahadur,
  • V. P. Meena,
  • Ibrahim A. Hameed

DOI
https://doi.org/10.3390/risks12100164
Journal volume & issue
Vol. 12, no. 10
p. 164

Abstract


Artificial intelligence (AI) has brought about one of the greatest technological transformations the world has seen. It presents significant opportunities for the financial sector to enhance risk management, democratize financial services, ensure consumer protection, and improve the customer experience. Modern machine learning models are more accessible than ever, yet building and deploying systems that support real-world financial applications remains challenging, primarily because such models often lack transparency and explainability, both of which are essential for trustworthy technology. The novelty of this study lies in the development of an explainable AI (XAI) model that not only addresses these transparency concerns but also serves as a tool for policy development in credit risk management. By offering a clear understanding of the factors underlying AI predictions, the proposed model can help regulators and financial institutions shape data-driven policies, ensure fairness, and enhance trust. Specifically, the model quantifies the risks associated with credit borrowing through peer-to-peer lending platforms, leveraging Shapley values to explain AI predictions in terms of key explanatory variables. The decision tree and random forest models achieved the highest accuracy levels, 0.89 and 0.93, respectively. When the model was further tested on a larger dataset, accuracy remained stable, with the decision tree and random forest reaching 0.90 and 0.93, respectively. These models were chosen for reliable XAI modeling because the problem is one of binary classification. LIME and SHAP were employed to present the XAI models as both local and global surrogates.
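The Shapley-value attribution the abstract refers to can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the synthetic data, the feature count, and the random forest settings are not taken from the paper, and the exact subset enumeration shown here is tractable only for a handful of features (libraries such as SHAP use fast approximations instead).

```python
# Minimal sketch: exact Shapley-value feature attribution for a binary
# classifier, with "absent" features replaced by the dataset baseline mean.
# Synthetic data and model settings are illustrative assumptions.
from itertools import combinations
from math import factorial
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n, d = 500, 3
X = rng.normal(size=(n, d))
# Hypothetical default label, driven mostly by the first feature.
y = (X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=n) > 0).astype(int)

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
baseline = X.mean(axis=0)

def value(x, subset):
    """Predicted default probability with features outside `subset`
    fixed at the baseline mean."""
    z = baseline.copy()
    z[list(subset)] = x[list(subset)]
    return model.predict_proba(z.reshape(1, -1))[0, 1]

def shapley(x):
    """Exact Shapley values: weighted marginal contributions of each
    feature over all subsets of the remaining features."""
    phi = np.zeros(d)
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for k in range(d):
            for S in combinations(others, k):
                w = factorial(len(S)) * factorial(d - len(S) - 1) / factorial(d)
                phi[i] += w * (value(x, S + (i,)) - value(x, S))
    return phi

phi = shapley(X[0])
# Efficiency property: attributions sum to f(x) - f(baseline).
total = value(X[0], tuple(range(d))) - value(X[0], ())
```

The efficiency check at the end is the property that makes Shapley values attractive for credit decisions: the per-feature attributions decompose the gap between an applicant's predicted risk and the average prediction, which is what a regulator or loan officer would inspect.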

Keywords