Machine Learning with Applications (Dec 2022)

Increasing trust and fairness in machine learning applications within the mortgage industry

  • W. van Zetten,
  • G.J. Ramackers,
  • H.H. Hoos

Journal volume & issue
Vol. 10
Article 100406

Abstract


The integration of machine learning in applications provides opportunities for increased efficiency in many organisations. However, the deployment of such systems is often hampered by a lack of insight into how their decisions are reached, resulting in concerns about trust and fairness. In this article, we investigate to what extent the addition of explainable AI components to ML applications can contribute to alleviating these issues. As part of this research, explainable AI functionality was developed for an existing ML model used for mortgage fraud detection at a large international financial institution based in the Netherlands. A system implementing local explanation techniques was deployed to support the day-to-day work of fraud detection experts working with the model. In addition, a second system implementing global explanation techniques was developed to support the model management processes involving data scientists, legal experts and compliance officers. A controlled experiment using actual mortgage applications was carried out to measure the effectiveness of these two systems, using both quantitative and qualitative assessment methods. Our results show that the addition of explainable AI functionality leads to a statistically significant improvement in the levels of trust and usability reported by the model's daily users. The explainable AI system implementing global interpretability was found to considerably increase confidence in the ability to carry out processes focused on compliance and fairness. In particular, bias detection across demographic groups successfully aided in the identification and removal of bias against applicants with a migration background.
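
The abstract does not name the specific explanation techniques the two systems implement. As an illustration only, the following minimal sketch shows what a local explanation for a single mortgage application might look like using SHAP with a hypothetical tree-based fraud classifier; the feature names, data, and model choice are all invented for this example and are not taken from the paper.

```python
# Illustrative sketch: per-application (local) feature attributions
# with SHAP. Feature names, data, and the classifier are hypothetical.
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical training data: each row is one mortgage application.
X = pd.DataFrame({
    "loan_amount":      [250_000, 410_000, 180_000, 320_000],
    "stated_income":    [ 60_000,  95_000,  30_000,  70_000],
    "employment_years": [      8,       2,       1,       5],
})
y = [0, 0, 1, 0]  # 1 = application flagged as fraudulent

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Local explanation for the first application: per-feature contributions
# (in log-odds space) that a fraud expert could review next to the score.
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X.iloc[[0]])[0]

for feature, value in zip(X.columns, contributions):
    # Positive values push the prediction towards the fraud flag.
    print(f"{feature}: {value:+.3f}")
```

A global view of the kind used for compliance review could, under the same assumptions, be obtained by aggregating such per-application attributions over the whole dataset (e.g. via shap.summary_plot), though the paper itself does not specify how its global interpretability system is implemented.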

Keywords