Current Directions in Biomedical Engineering (Sep 2022)
Explaining Machine Learning Predictions of Decision Support Systems in Healthcare
Abstract
Artificial Intelligence (AI) methods, often based on Machine Learning (ML) algorithms, are increasingly applied in the healthcare domain to provide predictions to physicians and patients from electronic health records (EHRs), such as histories of laboratory values, applied procedures, and diagnoses. The question "Why Should I Trust You?" encapsulates the problem with ML black boxes: explaining the reasons behind ML predictions to physicians and patients is crucial so that they can judge whether a prediction is applicable. In this paper, we explained and evaluated two prediction explanation methods for healthcare professionals (physicians and nurses). We compared two model-agnostic explanation methods, one based on global feature importance and one on local feature importance. We evaluated user trust and reliance (UTR) in the explanations produced by each method in a user study based on real patients' EHRs and the feedback of healthcare professionals. In the user study, we observed that each method has strengths and weaknesses depending on the patient's data, in particular on the size of the patient's record. When the amount of data is small, global feature importance is sufficient; when the patient's record is large, a local feature importance method is more appropriate. As future work, we will develop a hybrid explanation method that automatically combines the two approaches in order to obtain higher and more stable user trust and reliance.
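To make the global/local contrast concrete, the following is a minimal illustrative sketch, not the study's implementation: the abstract names no libraries or dataset, so as assumptions it uses scikit-learn's permutation importance for global feature importance and LIME (the source of the quoted question "Why Should I Trust You?") for local, per-patient feature importance, on a public stand-in dataset rather than real EHRs.

```python
# Illustrative sketch only; the paper does not specify these tools.
# Global importance: scikit-learn permutation importance (one ranking
# for the whole model). Local importance: LIME (per-instance weights).
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Placeholder data standing in for EHR features (labs, procedures, ...).
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Global feature importance: averaged score drop over shuffled features.
global_imp = permutation_importance(model, X_test, y_test,
                                    n_repeats=10, random_state=0)
ranking = sorted(zip(X.columns, global_imp.importances_mean),
                 key=lambda t: t[1], reverse=True)
print("Top global features:", ranking[:5])

# Local feature importance: weights for one patient's prediction,
# fitted by LIME on perturbations around that single instance.
explainer = LimeTabularExplainer(X_train.values,
                                 feature_names=list(X.columns),
                                 mode="classification")
exp = explainer.explain_instance(X_test.values[0], model.predict_proba,
                                 num_features=5)
print("Local attributions for one patient:", exp.as_list())
```

The sketch mirrors the trade-off discussed above: the global ranking is computed once per model and is cheap to present, while the LIME explanation is computed per patient and reflects that individual record.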
Keywords