IEEE Access (Jan 2020)

Review Study of Interpretation Methods for Future Interpretable Machine Learning

  • Jian-Xun Mi,
  • An-Di Li,
  • Li-Fang Zhou

DOI
https://doi.org/10.1109/ACCESS.2020.3032756
Journal volume & issue
Vol. 8
pp. 191969 – 191985

Abstract


In recent years, black-box models have developed rapidly because of their high accuracy, yet their lack of interpretability severely limits their application in academia and industry, making the balance between interpretability and accuracy increasingly important. Although a variety of interpretable machine learning methods exist, they differ in both the perspective and the meaning of the interpretation they provide. We review current interpretation methods and categorize them according to the model to which they are applied, distinguishing two categories: interpretable methods built on self-explanatory models and interpretable methods that rely on external co-explanation. The latter are further divided into sub-branches based on instances, SHAP, knowledge graphs, deep learning, and clustering models. This classification is intended to clarify the characteristics of the models used by each interpretation method, helping researchers find a suitable model for their interpretability problem more easily, while our comparison experiments reveal complementary features across different methods. Finally, we discuss future challenges and trends in interpretable machine learning to promote its development.
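Of the external co-explanation sub-branches named above, SHAP is the one most readily tried as an off-the-shelf tool. The snippet below is a minimal illustrative sketch (not drawn from the paper) that uses the `shap` Python package to attribute a tree ensemble's predictions to individual features; the dataset, model, and parameters are assumptions chosen only for demonstration.

```python
# Minimal SHAP sketch (illustrative assumption, not the authors' method):
# attribute a tree-ensemble regressor's predictions to its input features.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Fit a "black-box" tree ensemble on a small tabular dataset.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer yields per-sample, per-feature Shapley-value attributions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Attribution for the first sample: each entry is a feature's signed
# contribution to the prediction relative to the model's expected output.
for name, contrib in zip(X.columns, shap_values[0]):
    print(f"{name}: {contrib:+.3f}")
```

Such per-feature attributions are what allow an external explainer to be wrapped around an otherwise opaque model, which is the defining trait of the co-explanation category surveyed here.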

Keywords