Journal on Interactive Systems (Nov 2022)

Contrasting Explain-ML with Interpretability Machine Learning Tools in Light of Interactive Machine Learning Principles

  • Bárbara Gabrielle C. O. Lopes,
  • Liziane Santos Soares,
  • Raquel Oliveira Prates,
  • Marcos André Gonçalves

DOI
https://doi.org/10.5753/jis.2022.2556
Journal volume & issue
Vol. 13, no. 1
pp. 313–334

Abstract

The way complex Machine Learning (ML) models generate their results is not fully understood, even by highly knowledgeable users. If users cannot interpret or trust the predictions generated by a model, they will not use it. Furthermore, the human role is often not properly considered in the development of ML systems. In this article, we present the design, implementation, and evaluation of Explain-ML, an Interactive Machine Learning (IML) system for Explainable Machine Learning that follows the principles of Human-Centered Machine Learning (HCML). We assess the user experience with Explain-ML's interpretability strategies and contrast it with an analysis of how other IML tools address the IML principles. To do so, we analyzed the results of an evaluation of Explain-ML with potential users in light of principles for IML system design, and systematically inspected three other tools (RuleMatrix, Explanation Explorer, and ATMSeer) using the Semiotic Inspection Method (SIM). Our results yielded positive indicators regarding Explain-ML and the process that guided its development. Our analyses also highlighted aspects of the IML principles that are relevant from the users' perspective. By contrasting the Explain-ML results with the SIM inspections of the other tools, we identified common interpretability strategies. We believe that the results reported in this work contribute to the understanding and consolidation of the IML principles, ultimately advancing knowledge in HCML.

Keywords