Machine Learning and Knowledge Extraction (May 2024)

Uncertainty in XAI: Human Perception and Modeling Approaches

  • Teodor Chiaburu,
  • Frank Haußer,
  • Felix Bießmann

DOI
https://doi.org/10.3390/make6020055
Journal volume & issue
Vol. 6, no. 2
pp. 1170–1192

Abstract

Artificial Intelligence (AI) plays an increasingly integral role in decision-making processes. To foster trust in AI predictions, many approaches towards explainable AI (XAI) have been developed and evaluated. Surprisingly, one factor essential for trust has been underrepresented in XAI research so far: uncertainty, both with respect to how it is modeled in Machine Learning (ML) and XAI and with respect to how it is perceived by humans relying on AI assistance. This review provides an in-depth analysis of both aspects. We review established and recent methods for accounting for uncertainty in ML models and XAI approaches, and we discuss empirical evidence on how model uncertainty is perceived by human users of XAI systems. We summarize the methodological advances and limitations of these methods and of studies on human perception. Finally, we discuss the implications of the current state of the art for model development and for research on human perception. We believe that highlighting the role of uncertainty in XAI will be helpful to both practitioners and researchers and could ultimately support more responsible use of AI in practical applications.
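The abstract surveys, rather than specifies, the uncertainty-quantification methods the paper reviews. As a purely illustrative sketch (not a method from this paper), the snippet below shows Monte Carlo dropout, one widely used way to obtain predictive uncertainty from a neural model: dropout is kept active at inference time and the spread across stochastic forward passes serves as an uncertainty estimate. The model, layer sizes, and function name are hypothetical.

import torch
import torch.nn as nn

# Hypothetical toy classifier; the Dropout layer is what enables
# Monte Carlo (MC) dropout uncertainty estimates at inference time.
model = nn.Sequential(
    nn.Linear(10, 32),
    nn.ReLU(),
    nn.Dropout(p=0.2),
    nn.Linear(32, 3),
)

def mc_dropout_predict(model, x, n_samples=50):
    """Run n_samples stochastic forward passes with dropout active;
    return the mean class probabilities and their per-class variance."""
    model.train()  # keep dropout layers stochastic (MC dropout)
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    model.eval()
    return probs.mean(dim=0), probs.var(dim=0)

x = torch.randn(4, 10)        # a batch of 4 illustrative inputs
mean, var = mc_dropout_predict(model, x)
print(mean.shape, var.shape)  # torch.Size([4, 3]) for both

The same sampling idea extends to explanations: computing an attribution map for each stochastic pass yields a distribution over explanations, whose variance can be shown to users alongside the explanation itself.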

Keywords