Entropy (Sep 2023)

A Quantum Model of Trust Calibration in Human–AI Interactions

  • Luisa Roeder,
  • Pamela Hoyte,
  • Johan van der Meer,
  • Lauren Fell,
  • Patrick Johnston,
  • Graham Kerr,
  • Peter Bruza

DOI
https://doi.org/10.3390/e25091362
Journal volume & issue
Vol. 25, no. 9
p. 1362

Abstract

This exploratory study investigates a human agent's evolving judgements of reliability when interacting with an AI system. Two aims drove the investigation: (1) to compare the predictive performance of quantum vs. Markov random walk models of human reliability judgements of an AI system and (2) to identify a neural correlate of the perturbation of a human agent's judgement of the AI's reliability. As AI becomes more prevalent, it is important to understand how humans trust these technologies and how that trust evolves through interaction. A mixed-methods experiment was developed to explore reliability calibration in human–AI interactions. The behavioural data collected served as a baseline for assessing the predictive performance of the quantum and Markov models. We found that the quantum model predicted the evolving reliability ratings better than the Markov model, perhaps because the quantum model is better able to represent the sometimes pronounced within-subject variability of reliability ratings. Additionally, a clear event-related potential response was found in the electroencephalographic (EEG) data, which we attribute to a perturbation of expectations of reliability. The identification of a trust-related EEG-based measure opens the door to exploring how it could be used to adapt the parameters of the quantum model in real time.
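The contrast between the two model classes can be illustrated with a minimal sketch: a Markov random walk evolves a probability vector with a stochastic matrix, so its predictions relax monotonically toward a stationary distribution, whereas a quantum random walk evolves a complex amplitude vector with a unitary matrix, and the squared-amplitude probabilities can oscillate through interference. The three reliability states, transition matrix, and rotation angle below are illustrative assumptions, not the paper's fitted model.

```python
import math

def markov_step(p, T):
    """One Markov step: p'[j] = sum_i p[i] * T[i][j] (T row-stochastic)."""
    n = len(p)
    return [sum(p[i] * T[i][j] for i in range(n)) for j in range(n)]

def quantum_step(psi, U):
    """One quantum step: psi' = U psi, with U unitary."""
    n = len(psi)
    return [sum(U[i][j] * psi[j] for j in range(n)) for i in range(n)]

def probs(psi):
    """Born rule: outcome probabilities are squared amplitude moduli."""
    return [abs(a) ** 2 for a in psi]

# Three hypothetical reliability states: low, medium, high (assumed labels).
# Markov model: an illustrative row-stochastic transition matrix.
T = [[0.6, 0.3, 0.1],
     [0.2, 0.6, 0.2],
     [0.1, 0.3, 0.6]]

# Quantum model: a unitary rotation mixing the low/medium states.
theta = math.pi / 8
U = [[math.cos(theta), -math.sin(theta), 0.0],
     [math.sin(theta),  math.cos(theta), 0.0],
     [0.0,              0.0,             1.0]]

p = [1.0, 0.0, 0.0]    # Markov walk: start certain of "low" reliability
psi = [1.0, 0.0, 0.0]  # quantum walk: same initial state, as amplitudes

for _ in range(8):
    p = markov_step(p, T)
    psi = quantum_step(psi, U)

# The Markov probabilities have relaxed toward a stationary mixture,
# while the quantum probabilities have cycled back to the initial
# state after 8 rotations of pi/8, illustrating the oscillatory
# behaviour associated with within-subject variability.
print([round(x, 3) for x in p])
print([round(x, 3) for x in probs(psi)])
```

The point of the sketch is the qualitative difference: after eight steps the Markov vector is close to its stationary distribution, while the unitary dynamics have returned the quantum walker to its starting state, something no fixed stochastic matrix can do.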

Keywords