Paladyn (Oct 2021)

Overtrusting robots: Setting a research agenda to mitigate overtrust in automation

  • Aroyo Alexander M.,
  • de Bruyne Jan,
  • Dheu Orian,
  • Fosch-Villaronga Eduard,
  • Gudkov Aleksei,
  • Hoch Holly,
  • Jones Steve,
  • Lutz Christoph,
  • Sætra Henrik,
  • Solberg Mads,
  • Tamò-Larrieux Aurelia

DOI
https://doi.org/10.1515/pjbr-2021-0029
Journal volume & issue
Vol. 12, no. 1
pp. 423–436

Abstract

There is increasing attention given to the concept of trustworthiness for artificial intelligence and robotics. However, trust is highly context-dependent, varies among cultures, and requires reflection on others’ trustworthiness, appraising whether there is enough evidence to conclude that these agents deserve to be trusted. Moreover, little research exists on what happens when too much trust is placed in robots and autonomous systems. Conceptual clarity and a shared framework for approaching overtrust are missing. In this contribution, we offer an overview of pressing topics in the context of overtrust and robots and autonomous systems. Our review mobilizes insights from in-depth conversations at a multidisciplinary workshop on the subject of trust in human–robot interaction (HRI), held at a leading robotics conference in 2020. A broad range of participants brought in their expertise, allowing the formulation of a forward-looking research agenda on overtrust and automation biases in robotics and autonomous systems. Key points include the need for multidisciplinary understandings that are situated in an eco-system perspective, the consideration of adjacent concepts such as deception and anthropomorphization, a connection to ongoing legal discussions through the topic of liability, and a socially embedded understanding of overtrust in education and literacy matters. The article integrates diverse literature and provides grounds for a common understanding of overtrust in the context of HRI.