Computers in Human Behavior Reports (Aug 2021)

Intelligent autonomous agents and trust in virtual reality

  • Ningyuan Sun,
  • Jean Botev

Journal volume & issue
Vol. 4, p. 100146

Abstract

Intelligent autonomous agents (IAA) are proliferating and rapidly evolving due to the exponential growth in computational power and recent advances in, for instance, artificial intelligence research. Ranging from chatbots and personal virtual assistants to medical decision-aiding and self-driving or self-piloting systems, IAA are increasingly integrated into many aspects of daily life, whether or not users are aware of them. Despite this technological development, many people remain skeptical of such agents; conversely, others might place excessive confidence in them. Establishing an appropriate level of trust is therefore crucial to the successful deployment of IAA in everyday contexts. Virtual Reality (VR) is another domain where IAA play a significant role, and its experiential and immersive character in particular allows for new ways of interacting with agents and of tackling trust-related issues. In this article, we provide an overview of the numerous factors involved in establishing trust between users and IAA, spanning scientific disciplines as diverse as psychology, philosophy, sociology, computer science, and economics. Focusing on VR, we discuss the different types and definitions of trust and identify foundational factors classified into three interrelated dimensions: Human-Technology, Human-System, and Interpersonal. Based on this taxonomy, we identify open issues and outline a research agenda towards facilitating the study of trustful interaction and collaboration between users and IAA in VR settings.

Keywords