Machines (Dec 2023)

Anthropomorphic Design and Self-Reported Behavioral Trust: The Case of a Virtual Assistant in a Highly Automated Car

  • Clarisse Lawson-Guidigbe,
  • Kahina Amokrane-Ferka,
  • Nicolas Louveton,
  • Benoit Leblanc,
  • Virgil Rousseaux,
  • Jean-Marc André

DOI
https://doi.org/10.3390/machines11121087
Journal volume & issue
Vol. 11, no. 12
p. 1087

Abstract

The latest advances in car automation present new challenges for vehicle–driver interaction. Indeed, acceptance and adoption of high levels of automation (in which full control of the driving task is given to the automated system) are conditioned by human factors such as user trust. In this work, we study the impact of anthropomorphic design on user trust in the context of a highly automated car. A virtual assistant was designed with two levels of anthropomorphic design: “voice-only” and “voice with visual appearance”. The visual appearance was a three-dimensional model, integrated as a hologram in the cockpit of a driving simulator. In a driving simulator study, we compared three interfaces: the two versions of the virtual assistant interface and a baseline interface with no anthropomorphic attributes. We measured trust and perceived anthropomorphism, studied the evolution of trust across a range of driving scenarios, and analyzed participants’ reaction times to takeover request events. We found a significant correlation between perceived anthropomorphism and trust. However, the three interfaces tested did not differ significantly in perceived anthropomorphism, and trust converged over time across all our measurements. Finally, we found that the anthropomorphic assistant positively impacted reaction time in one takeover request scenario. We discuss methodological issues and implications for design and further research.

Keywords