Technologies (Nov 2022)

Modelling the Trust Value for Human Agents Based on Real-Time Human States in Human-Autonomous Teaming Systems

  • Chin-Teng Lin,
  • Hsiu-Yu Fan,
  • Yu-Cheng Chang,
  • Liang Ou,
  • Jia Liu,
  • Yu-Kai Wang,
  • Tzyy-Ping Jung

DOI
https://doi.org/10.3390/technologies10060115
Journal volume & issue
Vol. 10, no. 6
p. 115

Abstract


The modelling of trust values on agents is broadly considered fundamental for decision-making in human-autonomous teaming (HAT) systems. Compared to evaluating trust values for robotic agents, estimating human trust is more challenging due to trust miscalibration issues, including undertrust and overtrust. Because human trust is a subjective perception, it can shift with dynamic cognitive states, which makes trust values hard to calibrate properly. To capture these dynamics, the present study evaluated trust for human agents through real-time multievidence measures, including human states of attention, stress and perception ability. The proposed multievidence human trust model applies an adaptive fusion method based on fuzzy reinforcement learning to fuse evidence from eye trackers, heart rate monitors and measures of human awareness. The fuzzy reinforcement learning component generates rewards via a fuzzy logic inference process that tolerates uncertainty in human physiological signals. The results of a robot simulation suggest that the proposed trust model can generate reliable human trust values from real-time cognitive states during ongoing tasks. Moreover, the human-autonomous team with the proposed trust model improved system efficiency by over 50% compared to a team with only autonomous agents. These results demonstrate that the proposed model can provide insight into the real-time adaptation of HAT systems based on human states and may thus help enhance future HAT systems.
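To make the fusion step concrete, the following is a minimal sketch of how multievidence human-state signals could be combined into a single trust value with a small fuzzy inference step. The membership functions, rule base and output centres here are illustrative assumptions, not the authors' actual model, which additionally adapts the fusion with reinforcement learning.

```python
# Illustrative sketch: fuse normalised evidence signals (attention, stress,
# perception ability, each in [0, 1]) into a trust value in [0, 1] using a
# tiny Sugeno-style fuzzy inference. All rules and constants are assumptions.

def tri(x, a, b, c):
    """Triangular membership function: peaks at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_trust(attention, stress, perception):
    """Map three evidence signals in [0, 1] to a trust value in [0, 1]."""
    # Fuzzify each input into 'low' and 'high' sets over [0, 1].
    att_hi = tri(attention, 0.0, 1.0, 2.0)    # rises linearly with attention
    att_lo = tri(attention, -1.0, 0.0, 1.0)   # falls linearly with attention
    str_hi = tri(stress, 0.0, 1.0, 2.0)
    str_lo = tri(stress, -1.0, 0.0, 1.0)
    per_hi = tri(perception, 0.0, 1.0, 2.0)
    per_lo = tri(perception, -1.0, 0.0, 1.0)

    # Illustrative rule base: high attention/perception with low stress
    # supports high trust; low attention with high stress supports low trust.
    rule_hi = min(att_hi, str_lo, per_hi)   # -> trust high (centre 0.9)
    rule_lo = min(att_lo, str_hi)           # -> trust low (centre 0.1)
    rule_mid = per_lo                       # -> trust moderate (centre 0.5)

    # Weighted-average defuzzification over the rule activations.
    num = 0.9 * rule_hi + 0.1 * rule_lo + 0.5 * rule_mid
    den = rule_hi + rule_lo + rule_mid
    return num / den if den > 0 else 0.5    # neutral trust if no rule fires
```

Under this sketch, an attentive, calm, perceptive operator yields a higher trust value than a distracted, stressed one; in the paper's framework, reinforcement learning would further adapt how such evidence is weighted over time.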

Keywords