PeerJ Computer Science (Jan 2022)

Human-robot interaction: the impact of robotic aesthetics on anticipated human trust

  • Joel Pinney
  • Fiona Carroll
  • Paul Newbury

DOI: https://doi.org/10.7717/peerj-cs.837
Journal volume & issue: Vol. 8, p. e837

Abstract


Background

Human senses have evolved to recognise sensory cues. Beyond perception, they play an integral role in our emotional processing, learning, and interpretation. They help us sculpt our everyday experiences and can be triggered by aesthetics to form the foundations of our interactions with each other and our surroundings. In terms of Human-Robot Interaction (HRI), robots can interact with both people and environments through their senses. They can take on attributes of human characteristics, which in turn can make the exchange with technology a more appealing and acceptable experience. However, for many reasons, people still do not seem to trust and accept robots. Trust is expressed as a person's willingness to accept the potential risks associated with participating alongside an entity such as a robot. Whilst trust is an important factor in building relationships with robots, the presence of uncertainties adds a further dimension to the decision to trust a robot. To begin to understand how to build trust with robots and reverse these negative perceptions, this paper examines the influence of aesthetic design techniques on the human ability to trust robots.

Method

This paper explores the potential of robots to improve their capacities for empathy, emotion, and social awareness beyond their more cognitive functionalities. Through an online questionnaire distributed globally, we explored participants' ability and willingness to trust the Canbot U03 robot. Participants were presented with a range of visual questions in which the robot's facial screen was manipulated, and were asked whether or not they would trust the robot. A selection of questions placed participants in situations where they had to decide whether or not to trust the robot's responses based solely on its visual appearance. We accomplished this by manipulating different design elements of the robot's facial and chest screens, which influenced the human-robot interaction.

Results

We found that certain facial aesthetics seem to be more trustworthy than others, such as a cartoon face versus a human face, and that certain visual variables (i.e., blur) introduced more uncertainty than others. Consequently, this paper reports that participants' uncertainty about the visualisations greatly influenced their willingness to accept and trust the robot. Introducing certain anthropomorphic characteristics reinforced the uncanny valley effect: increasing the degree of human likeness revealed a thin line between participants accepting the robot and not. By understanding which manipulations of design elements created the aesthetic effects that triggered these affective processes, this paper further enriches our knowledge of how we might design for certain emotions, feelings, and ultimately more socially acceptable and trusting robotic experiences.
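
The abstract does not specify how the manipulated stimuli were produced. As one possible illustration only, the sketch below shows how a "blur" visual variable could be applied to an image of a robot's facial screen to generate questionnaire stimuli at increasing levels of visual uncertainty. The file name canbot_u03_face.png, the blur radii, and the use of the Pillow library are assumptions for this sketch, not the authors' actual pipeline.

    # Illustrative sketch (assumed tooling, not the authors' method):
    # produce blurred variants of a robot face image as survey stimuli.
    from PIL import Image, ImageFilter  # Pillow

    # Hypothetical input image of the robot's facial screen.
    face = Image.open("canbot_u03_face.png")

    # Larger Gaussian blur radii approximate higher visual uncertainty.
    for radius in (0, 2, 4, 8):
        blurred = face.filter(ImageFilter.GaussianBlur(radius=radius))
        blurred.save(f"face_blur_r{radius}.png")

Each output image could then be embedded in a separate questionnaire item, with trust responses compared across blur levels.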

Keywords