Journal of Medical Internet Research (Apr 2022)

The Effects of Health Care Chatbot Personas With Different Social Roles on the Client-Chatbot Bond and Usage Intentions: Development of a Design Codebook and Web-Based Study

  • Marcia Nißen,
  • Dominik Rüegger,
  • Mirjam Stieger,
  • Christoph Flückiger,
  • Mathias Allemand,
  • Florian v Wangenheim,
  • Tobias Kowatsch

DOI
https://doi.org/10.2196/32630
Journal volume & issue
Vol. 24, no. 4
p. e32630

Abstract


Background: The working alliance refers to an important relationship quality between health professionals and clients that robustly links to treatment success. Recent research shows that clients can develop an affective bond with chatbots. However, few studies have investigated whether this perceived relationship is affected by the social roles of differing closeness a chatbot can impersonate and by allowing users to choose the social role of a chatbot.

Objective: This study aimed to understand how the social role of a chatbot can be expressed using a set of interpersonal closeness cues and to examine how these social roles affect clients’ experiences and the development of an affective bond with the chatbot, depending on clients’ characteristics (ie, age and gender) and whether they can freely choose a chatbot’s social role.

Methods: Informed by social role theory and social response theory, we developed a design codebook for chatbots with different social roles along an interpersonal closeness continuum. Based on this codebook, we manipulated a fictitious health care chatbot to impersonate one of four distinct social roles common in health care settings (institution, expert, peer, and dialogical self) and examined effects on perceived affective bond and usage intentions in a web-based lab study. The study included a total of 251 participants, whose mean age was 41.15 (SD 13.87) years; 57.0% (143/251) of the participants were female. Participants were either randomly assigned to one of the chatbot conditions (no choice: n=202, 80.5%) or could freely choose to interact with one of these chatbot personas (free choice: n=49, 19.5%). Separate multivariate analyses of variance were performed to analyze differences (1) between the chatbot personas within the no-choice group and (2) between the no-choice and the free-choice groups.

Results: While the main effect of chatbot persona on affective bond and usage intentions was nonsignificant (P=.87), we found differences based on participants’ demographic profiles: main effects for gender (P=.04, ηp2=0.115) and age (P<.001, ηp2=0.192) and a significant interaction effect of persona and age (P=.01, ηp2=0.102). Participants younger than 40 years reported higher scores for affective bond and usage intentions for the interpersonally more distant expert and institution chatbots; participants 40 years or older reported higher outcomes for the closer peer and dialogical-self chatbots. The option to freely choose a persona further benefited perceptions of the peer chatbot (eg, free-choice group affective bond: mean 5.28, SD 0.89; no-choice group affective bond: mean 4.54, SD 1.10; P=.003, ηp2=0.117).

Conclusions: Manipulating a chatbot’s social role is a possible avenue for health care chatbot designers to tailor clients’ chatbot experiences using user-specific demographic factors and to improve clients’ perceptions of and behavioral intentions toward the chatbot. Our results also emphasize the benefits of letting clients freely choose between chatbots.
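
For illustration only, the between-group comparisons described in the Methods (separate multivariate analyses of variance on affective bond and usage intentions) could be run along the following lines. This is a minimal sketch using statsmodels’ MANOVA, assuming a hypothetical data set with columns bond, usage_intention, persona, age_group, and gender; the file name and column names are assumptions, and this is not the authors’ analysis script.

    # Minimal MANOVA sketch (illustrative; not the authors' analysis code).
    # Assumed columns in a hypothetical CSV of the no-choice group:
    #   bond, usage_intention - dependent variables (scale scores)
    #   persona               - chatbot persona (institution/expert/peer/dialogical_self)
    #   age_group, gender     - participant demographics
    import pandas as pd
    from statsmodels.multivariate.manova import MANOVA

    df = pd.read_csv("no_choice_group.csv")  # hypothetical file name

    # Persona, age group, gender, and the persona x age interaction as factors,
    # mirroring the between-subjects comparisons described in the Methods.
    manova = MANOVA.from_formula(
        "bond + usage_intention ~ C(persona) * C(age_group) + C(gender)",
        data=df,
    )
    print(manova.mv_test())  # multivariate test statistics per effect

The same pattern, with a group indicator in place of persona, would cover the no-choice versus free-choice comparison.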