JMIR Human Factors (Jan 2023)

Usability and Credibility of a COVID-19 Vaccine Chatbot for Young Adults and Health Workers in the United States: Formative Mixed Methods Study

  • Rose Weeks,
  • Pooja Sangha,
  • Lyra Cooper,
  • João Sedoc,
  • Sydney White,
  • Shai Gretz,
  • Assaf Toledo,
  • Dan Lahav,
  • Anna-Maria Hartner,
  • Nina M Martin,
  • Jae Hyoung Lee,
  • Noam Slonim,
  • Naor Bar-Zeev

DOI
https://doi.org/10.2196/40533
Journal volume & issue
Vol. 10
p. e40533

Abstract


Background
The COVID-19 pandemic raised novel challenges in communicating reliable, continually changing health information to a broad and sometimes skeptical public, particularly around COVID-19 vaccines, which, despite being comprehensively studied, were the subject of viral misinformation. Chatbots are a promising technology to reach and engage populations during the pandemic. To inform and communicate effectively with users, chatbots must be highly usable and credible.

Objective
We sought to understand how young adults and health workers in the United States assessed the usability and credibility of a web-based chatbot called Vira, created by the Johns Hopkins Bloomberg School of Public Health and IBM Research using natural language processing technology. Using a mixed method approach, we sought to rapidly improve Vira's user experience to support vaccine decision-making during the peak of the COVID-19 pandemic.

Methods
We recruited racially and ethnically diverse young people and health workers, with both groups from urban areas of the United States. We used the validated Chatbot Usability Questionnaire to understand the tool's navigation, precision, and persona. We also conducted 11 interviews with health workers and young people to understand the user experience, whether they perceived the chatbot as confidential and trustworthy, and how they would use the chatbot. We coded and categorized emerging themes to understand the determining factors for participants' assessment of chatbot usability and credibility.

Results
In all, 58 participants completed a web-based usability questionnaire and 11 completed in-depth interviews. Most questionnaire respondents said the chatbot was "easy to navigate" (51/58, 88%) and "very easy to use" (50/58, 86%), and many (45/58, 78%) said its responses were relevant. The mean Chatbot Usability Questionnaire score was 70.2 (SD 12.1), and scores ranged from 40.6 to 95.3. Interview participants felt the chatbot achieved high usability due to its strong functionality, performance, and perceived confidentiality, and that the chatbot could attain high credibility with a redesign of its cartoonish visual persona. Young people said they would use the chatbot to discuss vaccination with hesitant friends or family members, whereas health workers used or anticipated using the chatbot to support community outreach, save time, and stay up to date.

Conclusions
This formative study conducted during the pandemic's peak provided user feedback for an iterative redesign of Vira. Using a mixed method approach provided multidimensional feedback, identifying how the chatbot worked well—being easy to use, answering questions appropriately, and using credible branding—while offering tangible steps to improve the product's visual design. Future studies should evaluate how chatbots support personal health decision-making, particularly in the context of a public health emergency, and whether such outreach tools can reduce staff burnout. Randomized studies should also be conducted to measure how chatbots countering health misinformation affect user knowledge, attitudes, and behavior.
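
For readers unfamiliar with the 0-100 scale reported in the Results, the sketch below shows one common way the 16-item Chatbot Usability Questionnaire (CUQ) is rescaled to a score out of 100. The reported range (40.6-95.3) is consistent with this raw-sum-over-64 scaling, but the item ordering and the example responses here are illustrative assumptions, not data or code from the study.

```python
# Minimal sketch of a common CUQ scoring convention (assumption):
# 16 items rated on a 1-5 Likert scale, odd-numbered items worded
# positively and even-numbered items negatively, with the raw total
# (max 64) rescaled to 0-100.

def cuq_score(responses):
    """Return a 0-100 CUQ score from 16 Likert responses (1-5)."""
    if len(responses) != 16 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("expected 16 responses, each between 1 and 5")
    positive = sum(r - 1 for r in responses[0::2])  # items 1, 3, ..., 15
    negative = sum(5 - r for r in responses[1::2])  # items 2, 4, ..., 16
    return (positive + negative) / 64 * 100

# Hypothetical participant who broadly liked the chatbot
example = [5, 2, 4, 1, 5, 2, 4, 2, 5, 1, 4, 2, 5, 2, 4, 1]
print(round(cuq_score(example), 1))  # 85.9
```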