Social Media + Society (Jan 2023)
Developing Misinformation Immunity: How to Reason-Check Fallacious News in a Human–Computer Interaction Environment
Abstract
To counter the fake news phenomenon, the scholarly community has attempted to debunk and prebunk disinformation. However, misinformation still constitutes a major challenge due to the variety of misleading techniques and their continuous evolution, which call for the exercise of critical thinking to build resilience. In this study, we present two open-access chatbots, the Fake News Immunity Chatbot and the Vaccinating News Chatbot, which combine Fallacy Theory and Human–Computer Interaction to inoculate citizens and communication gatekeepers against misinformation. These chatbots differ from existing tools in both function and form. First, they target misinformation and enhance the identification of fallacious arguments; second, they are multiagent and leverage discourse theories of persuasion in their conversational design. After describing both their backend and frontend design, we report on an evaluation of the user interface and of the impact on users' critical thinking skills through a questionnaire, a crowdsourced survey, and a pilot qualitative experiment. The results shed light on best practices for designing user-friendly active inoculation tools and reveal that the two chatbots are perceived as increasing critical thinking skills in the current misinformation ecosystem.