PLoS ONE (Oct 2023)

Do deepfake videos undermine our epistemic trust? A thematic analysis of tweets that discuss deepfakes in the Russian invasion of Ukraine.

  • John Twomey,
  • Didier Ching,
  • Matthew Peter Aylett,
  • Michael Quayle,
  • Conor Linehan,
  • Gillian Murphy

DOI: https://doi.org/10.1371/journal.pone.0291668
Journal volume & issue: Vol. 18, no. 10, p. e0291668

Abstract

Deepfakes are a form of multi-modal media generated using deep-learning technology. Many academics have expressed fears that deepfakes pose a severe threat to the veracity of news and political communication, and an epistemic crisis for video evidence. These commentaries have often been hypothetical, with few real-world cases of deepfakes causing political or epistemic harm. The Russo-Ukrainian war provides the first real-life example of deepfakes being used in warfare, with a number of incidents involving deepfakes of Russian and Ukrainian government officials being used for misinformation and entertainment. This study uses thematic analysis of tweets relating to deepfakes and the Russo-Ukrainian war to explore how people react to deepfake content online, and to uncover evidence of the previously theorised harms of deepfakes to trust. We extracted 4869 relevant tweets using the Twitter API over the first seven months of 2022. We found that much of the misinformation in our dataset came from real media being labelled as deepfakes. Novel findings about deepfake scepticism emerged, including a connection between deepfakes and conspiratorial beliefs that world leaders were dead and/or had been replaced by deepfakes. This research has numerous implications for future research, social media platforms, news media and governments. The lack of deepfake literacy in our dataset led to significant misunderstandings of what constitutes a deepfake, showing the need to encourage literacy around these new forms of media. However, our evidence demonstrates that efforts to raise awareness of deepfakes may themselves undermine trust in legitimate videos. Consequently, news media and governmental agencies need to weigh the benefits of educational deepfakes and pre-bunking against the risk of undermining truth. Similarly, news companies and media should be careful in how they label suspected deepfakes, lest they cast suspicion on real media.