Applied Computational Intelligence and Soft Computing (Jan 2024)

Design and Implement Deepfake Video Detection Using VGG-16 and Long Short-Term Memory

  • Laor Boongasame,
  • Jindaphon Boonpluk,
  • Sunisa Soponmanee,
  • Jirapond Muangprathub,
  • Karanrat Thammarak

DOI
https://doi.org/10.1155/2024/8729440
Journal volume & issue
Vol. 2024

Abstract


This study aims to design and implement deepfake video detection using VGG-16 in combination with long short-term memory (LSTM). In contrast to other studies, this study compares VGG-16, VGG-19, and the newest model, ResNet-101, each combined with LSTM. All models were tested on the Celeb-DF video dataset. The results showed that the VGG-16 model trained for 15 epochs with a batch size of 32 exhibited the highest performance, with 96.25% accuracy, 93.04% recall, 99.20% specificity, and 99.07% precision. In conclusion, this model can be implemented practically.