IEEE Access (Jan 2024)

MMGANGuard: A Robust Approach for Detecting Fake Images Generated by GANs Using Multi-Model Techniques

  • Syed Ali Raza,
  • Usman Habib,
  • Muhammad Usman,
  • Adeel Ashraf Cheema,
  • Muhammad Sajid Khan

DOI
https://doi.org/10.1109/ACCESS.2024.3393842
Journal volume & issue
Vol. 12
pp. 104153–104164

Abstract


Recent advances in Generative Adversarial Networks (GANs) have produced synthetic images with high visual fidelity, making them nearly indistinguishable from human-created images. These synthetic images, referred to as deepfakes, have become a major source of misinformation on social media. As the underlying technology advances rapidly, reliable methods for distinguishing real from fake images are needed. Current detection mechanisms rely on image forensics tools such as error level analysis (ELA) and clone detection to identify manipulated images. These approaches are limited because they require forensics expertise, are manual in nature, and do not scale, creating a need for a scalable framework that both experts and non-experts can use to combat the spread of manipulated images and preserve the authenticity of digital visual information. We approach this problem with a multi-model ensemble framework that uses transfer learning to detect fake images effectively. The proposed approach, named Multi-Model GAN Guard (MMGANGuard), integrates four models into an ensemble framework that identifies the characteristics of GAN-generated images to improve deepfake detection. In MMGANGuard, the Gram-Net architecture and the ResNet50V2 and DenseNet201 models are combined with co-occurrence matrices using transfer learning. Through comprehensive experiments, the proposed model demonstrates promising results, detecting deepfakes with high accuracy on the StyleGAN dataset. For automated detection of GAN-generated images, the proposed model exceeded 97% accuracy and achieved true positive rates of 98.5%, 98.4%, and 95.6% in these evaluations, eliminating the need for manual assessment and showing promise for future research in this domain.
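
As a rough illustration of the ensemble-of-pretrained-backbones idea the abstract describes, the sketch below builds two of the named branches (ResNet50V2 and DenseNet201) with frozen ImageNet weights and soft-votes their fake-image probabilities. This is not the authors' implementation: the Gram-Net branch and the co-occurrence-matrix preprocessing are omitted, and the input size, head width, decision threshold, and helper names (build_branch, ensemble_predict, IMG_SIZE) are illustrative assumptions.

    import numpy as np
    from tensorflow.keras import layers, Model
    from tensorflow.keras.applications import ResNet50V2, DenseNet201

    IMG_SIZE = (224, 224)  # assumed input resolution

    def build_branch(backbone_fn, name):
        # Frozen ImageNet backbone plus a small trainable head (transfer learning).
        backbone = backbone_fn(include_top=False, weights="imagenet",
                               input_shape=IMG_SIZE + (3,), pooling="avg")
        backbone.trainable = False  # keep pretrained weights fixed
        inp = layers.Input(shape=IMG_SIZE + (3,))
        x = backbone(inp, training=False)
        x = layers.Dense(256, activation="relu")(x)     # head width is an assumption
        out = layers.Dense(1, activation="sigmoid")(x)  # P(image is GAN-generated)
        return Model(inp, out, name=name)

    branches = [build_branch(ResNet50V2, "resnet50v2"),
                build_branch(DenseNet201, "densenet201")]
    # The paper's Gram-Net and co-occurrence-matrix branches are not reproduced here.

    def ensemble_predict(models, images):
        # Soft voting: average per-model fake probabilities, threshold at 0.5.
        probs = np.mean([m.predict(images, verbose=0) for m in models], axis=0)
        return (probs > 0.5).astype(int)  # 1 = fake, 0 = real

Averaging probabilities (soft voting) is only one plausible fusion rule; the paper's actual combination strategy may differ.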

Keywords