IEEE Access (Jan 2024)

Boosting Deep Feature Fusion-Based Detection Model for Fake Faces Generated by Generative Adversarial Networks for Consumer Space Environment

  • Fadwa Alrowais,
  • Asma Abbas Hassan,
  • Wafa Sulaiman Almukadi,
  • Meshari H. Alanazi,
  • Radwa Marzouk,
  • Ahmed Mahmud

DOI
https://doi.org/10.1109/ACCESS.2024.3470128
Journal volume & issue
Vol. 12
pp. 147680–147693

Abstract


In the consumer space, deepfakes refer to highly realistic, AI-generated images, audio, or videos that mimic real people, produced by cutting-edge technologies such as Generative Adversarial Networks (GANs). In the digital age, recognizing and detecting deepfakes is a critical problem. The most common solutions for deepfake creation are based on GANs, which can efficiently manipulate multimedia data or create it from scratch. GANs comprise two neural networks, a Generator (G) and a Discriminator (D), trained concurrently in competition. The generator produces artificial data, whereas the discriminator estimates the authenticity of generated and real data. This adversarial procedure drives the generator to produce increasingly realistic content. Identifying deepfakes produced by GANs using deep learning (DL) involves leveraging complex neural networks to detect the subtle anomalies and artefacts that GANs accidentally introduce. Convolutional Neural Networks (CNNs) are very effective for these tasks, as they learn to discern inconsistencies and complex features in image textures, lighting, and facial features frequently missed by the human eye. These CNN models are trained on massive databases of fake and authentic images, allowing them to detect minor defects. This study presents a Deep Feature Fusion-based Fake Face Detection Generated by Generative Adversarial Networks (DF4D-GGAN) technique for the Consumer Space Environment. The goal of the DF4D-GGAN technique is to detect whether a given face image is real or a DL-generated deepfake. In the DF4D-GGAN technique, a Gaussian filtering (GF) approach is used to preprocess the input images. In addition, the feature fusion process uses EfficientNet-b4 and ShuffleNet. Moreover, hyperparameter selection for the DL models is performed by an improved slime mould algorithm (ISMA). Finally, an extreme learning machine (ELM) classifier is employed to proficiently recognize real and fake images.
To validate the results of the DF4D-GGAN technique, a series of simulations was carried out on benchmark datasets. The results showed that the DF4D-GGAN technique achieves improved results over other models.
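The Gaussian filtering (GF) preprocessing step described above can be sketched as a separable 2-D convolution. The abstract does not specify the kernel size or sigma used in the paper, so the parameters below are illustrative assumptions:

```python
import numpy as np

def gaussian_filter2d(img, sigma=1.0):
    """Smooth a 2-D grayscale image with a separable Gaussian kernel.

    The Gaussian is separable, so the 2-D blur reduces to a 1-D
    convolution over rows followed by one over columns.
    """
    radius = int(3 * sigma)                      # truncate kernel at 3 sigma
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()                       # normalize so intensities are preserved

    padded = np.pad(img, radius, mode="reflect") # reflect-pad to handle borders
    rows = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    out = np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="valid"), 0, rows)
    return out
```

Because the kernel is normalized, a constant image passes through unchanged, while pixel-level noise (the kind of high-frequency artefact that can confound a detector) is attenuated.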
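The final classification stage can be illustrated with a minimal extreme learning machine: random, fixed hidden-layer weights and output weights solved in closed form by least squares. The paper's exact ELM configuration and the dimensionality of the fused EfficientNet-b4/ShuffleNet features are not given in the abstract, so the hidden-layer size and the toy feature vectors below are assumptions for illustration:

```python
import numpy as np

class ELM:
    """Extreme learning machine: random hidden layer, least-squares readout."""

    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        # Hidden weights are drawn once at random and never trained (ELM principle).
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)
        # One-hot targets; output weights via Moore-Penrose pseudo-inverse.
        T = np.eye(int(y.max()) + 1)[y]
        self.beta = np.linalg.pinv(H) @ T
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return (H @ self.beta).argmax(axis=1)
```

In the DF4D-GGAN pipeline the input `X` would be the fused deep-feature vectors (e.g. concatenated EfficientNet-b4 and ShuffleNet embeddings) and `y` the real/fake labels; because only the readout is solved, training amounts to a single pseudo-inverse, which is what makes the ELM stage fast.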
