Heliyon (Sep 2024)
Detection of real-time deep fakes and face forgery in video conferencing employing generative adversarial networks
Abstract
As facial modification technology advances rapidly, it poses a challenge to methods used to detect fake faces. The advent of deep learning and AI-based technologies has led to the creation of counterfeit photographs that are more difficult to distinguish from real ones. Existing deep fake detection systems excel at spotting fake content of low visual quality that is easily recognized by its visual artifacts. This study employs a unique active forensic strategy, a Compact Ensemble-based Discriminator architecture built on Deep Conditional Generative Adversarial Networks (CED-DCGAN), for identifying real-time deep fakes in video conferencing. The DCGAN focuses on feature-level video deep fake detection, since technologies for creating convincing fakes are improving rapidly. As a first step towards recognizing DCGAN-generated images, real-time video is split into frames containing the essential elements, which are then used to train an ensemble-based discriminator as a classifier. Spectral anomalies are produced by the up-sampling operations that are standard in GAN pipelines for generating large amounts of fake video data. The Compact Ensemble Discriminator (CED) concentrates on the most distinguishing features between natural and synthetic images, giving the generators a robust training signal. Empirical results on publicly available datasets show that the suggested algorithm outperforms state-of-the-art methods: the proposed CED-DCGAN technique successfully detects high-fidelity deep fakes in video conferencing and generalizes well compared with other techniques. The proposed study is implemented in Python, and the accuracy obtained is 98.23 %.
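To illustrate the spectral-anomaly cue mentioned in the abstract (not the authors' actual implementation), the sketch below, under the assumption of nearest-neighbour 2x up-sampling and synthetic noise frames standing in for real video data, measures how up-sampling suppresses genuine high-frequency content and so leaves a frequency-domain fingerprint a discriminator can learn:

```python
import numpy as np

def highfreq_energy_ratio(img: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of an image's spectral power above a radial frequency cutoff."""
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(spectrum) ** 2
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Normalized radial frequency of each FFT bin (0 at the DC center).
    r = np.sqrt(((yy - h / 2) / h) ** 2 + ((xx - w / 2) / w) ** 2)
    return power[r > cutoff].sum() / power.sum()

rng = np.random.default_rng(0)
real = rng.standard_normal((64, 64))       # stand-in for a genuine frame
low = rng.standard_normal((32, 32))
fake = np.kron(low, np.ones((2, 2)))       # nearest-neighbour 2x up-sampling

# Up-sampling attenuates high frequencies, so the "fake" frame carries a
# measurably smaller share of its energy in the high-frequency band.
print(highfreq_energy_ratio(real) > highfreq_energy_ratio(fake))  # True
```

A real detector would of course operate on decoded video frames and learn the discrimination boundary; this sketch only shows why GAN up-sampling produces the spectral anomalies the paper exploits.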