Automatika (Oct 2024)

Securing online integrity: a hybrid approach to deepfake detection and removal using Explainable AI and Adversarial Robustness Training

  • R. Uma Maheshwari,
  • B. Paulchamy

DOI
https://doi.org/10.1080/00051144.2024.2400640
Journal volume & issue
Vol. 65, no. 4
pp. 1517 – 1532

Abstract


As deepfake technology becomes increasingly sophisticated, the proliferation of manipulated images poses a significant threat to online integrity and calls for advanced detection and mitigation strategies. To address this challenge, our study introduces an approach that integrates Explainable AI (XAI) with Adversarial Robustness Training (ART) to improve the detection and removal of deepfake content. The proposed methodology, termed XAI-ART, begins with the creation of a diverse dataset containing both authentic and manipulated images, followed by comprehensive preprocessing and augmentation. Adversarial Robustness Training is then applied to fortify the deep learning model against adversarial manipulations. By incorporating Explainable AI techniques, our approach not only improves detection accuracy but also makes the model's decision-making transparent, offering clear insights into how deepfake content is identified. Experimental results underscore the effectiveness of XAI-ART: the model achieves 97.5% accuracy in distinguishing genuine from manipulated images. The recall of 96.8% indicates that the model captures the majority of deepfake instances, while the F1-score of 97.5% reflects a well-balanced trade-off between precision and recall. Importantly, the model remains robust against adversarial attacks, with accuracy falling only slightly to 96.7% under perturbations.
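The abstract does not describe how the adversarial robustness training component is implemented. As a minimal illustrative sketch only, one common way to realize such training is to mix clean batches with FGSM-style adversarial examples during optimization; the PyTorch code below assumes a generic image classifier, an epsilon of 0.03, and equal loss weighting, all of which are illustrative assumptions rather than the authors' settings.

```python
# Illustrative sketch only: the model, attack, epsilon, and loss weighting
# are assumptions, not details taken from the paper.
import torch
import torch.nn.functional as F


def fgsm_perturb(model, images, labels, epsilon=0.03):
    """Generate FGSM-style adversarial examples (hypothetical settings)."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step in the direction of the sign of the input gradient, then clamp
    # back to the valid pixel range.
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()


def adversarial_training_step(model, optimizer, images, labels, epsilon=0.03):
    """One training step that mixes clean and adversarial batches."""
    model.train()
    adv_images = fgsm_perturb(model, images, labels, epsilon)
    optimizer.zero_grad()
    loss_clean = F.cross_entropy(model(images), labels)
    loss_adv = F.cross_entropy(model(adv_images), labels)
    loss = 0.5 * (loss_clean + loss_adv)  # equal weighting is an assumption
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this kind of setup, the reported F1-score follows the usual definition, F1 = 2PR / (P + R), computed from precision P and recall R on the held-out test set.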

Keywords