Cyber Security and Applications (Jan 2024)

A Secure Deepfake Mitigation Framework: Architecture, Issues, Challenges, and Societal Impact

  • Mohammad Wazid,
  • Amit Kumar Mishra,
  • Noor Mohd,
  • Ashok Kumar Das

Journal volume & issue
Vol. 2
p. 100040

Abstract


Deepfake refers to synthetic media generated through artificial intelligence (AI) techniques. It involves creating or altering video, audio, or images to make them appear to depict something or someone else. Deepfake technology advances in step with the mechanisms used to detect it, producing an ongoing cat-and-mouse game between the creators of deepfakes and the developers of detection methods. As the technology underpinning deepfakes continues to improve, society must confront its repercussions. Educational initiatives, regulatory frameworks, technical solutions, and ethical deliberation are all potential avenues through which this matter can be addressed. Detecting deepfakes is challenging due to their increasingly sophisticated nature, and multiple methods and techniques must be combined to identify them effectively. Mitigating the negative impact of deepfakes likewise requires a combination of technological advancements, awareness, and policy measures. In this paper, we propose a secure deepfake mitigation framework. We also provide a security analysis of the proposed framework via formal security verification using the Scyther tool, which shows that the framework is secure against various cyber attacks. We further discuss the societal impact of deepfake events along with the detection process. We then highlight some AI models used for creating and detecting deepfakes. Finally, we provide a practical implementation of the proposed framework to observe its functioning in a real-world scenario.

Keywords