网络与信息安全学报 (Chinese Journal of Network and Information Security) (Aug 2023)

Survey on adversarial attacks and defense of face forgery and detection

  • Shiyu HUANG, Feng YE, Tianqiang HUANG, Wei LI, Liqing HUANG, Haifeng LUO

DOI
https://doi.org/10.11959/j.issn.2096-109x.2023049
Journal volume & issue
Vol. 9, no. 4
pp. 1 – 15

Abstract


Face forgery and detection has become a research hotspot. Face forgery methods can produce fake face images and videos. Some malicious videos, often targeting celebrities, are widely circulated on social networks, damaging the reputation of victims and causing significant social harm. It is therefore crucial to develop effective methods for detecting fake videos. In recent years, deep learning has made both face forgery and its detection more accessible: deep learning-based forgery methods can generate highly realistic faces, while deep learning-based fake face detectors achieve higher accuracy than traditional approaches. However, deep learning models have been shown to be vulnerable to adversarial examples, which can degrade their performance. Consequently, an adversarial game has emerged in the field of face forgery and detection, adding complexity to the original task: both forgers and detectors now need to consider the adversarial security of their methods. The combination of deep learning methods and adversarial examples is thus the future trend in this research field, with a particular focus on adversarial attack and defense in face forgery and detection. The concept of face forgery and detection and the current mainstream methods were introduced. Classic adversarial attack and defense methods were reviewed. The application of adversarial attack and defense methods to face forgery and detection was described, and current research trends were analyzed. Moreover, the challenges of adversarial attack and defense for face forgery and detection were summarized, and future development directions were discussed.
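To make the notion of an adversarial example concrete, below is a minimal, illustrative PyTorch sketch of the fast gradient sign method (FGSM), one of the classic attacks the survey alludes to. It is not taken from the paper; the stub classifier, input sizes, and epsilon value are placeholder assumptions chosen only to show how a small, bounded perturbation can be crafted against a fake-face detector.

```python
# Illustrative FGSM sketch (assumptions: a toy fake/real classifier stub,
# 64x64 RGB inputs in [0, 1], epsilon = 0.03). Not the surveyed method.
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    """Return x_adv with ||x_adv - x||_inf <= epsilon that increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x), y)
    loss.backward()
    # Step in the direction of the sign of the input gradient, then clamp to valid pixel range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Placeholder model standing in for a deep forgery detector (hypothetical).
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 2))
    x = torch.rand(1, 3, 64, 64)   # a face image in [0, 1]
    y = torch.tensor([1])          # ground-truth label, e.g. "fake"
    x_adv = fgsm_attack(model, x, y)
    print((x_adv - x).abs().max()) # perturbation stays within epsilon
```

In the survey's framing, a forger could apply such a perturbation to evade a detector, while a defender might counter it with adversarial training or input purification; the sketch only shows the attack side.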
