IEEE Access (Jan 2024)
Enhancing Security in Real-Time Video Surveillance: A Deep Learning-Based Remedial Approach for Adversarial Attack Mitigation
Abstract
This paper presents a methodology for disrupting deep-learning (DL) video surveillance systems through an adversarial attack framework that induces misclassification of objects in live video and extends such attacks to real-time models. Focusing on the vulnerability of image-classification models, the study evaluates the robustness of a face mask surveillance system against adversarial threats. A real-time detector, built with the ShuffleNet V1 transfer-learning architecture, was trained on a Kaggle face mask dataset. Using a white-box Fast Gradient Sign Method (FGSM) attack with epsilon set to 0.13, the study generated adversarial frames that deceived the face mask detection system and caused incorrect predictions on the video stream. The findings highlight the risk that adversarial attacks pose to critical video surveillance systems, particularly those designed for face mask detection, and underscore the need for proactive defenses before real-world deployment to ensure robustness and reliability against potential adversarial threats.
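The FGSM attack summarized above can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: it uses a simple logistic-regression classifier as a stand-in for the ShuffleNet V1 detector, with hand-derived gradients so the one-step perturbation is explicit; only the epsilon value (0.13) comes from the abstract.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(w, b, x, y, epsilon=0.13):
    """One-step white-box FGSM against a logistic-regression classifier.

    For binary cross-entropy loss, the gradient of the loss w.r.t. the
    input x is (p - y) * w, where p is the predicted probability. The
    adversarial example is x + epsilon * sign(grad), clipped back to the
    valid pixel range. epsilon=0.13 matches the setting in the abstract;
    w, b, x, y are illustrative placeholders.
    """
    p = sigmoid(np.dot(w, x) + b)         # predicted "mask" probability
    grad = (p - y) * w                    # dL/dx for BCE loss
    x_adv = x + epsilon * np.sign(grad)   # FGSM perturbation step
    return np.clip(x_adv, 0.0, 1.0)       # keep pixels in [0, 1]
```

In a real-time setting, the same step would be applied per video frame, with the gradient taken through the full detection network rather than this toy classifier.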
Keywords