IEEE Access (Jan 2024)

Enhancing Security in Real-Time Video Surveillance: A Deep Learning-Based Remedial Approach for Adversarial Attack Mitigation

  • Gyana Ranjana Panigrahi,
  • Prabira Kumar Sethy,
  • Santi Kumari Behera,
  • Manoj Gupta,
  • Farhan A. Alenizi,
  • Aziz Nanthaamornphong

DOI
https://doi.org/10.1109/ACCESS.2024.3418614
Journal volume & issue
Vol. 12
pp. 88913–88926

Abstract

This paper introduces a methodology for disrupting deep-learning (DL) surveillance systems through an adversarial attack framework that induces misclassification of objects in live video and extends the attack to real-time models. Focusing on the vulnerability of image-classification models, the study evaluates the robustness of face mask surveillance against adversarial threats. A real-time system based on the ShuffleNet V1 transfer-learning architecture was trained on a Kaggle face mask dataset and evaluated for detection accuracy. Using a white-box Fast Gradient Sign Method (FGSM) attack with epsilon set to 0.13, the study generated adversarial frames that deceived the face mask detector and produced unintended predictions on the video stream. The findings highlight the risks that adversarial attacks pose to critical video surveillance systems, particularly those designed for face mask detection, and the paper emphasizes the need for proactive safeguards before real-world deployment to ensure robustness and reliability against potential adversarial threats.
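The attack step described in the abstract can be illustrated with a minimal sketch. This is not the authors' code: it assumes a PyTorch two-class classifier (mask / no-mask) obtained by transfer learning, substitutes torchvision's ShuffleNet V2 for ShuffleNet V1 (which torchvision does not ship), and uses the epsilon = 0.13 budget quoted in the abstract.

```python
# Sketch of a white-box FGSM attack on a single video frame.
# Assumptions: a fine-tuned two-class head (0 = "with mask", 1 = "without mask"),
# ShuffleNet V2 standing in for ShuffleNet V1, epsilon = 0.13 from the abstract.
import torch
import torch.nn.functional as F
from torchvision import models, transforms

EPSILON = 0.13  # perturbation budget

# Hypothetical face-mask classifier built by transfer learning
model = models.shufflenet_v2_x1_0(weights="DEFAULT")
model.fc = torch.nn.Linear(model.fc.in_features, 2)
model.eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),          # HxWxC uint8 -> 3xHxW float in [0, 1]
    transforms.Resize((224, 224)),
])

def fgsm_frame(frame, true_label: int) -> torch.Tensor:
    """Return an adversarial version of one video frame (NumPy HxWxC array)."""
    x = preprocess(frame).unsqueeze(0)
    x.requires_grad_(True)
    loss = F.cross_entropy(model(x), torch.tensor([true_label]))
    loss.backward()
    # FGSM step: nudge every pixel in the direction that increases the loss
    x_adv = (x + EPSILON * x.grad.sign()).clamp(0.0, 1.0)
    return x_adv.detach()
```

In a real-time setting, each captured frame would be passed through `fgsm_frame` before inference, so the detector receives the perturbed image and, as the abstract reports, misclassifies the masked/unmasked face.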

Keywords