IEEE Access (Jan 2024)

Entropy-Boosted Adversarial Patch for Concealing Pedestrians in YOLO Models

  • Chih-Yang Lin,
  • Tun-Yu Huang,
  • Hui-Fuang Ng,
  • Wei-Yang Lin,
  • Isack Farady

DOI
https://doi.org/10.1109/ACCESS.2024.3371507
Journal volume & issue
Vol. 12
pp. 32772–32779

Abstract

In recent years, rapid advancements in hardware and deep learning technologies have paved the way for the extensive integration of image recognition and object detection into daily applications. As reliance on deep learning grows, so do concerns about the vulnerabilities of deep neural networks, emphasizing the need to address potential security issues. This research unveils the Entropy-Boosted Loss, a novel loss function tailored to generate adversarial patches resembling potted plants. Designed specifically for the YOLOv2, YOLOv3, and YOLOv4 object detectors, these patches impair the detectors' ability to identify individuals. By increasing the uncertainty of the predicted class probability distribution, a person wearing an adversarial patch crafted with our proposed loss function becomes less identifiable to YOLO detectors, achieving the desired adversarial effect. This underscores the importance of understanding the vulnerability of YOLO models to adversarial attacks, particularly for individuals aiming to conceal their presence from camera detection. Our experiments, conducted on the INRIA person dataset and under real-time network camera conditions, confirm the effectiveness of our method. Moreover, our technique demonstrates substantial success in virtual try-on environments.
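The core idea stated in the abstract, increasing the uncertainty of the detector's class probabilities, can be sketched as a loss equal to the negative Shannon entropy of the predicted class distribution: minimizing it during patch optimization pushes the distribution toward uniform. This is a minimal illustrative sketch, not the paper's exact formulation; the function name `entropy_boost_loss` and the use of raw class logits are assumptions.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over a 1-D array of class logits."""
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def entropy_boost_loss(class_logits):
    """Negative Shannon entropy of the class distribution (hypothetical sketch).

    Minimizing this value maximizes the entropy H(p), i.e. the detector's
    class uncertainty, which is the effect the abstract describes. The
    paper's actual loss may combine this with other terms (e.g. objectness
    or printability losses).
    """
    p = softmax(np.asarray(class_logits, dtype=np.float64))
    return float(np.sum(p * np.log(p + 1e-12)))  # equals -H(p)
```

In a full attack pipeline, this term would be evaluated on the detector's outputs for the patched image and back-propagated into the patch pixels; a confident (peaked) prediction yields a loss near 0, while a maximally uncertain (uniform) prediction over K classes yields the minimum value, -log K.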

Keywords