Applied Sciences (Feb 2022)

AT-BOD: An Adversarial Attack on Fool DNN-Based Blackbox Object Detection Models

  • Ilham A. Elaalami
  • Sunday O. Olatunji
  • Rachid M. Zagrouba

DOI
https://doi.org/10.3390/app12042003
Journal volume & issue
Vol. 12, no. 4
p. 2003

Abstract

Object recognition is a fundamental concept in computer vision. Object detection models have recently played a vital role in various applications, including real-time and safety-critical systems such as camera surveillance and self-driving cars. Scientific research has proven that object detection models are prone to adversarial attacks. Although several methods have been proposed in the literature, they either target white-box models or task-specific black-box models and do not generalize to other detectors. In this paper, we propose AT-BOD, a new adversarial attack against black-box object detectors. AT-BOD can fool both single-stage and multi-stage detectors, using an optimization algorithm that generates adversarial examples based only on the detector's predictions. AT-BOD works in two ways: reducing the confidence score to mislead the model into making a wrong decision, or hiding the object from the detection model entirely. Our solution achieved a fooling rate of 97% and a false-negative increase of 99% on the YOLOv3 detector, and a fooling rate of 61% and a false-negative increase of 57% on the Faster R-CNN detector. Under AT-BOD, the detection accuracy of YOLOv3 and Faster R-CNN dropped dramatically, reaching ≤1% and ≤3%, respectively.
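The abstract describes a query-only, score-based attack: adversarial examples are generated from the detector's predictions alone, without access to gradients. The sketch below illustrates that general idea with a simple greedy random-search loop that lowers the detector's highest confidence score; it is not the authors' AT-BOD algorithm, and run_detector is a hypothetical placeholder standing in for a black-box detector such as YOLOv3 or Faster R-CNN.

```python
# Illustrative sketch of a score-based black-box attack loop (not AT-BOD itself).
# A candidate perturbation is kept only if it reduces the detector's maximum
# confidence score, using nothing but the model's predictions.
import numpy as np

def run_detector(image: np.ndarray) -> np.ndarray:
    """Hypothetical placeholder for a black-box detector's prediction API:
    returns an array of per-box confidence scores for the given image."""
    rng = np.random.default_rng(int(abs(image).sum() * 1e6) % (2**32))
    return rng.random(5)

def random_search_attack(image, eps=8 / 255, iters=200, seed=0):
    """Greedy random search under an L_inf budget of eps: accept a candidate
    perturbation only if it lowers the maximum detection confidence."""
    rng = np.random.default_rng(seed)
    delta = np.zeros_like(image)
    best_score = run_detector(np.clip(image + delta, 0.0, 1.0)).max()
    for _ in range(iters):
        candidate = delta + rng.uniform(-eps / 10, eps / 10, size=image.shape)
        candidate = np.clip(candidate, -eps, eps)   # stay within the L_inf budget
        adv = np.clip(image + candidate, 0.0, 1.0)  # keep pixels in valid range
        score = run_detector(adv).max()
        if score < best_score:                      # keep only improving moves
            best_score, delta = score, candidate
    return np.clip(image + delta, 0.0, 1.0), best_score

if __name__ == "__main__":
    clean = np.random.default_rng(1).random((32, 32, 3))
    adv, score = random_search_attack(clean)
    print(f"max detection confidence after attack: {score:.3f}")
```

Driving the confidence below the detector's decision threshold corresponds to the two failure modes mentioned in the abstract: the detector either assigns the wrong label with low confidence or misses the object altogether, which is what the fooling-rate and false-negative-increase metrics measure.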

Keywords