IEEE Access (Jan 2024)
Adversarial Attacks to Manipulate Target Localization of Object Detector
Abstract
Adversarial attacks have gradually become an important branch of artificial intelligence security, and the potential threat posed by adversarial examples cannot be ignored. This paper proposes a new attack mode for the object detection task. We find that by attacking the localization branch of an object detector, an adversarial attack on target bounding boxes can be realized. We also observe that, for a given target in the input image, the regions the detection model attends to for classification and for localization are fixed but distinct. Based on this, we propose a local-perturbation-based adversarial attack on object detection localization: it identifies the key regions that influence target localization and adds adversarial perturbations only to those regions, attacking the target's bounding-box localization while maintaining high stealthiness. Experimental results on the MS COCO dataset and a self-built dataset show that the adversarial examples generated by our method cause the object detector to mislocalize targets. More broadly, studying adversarial example attacks helps in understanding deep networks and developing robust models.
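To illustrate the general idea of a localization-only, locally masked attack, the following is a minimal sketch, not the authors' implementation. It assumes a torchvision Faster R-CNN detector, uses a hand-specified binary mask in place of the paper's identified "key areas," and maximizes only the bounding-box regression losses with a PGD-style update; the function name, mask construction, and hyperparameters (eps, alpha, steps) are illustrative assumptions.

# Hypothetical sketch: masked PGD that perturbs only a local region and
# ascends the detector's localization (box-regression) losses.
import torch
import torchvision

def masked_localization_attack(model, image, target, mask,
                               eps=8 / 255, alpha=2 / 255, steps=10):
    # image: (3,H,W) float tensor in [0,1]; mask: (1,H,W) binary tensor
    # target: dict with 'boxes' (N,4) and 'labels' (N,) for the clean detections
    model.train()  # torchvision detectors return loss dicts only in train mode
    for p in model.parameters():
        p.requires_grad_(False)  # we only need gradients w.r.t. the perturbation
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        adv = (image + delta * mask).clamp(0, 1)
        losses = model([adv], [target])
        # keep only the localization terms, leaving classification untouched
        loss = losses["loss_box_reg"] + losses["loss_rpn_box_reg"]
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # ascend the localization loss
            delta.clamp_(-eps, eps)             # stay inside a small L_inf ball
            delta.grad.zero_()
    return (image + delta.detach() * mask).clamp(0, 1)

if __name__ == "__main__":
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    image = torch.rand(3, 480, 640)  # stand-in for a real input image
    # Assume the detector found one object; restrict the attack to its region.
    target = {"boxes": torch.tensor([[100.0, 120.0, 300.0, 360.0]]),
              "labels": torch.tensor([1])}
    mask = torch.zeros(1, 480, 640)
    mask[:, 120:360, 100:300] = 1.0
    adv_image = masked_localization_attack(model, image, target, mask)

Restricting the perturbation to a mask and to the regression losses mirrors the two ingredients described in the abstract, local perturbation and a localization-only objective, but how the key regions are actually identified is the contribution of the paper and is not reproduced here.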
Keywords