Jisuanji Kexue [Computer Science] (Oct 2021)
Feature Transformation for Defending Adversarial Attack on Image Retrieval
Abstract
Adversarial attacks were first studied in image classification, where imperceptible perturbations are generated to mislead the predictions of a convolutional neural network. Recently, they have also been extensively explored in image retrieval, and it has been shown that popular image retrieval models are vulnerable: small perturbations can cause them to return images irrelevant to the query. In particular, landmark image retrieval is a research hotspot, as an explosive volume of landmark images is uploaded to the Internet by people using various smart devices while touring cities. This paper makes the first attempt to investigate a training-free defense against adversarial attacks on city landmark image retrieval models. Specifically, we propose to perform image feature transformation at inference time to eliminate adversarial effects based on basic image features. Our method explores four feature transformation schemes, resize, padding, total variance minimization and image quilting, which are applied to a query image before it is fed to a retrieval model. Our defense has the following advantages: 1) no fine-tuning or incremental training is required, 2) very little additional computation is needed, and 3) multiple schemes can be flexibly ensembled. Extensive experiments show that the proposed transformation strategies are effective at defending against existing adversarial attacks on state-of-the-art city landmark image retrieval models.
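The pipeline described above can be illustrated with a minimal NumPy sketch of two of the four transformations (random resize and random padding), composed on a query image before retrieval. The output size, jitter range, and padding budget below are illustrative assumptions, not the paper's settings, and nearest-neighbor interpolation stands in for whatever resampling the retrieval model actually uses.

```python
import numpy as np

def nn_resize(img, size):
    """Nearest-neighbor resize of an HxWxC array to size x size."""
    h, w = img.shape[:2]
    rows = (np.arange(size) * h) // size
    cols = (np.arange(size) * w) // size
    return img[rows][:, cols]

def random_resize(img, out_size=224, jitter=32, rng=None):
    """Resize to a random intermediate size, then back to out_size.
    Resampling disrupts the pixel-aligned adversarial pattern."""
    rng = rng if rng is not None else np.random.default_rng()
    mid = int(rng.integers(out_size - jitter, out_size + jitter + 1))
    return nn_resize(nn_resize(img, mid), out_size)

def random_pad(img, out_size=224, budget=16, rng=None):
    """Shrink the image, then pad with edge pixels at a random offset
    so the content position shifts unpredictably."""
    rng = rng if rng is not None else np.random.default_rng()
    inner = nn_resize(img, out_size - budget)
    top = int(rng.integers(0, budget + 1))
    left = int(rng.integers(0, budget + 1))
    return np.pad(inner,
                  ((top, budget - top), (left, budget - left), (0, 0)),
                  mode="edge")

# Defend a (possibly adversarial) query image, then feed the result
# to the (unmodified) retrieval model.
query = (np.random.default_rng(0)
         .integers(0, 256, (300, 400, 3)).astype(np.uint8))
defended = random_pad(random_resize(query))
```

Because both transformations act only on the input image, the retrieval model itself needs no fine-tuning, and further schemes (e.g. total variance minimization or image quilting) can be chained in the same way to form an ensemble.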
Keywords