IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (Jan 2024)

Generating Adversarial Examples Against Remote Sensing Scene Classification via Feature Approximation

  • Rui Zhu,
  • Shiping Ma,
  • Jiawei Lian,
  • Linyuan He,
  • Shaohui Mei

DOI
https://doi.org/10.1109/JSTARS.2024.3399780
Journal volume & issue
Vol. 17
pp. 10174–10187

Abstract

The existence of adversarial examples highlights the vulnerability of deep neural networks: adding well-designed perturbations to an original image can change the recognition result. This poses a serious challenge to remote sensing image (RSI) scene classification. Because RSI scene classification relies primarily on the spatial and texture features of images, attacks in the feature domain are especially effective. In this study, we introduce the feature approximation (FA) strategy, which generates adversarial examples by approximating the features of clean images to those of virtual images designed to belong to no category. Our aim is to attack image classification models trained on RSI and to uncover the common vulnerabilities of these models. Specifically, we benchmark the FA attack using both featureless images and images generated via data augmentation. We then extend the FA attack to a multimodel FA (MFA) attack, improving its transferability. Finally, we show that the FA strategy is also effective for targeted attacks, by approximating the features of the input clean image to those of a target-category image. Extensive experiments on the remote sensing classification datasets UC Merced and AID demonstrate the effectiveness of the proposed methods. The FA attack exhibits remarkable attack performance, and the MFA attack exceeds the success rate of existing advanced untargeted black-box attacks by an average of more than 15%. The FA attack also outperforms multiple existing targeted white-box attacks.
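For intuition, the sketch below illustrates the core feature-approximation idea described in the abstract: iteratively perturb a clean image so that its intermediate features move toward those of a "virtual" image, under an L-infinity budget. It is a minimal sketch only; the choice of PyTorch, ResNet-18, the penultimate-layer features, the MSE loss, and all hyperparameters are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F
import torchvision.models as models

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).to(device).eval()

# Feature extractor: all layers up to (but excluding) the final FC classifier.
feature_extractor = torch.nn.Sequential(*list(model.children())[:-1]).to(device).eval()

def fa_attack(x_clean, x_virtual, epsilon=8 / 255, alpha=1 / 255, steps=50):
    """Push the features of x_clean toward those of a virtual image
    (e.g., a featureless image, assumed not to belong to any category),
    keeping the perturbation within an L-infinity ball of radius epsilon."""
    with torch.no_grad():
        target_feat = feature_extractor(x_virtual)
    x_adv = x_clean.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        feat = feature_extractor(x_adv)
        # Minimize the feature-space distance to the virtual image's features.
        loss = F.mse_loss(feat, target_feat)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()                            # descent step
            x_adv = x_clean + (x_adv - x_clean).clamp(-epsilon, epsilon)   # project to budget
            x_adv = x_adv.clamp(0, 1)                                      # valid pixel range
    return x_adv.detach()

# Example usage with placeholder tensors: a random "clean" RSI and a uniform
# gray image as a crude featureless virtual target (both are assumptions).
x = torch.rand(1, 3, 224, 224, device=device)
v = torch.full_like(x, 0.5)
x_adv = fa_attack(x, v)

Under this reading, the MFA extension would amount to summing the feature loss over several surrogate models to improve transferability, and the targeted variant would replace the virtual image with an image from the desired target category.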
