IEEE Access (Jan 2024)

Adversarial Examples for Image Cropping: Gradient-Based and Bayesian-Optimized Approaches for Effective Adversarial Attack

  • Masatomo Yoshida
  • Haruto Namura
  • Masahiro Okuda

DOI
https://doi.org/10.1109/ACCESS.2024.3415356
Journal volume & issue
Vol. 12
pp. 86541–86552

Abstract

In this study, we propose novel approaches for generating adversarial examples that target machine learning-based image cropping systems. Image cropping is crucial for meeting display-space restrictions and highlighting areas of interest in the content. However, existing image cropping systems often miss user-intended areas, need their inherent biases removed in light of AI fairness, or may expose users to legal risks. To address these issues, our paper introduces approaches for effectively creating adversarial examples in both black-box and white-box settings. In the white-box approach, we use gradient-based perturbations that focus on the model’s blurring layer and target effective areas. For the black-box approach, where gradient information is unavailable, we leveraged pixel attacks with Bayesian optimization and patch attacks to effectively narrow the search space. We also introduce a novel quantitative evaluation method for image cropping that measures shifts in the peaks of gaze saliency maps, reflecting a typical scenario on social network services. Our results suggest that our approaches not only outperform existing methods but also have the potential to effectively address these problems, even for models deployed on actual platforms.
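The abstract does not give implementation details of the white-box attack beyond "gradient-based perturbations." As a rough illustration only, the sketch below shows a single gradient-sign (FGSM-style) perturbation step of the kind such attacks build on; the function name, toy image, and gradient values are our own assumptions, not the paper's method.

```python
def fgsm_step(image, grad, eps):
    """One FGSM-style step: nudge each pixel by eps in the sign
    direction of the loss gradient, clipping values to [0, 1].
    `image` and `grad` are flat lists of floats (a toy stand-in
    for a real image tensor)."""
    sign = lambda g: 1 if g > 0 else (-1 if g < 0 else 0)
    return [min(1.0, max(0.0, x + eps * sign(g)))
            for x, g in zip(image, grad)]

# Toy example: a flattened 4-pixel "image"; the (hypothetical) gradient
# says raising the first two pixels increases the cropping model's loss.
image = [0.5, 0.5, 0.5, 0.5]
grad = [0.8, 0.3, -0.2, 0.0]
adv = fgsm_step(image, grad, eps=0.05)
```

Note that the perturbation is bounded per pixel by `eps`, which is what keeps such adversarial examples visually close to the original image.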
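The evaluation metric is described only as measuring "shifts in the peaks of gaze saliency maps." One plausible reading, sketched below under our own assumptions, is the displacement of the saliency map's argmax location before and after the attack; the function and toy maps are illustrative, not the paper's exact definition.

```python
import math

def peak_shift(saliency_before, saliency_after):
    """Euclidean distance between the peak (argmax) locations of two
    2-D saliency maps, given as nested lists of equal shape."""
    def argmax2d(m):
        best, loc = float("-inf"), (0, 0)
        for i, row in enumerate(m):
            for j, v in enumerate(row):
                if v > best:
                    best, loc = v, (i, j)
        return loc
    (r0, c0) = argmax2d(saliency_before)
    (r1, c1) = argmax2d(saliency_after)
    return math.hypot(r1 - r0, c1 - c0)

before = [[0.1, 0.2, 0.1],
          [0.2, 0.9, 0.2],   # peak at (1, 1)
          [0.1, 0.2, 0.1]]
after  = [[0.8, 0.2, 0.1],   # peak moved to (0, 0)
          [0.2, 0.3, 0.2],
          [0.1, 0.2, 0.1]]
shift = peak_shift(before, after)
```

A larger shift would indicate that the adversarial perturbation moved the region a saliency-driven cropper is likely to select, which matches the social-network cropping scenario the abstract mentions.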

Keywords