Visual Intelligence (Dec 2024)

Patch is enough: naturalistic adversarial patch against vision-language pre-training models

  • Dehong Kong,
  • Siyuan Liang,
  • Xiaopeng Zhu,
  • Yuansheng Zhong,
  • Wenqi Ren

DOI
https://doi.org/10.1007/s44267-024-00066-7
Journal volume & issue
Vol. 2, no. 1
pp. 1–10

Abstract

Vision-language pre-training (VLP) models have demonstrated significant success in various domains, but they remain vulnerable to adversarial attacks. Addressing these adversarial vulnerabilities is crucial for enhancing security in multi-modal learning. Traditionally, adversarial methods that target VLP models perturb images and text simultaneously. However, this approach faces significant challenges. First, adversarial perturbations often fail to translate effectively into real-world scenarios. Second, direct modifications to the text are conspicuously visible. To overcome these limitations, we propose a novel strategy that uses only image patches for attacks, thus preserving the integrity of the original text. Our method leverages prior knowledge from diffusion models to enhance the authenticity and naturalness of the perturbations. Moreover, to optimize patch placement and improve the effectiveness of our attacks, we exploit the cross-attention mechanism, which encapsulates inter-modal interactions, generating attention maps that guide strategic patch placement. Extensive experiments conducted in a white-box setting for image-to-text scenarios show that our proposed method significantly outperforms existing techniques, achieving a 100% attack success rate.
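
The paper's listing here includes no code; the sketch below is only a hedged illustration of the attention-guided placement step mentioned in the abstract. It assumes a cross-attention map has already been extracted from the VLP model (e.g. a 14 × 14 grid of image-token scores) and that an adversarial patch has already been generated; the function name place_patch_by_attention, the nearest-neighbour upsampling, and the sliding-window scoring are illustrative choices, not the authors' implementation.

    import numpy as np

    def place_patch_by_attention(image, patch, attn_map):
        """Paste `patch` onto `image` at the location where the (upsampled)
        cross-attention map is strongest. Rough sketch of attention-guided
        patch placement; `attn_map` is a 2-D array of attention scores."""
        H, W, _ = image.shape
        ph, pw = patch.shape[:2]

        # Upsample the attention map to image resolution (nearest neighbour).
        ys = (np.arange(H) * attn_map.shape[0] // H).clip(0, attn_map.shape[0] - 1)
        xs = (np.arange(W) * attn_map.shape[1] // W).clip(0, attn_map.shape[1] - 1)
        attn = attn_map[np.ix_(ys, xs)]

        # Score every valid top-left corner by the attention mass the patch
        # would cover, using an integral image for the sliding-window sums.
        ii = np.pad(attn, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
        scores = ii[ph:, pw:] - ii[:-ph, pw:] - ii[ph:, :-pw] + ii[:-ph, :-pw]

        # Place the patch at the highest-scoring corner.
        y, x = np.unravel_index(scores.argmax(), scores.shape)
        out = image.copy()
        out[y:y + ph, x:x + pw] = patch
        return out, (y, x)

    # Toy usage with random data standing in for a real image, patch, and map.
    rng = np.random.default_rng(0)
    img = rng.random((224, 224, 3))
    patch = np.zeros((32, 32, 3))      # placeholder adversarial patch
    attn = rng.random((14, 14))        # placeholder cross-attention map
    patched, corner = place_patch_by_attention(img, patch, attn)

In the paper's setting, the attention map would come from the VLP model's image-text cross-attention rather than random values, and the patch would be optimized with the diffusion-model prior described above.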

Keywords