Complex & Intelligent Systems (May 2024)

ATTACK-COSM: attacking the camouflaged object segmentation model through digital world adversarial examples

  • Qiaoyi Li,
  • Zhengjie Wang,
  • Xiaoning Zhang,
  • Yang Li

DOI
https://doi.org/10.1007/s40747-024-01455-7
Journal volume & issue
Vol. 10, no. 4
pp. 5445–5457

Abstract

The camouflaged object segmentation model (COSM) has recently gained substantial attention due to its remarkable ability to detect camouflaged objects. Nevertheless, deep vision models are widely acknowledged to be susceptible to adversarial examples: imperceptible perturbations that mislead a model into making incorrect predictions. This vulnerability raises significant concerns when deploying COSM in security-sensitive applications, so it is crucial to determine whether this foundational vision model is also susceptible to such attacks. To our knowledge, our work is the first exploration of strategies for attacking COSM with adversarial examples in the digital world. With the primary objective of reversing the predictions for both masked objects and backgrounds, we study the adversarial robustness of COSM in white-box and black-box settings. Beyond this primary objective, our investigation reveals that adversarial attacks can also drive the model to produce any desired mask. The experimental results indicate that COSM exhibits weak robustness and is vulnerable to adversarial example attacks. In camouflaged object segmentation (COS), the projected gradient descent (PGD) attack exhibits stronger attack capability than the fast gradient sign method (FGSM) in both white-box and black-box settings. These findings can help reduce the security risks in applying COSM and pave the way for its broader application.
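To make the attack setting concrete, the sketch below outlines a targeted PGD attack of the kind the abstract compares against FGSM: the perturbation is optimized within an L-infinity budget so that the segmentation model's per-pixel prediction flips toward the inverted object/background mask. The model interface, loss choice, and hyperparameters are illustrative assumptions, not the authors' implementation.

# Minimal, hypothetical sketch of a targeted PGD attack that tries to make a
# segmentation model predict the inverted (object <-> background) mask.
# The model API, BCE loss, and step sizes are assumptions for illustration.
import torch
import torch.nn.functional as F

def pgd_flip_mask(model, image, pred_mask, eps=8 / 255, alpha=2 / 255, steps=10):
    """Targeted PGD: push the model's mask logits toward 1 - pred_mask."""
    target = 1.0 - pred_mask            # inverted mask as the attack target
    adv = image.clone().detach()

    for _ in range(steps):
        adv.requires_grad_(True)
        logits = model(adv)             # assumed to return per-pixel logits
        # Descend the loss to the *inverted* mask (targeted attack).
        loss = F.binary_cross_entropy_with_logits(logits, target)
        grad = torch.autograd.grad(loss, adv)[0]

        with torch.no_grad():
            adv = adv - alpha * grad.sign()                    # targeted step
            adv = image + torch.clamp(adv - image, -eps, eps)  # L_inf budget
            adv = torch.clamp(adv, 0.0, 1.0)                   # valid pixel range
    return adv.detach()

Under these assumptions, FGSM corresponds to the single-step special case (steps=1, alpha=eps), which is the weaker baseline in the abstract's comparison.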

Keywords