IEEE Access (Jan 2024)

The Impact of Simultaneous Adversarial Attacks on Robustness of Medical Image Analysis

  • Shantanu Pal
  • Saifur Rahman
  • Maedeh Beheshti
  • Ahsan Habib
  • Zahra Jadidi
  • Chandan Karmakar

DOI: https://doi.org/10.1109/ACCESS.2024.3396566
Journal volume & issue: Vol. 12, pp. 66478–66494

Abstract

Deep learning models are widely used in healthcare systems, yet they are themselves vulnerable to adversarial attacks. Because of their black-box nature, such attacks are difficult to detect, and given the sensitivity of medical data, they pose serious security and privacy threats. In this paper, we provide a comprehensive analysis of adversarial attacks on medical image analysis using two attack methods, FGSM and PGD, applied to either the entire image or part of it. The partial attacks vary in size and are applied either individually or in combination. We use three medical datasets to examine the impact of these attacks on model accuracy and robustness. Finally, we provide a complete implementation of the attacks and discuss the results. Our findings reveal the weaknesses and robustness of four deep learning models and show how varying perturbations influence model behaviour with respect to specific image regions and critical features.
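The paper's own implementation is not reproduced here; the following is a minimal PyTorch sketch of the two attack methods the abstract names, FGSM and PGD, with an optional binary mask standing in for the partial (region-restricted) attacks. The function names, signatures, and masking mechanism are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon, mask=None):
    """One-step FGSM: perturb the input by epsilon in the direction of the
    loss gradient's sign. A binary `mask` (assumed here) restricts the
    perturbation to a region, approximating a partial attack."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    perturbation = epsilon * image.grad.sign()
    if mask is not None:
        perturbation = perturbation * mask  # zero the attack outside the region
    return (image + perturbation).clamp(0, 1).detach()

def pgd_attack(model, image, label, epsilon, alpha, steps, mask=None):
    """PGD: iterated FGSM-style steps of size alpha, each projected back
    into the epsilon-ball around the original image."""
    original = image.clone().detach()
    adv = original.clone()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), label)
        grad, = torch.autograd.grad(loss, adv)
        step = alpha * grad.sign()
        if mask is not None:
            step = step * mask
        adv = adv.detach() + step
        adv = original + (adv - original).clamp(-epsilon, epsilon)  # projection
        adv = adv.clamp(0, 1)
    return adv.detach()
```

For a full-image attack the mask is omitted; passing a binary tensor that is 1 only over a patch restricts the perturbation to that region, loosely mirroring the individual and combined partial attacks the abstract describes.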

Keywords