Informatika (Sep 2019)

Experimental assessment of adversarial attacks to the deep neural networks in medical image recognition

  • D. M. Voynov,
  • V. A. Kovalev

Journal volume & issue
Vol. 16, no. 3
pp. 14 – 22

Abstract


This paper addresses how the success rate of adversarial attacks on deep neural networks depends on the type of biomedical image and on the control parameters used to generate adversarial examples. With this work we aim to contribute to the accumulation of experimental results on adversarial attacks for the community dealing with biomedical images. White-box Projected Gradient Descent (PGD) attacks were examined on 8 classification tasks and 13 image datasets containing more than 900 000 chest X-ray and histology images of malignant tumors. Increasing the amplitude and the number of iterations of the adversarial perturbation when generating malicious adversarial images raises the fraction of successful attacks for the majority of image types examined in this study. Histology images tend to be less sensitive to growth in the amplitude of adversarial perturbations. It was found that the success rate of attacks dropped dramatically when the original confidence of the predicted image class exceeded 0.95.
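The abstract's two control parameters, perturbation amplitude and number of iterations, correspond to the epsilon budget and step count of the PGD attack. The following sketch is not from the paper: it illustrates the PGD update rule on a simple binary logistic classifier in NumPy (the paper attacks deep networks, where the gradient would come from backpropagation instead); all function and parameter names here are illustrative assumptions.

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.1, alpha=0.02, n_iter=10):
    """Illustrative PGD attack on a binary logistic classifier.

    eps    -- perturbation amplitude (L-infinity budget), the "amplitude"
              control parameter from the abstract
    alpha  -- step size per iteration
    n_iter -- number of iterations, the second control parameter
    """
    x_adv = x.copy()
    for _ in range(n_iter):
        z = x_adv @ w + b
        p = 1.0 / (1.0 + np.exp(-z))            # sigmoid probability
        grad = (p - y) * w                      # d(cross-entropy)/dx
        x_adv = x_adv + alpha * np.sign(grad)   # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project onto eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # keep valid pixel range
    return x_adv

# Usage: attack a point classified as class 1 with high confidence
x = np.array([0.5, 0.5])
w = np.array([2.0, 2.0])
b = 0.0
x_adv = pgd_attack(x, y=1.0, w=w, b=b)
```

The attack ascends the classifier's loss by following the sign of its gradient, then projects back into the epsilon-ball around the original image, so larger `eps` or more iterations give the attacker a larger search space, matching the trend reported in the abstract.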

Keywords