Jisuanji Kexue [Computer Science] (Feb 2022)

Survey of Research Progress on Adversarial Examples in Images

  • CHEN Meng-xuan, ZHANG Zhen-yong, JI Shou-ling, WEI Gui-yi, SHAO Jun

DOI
https://doi.org/10.11896/jsjkx.210800087
Journal volume & issue
Vol. 49, no. 2
pp. 92–106

Abstract

With the development of deep learning theory, deep neural networks have achieved a series of breakthroughs and have been widely applied in many fields. Among these applications, those in the image domain, such as image classification, are the most popular. However, research shows that deep neural networks suffer from many security risks, especially the threat of adversarial examples, which seriously hinders the deployment of image classification. To address this challenge, many recent research efforts have been devoted to adversarial examples in images, and a large body of results has emerged. This paper first introduces the relevant concepts and terminology of adversarial examples in images, then reviews the adversarial attack methods and defense methods in the existing literature. In particular, it classifies attacks according to the attacker's capability and defenses according to their underlying approach, and analyzes the characteristics of and connections among the different categories. Next, it briefly describes adversarial attacks in the physical world. Finally, it discusses the open challenges of adversarial examples in images and potential future research directions.
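To make the notion of an adversarial example concrete, the sketch below applies the Fast Gradient Sign Method (FGSM), one of the canonical attacks covered in surveys of this area, to a toy logistic-regression classifier. This is an illustrative NumPy sketch, not code from the paper; all weights, inputs, and the `eps` budget are made-up values.

```python
import numpy as np

def sigmoid(z):
    # Logistic function: maps a score to a probability in (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """One-step FGSM on binary cross-entropy loss.

    x: input vector, y: true label (0 or 1), eps: perturbation budget.
    Returns the adversarial input x + eps * sign(grad_x loss).
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w          # d(BCE)/dx for logistic regression
    return x + eps * np.sign(grad_x)

# Illustrative model and input (hypothetical values).
w = np.array([1.5, -2.0, 0.5])
b = 0.1
x = np.array([0.4, -0.3, 0.8])
y = 1  # true class

x_adv = fgsm_perturb(x, y, w, b, eps=0.25)
p_clean = sigmoid(w @ x + b)   # ~0.85 confidence in the true class
p_adv = sigmoid(w @ x_adv + b) # ~0.67 after the attack
print(p_clean, p_adv)
```

The same one-line update, applied to the pixel gradient of an image classifier's loss, produces perturbations that are nearly invisible to humans yet flip the model's prediction, which is the core threat this survey addresses.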

Keywords