IEEE Access (Jan 2022)

A Survey on Efficient Methods for Adversarial Robustness

  • Awais Muhammad
  • Sung-Ho Bae

DOI: https://doi.org/10.1109/ACCESS.2022.3216291
Journal volume & issue: Vol. 10, pp. 118815–118830

Abstract

Deep learning has revolutionized computer vision, with phenomenal success and widespread applications. Despite impressive results on complex problems, neural networks are susceptible to adversarial attacks: small, imperceptible changes in the input that lead these models to incorrect outputs. Adversarial attacks have raised serious concerns, and robustness to them has become a vital issue. Adversarial training, a min-max optimization approach, has shown promise against these attacks. Its computational cost, however, makes it difficult to scale and limits its practical use. Recently, several works have explored different approaches to make adversarial training computationally more affordable. This paper presents a comprehensive survey of efficient adversarial robustness methods, with the aim of providing a holistic outlook that makes future exploration more systematic and exhaustive. We start by mathematically defining the fundamental ideas of adversarially robust learning. We then divide existing approaches into two categories based on their underlying mechanisms: methods that modify the original adversarial training procedure, and techniques that leverage transfer learning to improve efficiency. Finally, based on this overview, we analyze these methods and present an outlook on future directions.
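For reference, the min-max objective the abstract alludes to is commonly written as follows. This is a sketch of the standard adversarial training formulation (in the style of Madry et al.), assuming the usual $\ell_p$-bounded threat model; the notation is illustrative, not taken from the paper itself:

\min_{\theta} \; \mathbb{E}_{(x, y) \sim \mathcal{D}} \Big[ \max_{\|\delta\|_{p} \le \epsilon} \mathcal{L}\big(f_{\theta}(x + \delta),\, y\big) \Big]

Here $f_{\theta}$ is the network, $\mathcal{L}$ the training loss, $\mathcal{D}$ the data distribution, and $\epsilon$ the perturbation budget. The inner maximization is typically approximated with several projected-gradient steps, each requiring an extra forward and backward pass; this multiplied per-iteration cost is the overhead the surveyed efficiency methods aim to reduce.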

Keywords