Applied Sciences (Mar 2023)

Robustness of Deep Learning Models for Vision Tasks

  • Youngseok Lee,
  • Jongweon Kim

DOI
https://doi.org/10.3390/app13074422
Journal volume & issue
Vol. 13, no. 7
p. 4422

Abstract

In recent years, artificial intelligence technologies for vision tasks have gradually been applied in the physical world, where they have proven vulnerable to adversarial attacks. Improving robustness against such attacks has therefore emerged as an urgent issue in vision tasks. This article provides a historical summary of the evolution of adversarial attacks and defense methods for CNN-based models, and it also introduces studies on brain-inspired models that mimic the visual cortex, which is resistant to adversarial attacks. Because CNN models originated in the application of physiological findings about the visual cortex available at the time, new physiological studies of the visual cortex offer an opportunity to create models that are more robust against adversarial attacks. The authors hope this review will promote interest and progress in artificial intelligence security by improving the robustness of deep learning models for vision tasks.
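To make the notion of an adversarial attack concrete, the following is a minimal NumPy sketch of the Fast Gradient Sign Method (FGSM), one of the earliest attacks covered in surveys of this kind. The toy linear classifier, its weights, and the epsilon value are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon=0.1):
    """FGSM step: shift the input in the sign direction of the
    loss gradient, which increases the loss under an L-inf budget."""
    return x + epsilon * np.sign(grad)

# Toy linear classifier (illustrative): loss = -log sigmoid(w.x)
# for true label 1, so lowering the logit lowers confidence.
w = np.array([0.5, -1.0, 2.0])
x = np.array([1.0, 0.2, 0.3])

logit = w @ x
p = 1.0 / (1.0 + np.exp(-logit))
grad_x = -(1.0 - p) * w          # analytic d(loss)/dx for label 1

x_adv = fgsm_perturb(x, grad_x, epsilon=0.2)
# A small, bounded perturbation lowers the logit for the true label.
print(w @ x_adv < logit)  # → True
```

In a real CNN the gradient would come from backpropagation rather than a closed form, but the attack principle is the same: a perturbation imperceptible to humans can move the input across the model's decision boundary.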

Keywords