IEEE Access (Jan 2021)
Conditional Generative Adversarial Network-Based Image Denoising for Defending Against Adversarial Attack
Abstract
Deep learning has become one of the most popular research topics today. Researchers have developed cutting-edge learning algorithms and frameworks and applied them to a wide range of fields to solve real-world problems. However, deep learning models also carry security risks, among them the adversarial attack discussed in this article. By exploiting the vulnerabilities of a deep learning model, an attacker can maliciously perturb input images to deceive the classification model into producing incorrect classification results. This paper proposes to defend against adversarial attacks by pre-denoising all input images, adding a purification layer in front of the classification model. Our method builds on the basic architecture of conditional generative adversarial networks: it extends the original Pix2pix algorithm with an image perceptual loss to achieve more faithful image recovery. The method restores noise-attacked images to a level close to the original images, thereby preserving the correctness of the classification results. Experimental results show that our approach recovers noisy images quickly and that its recovery accuracy is 20.22% higher than that of the previous state-of-the-art.
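To make the described loss modification concrete, the sketch below shows one plausible way to combine the standard Pix2pix generator objective (adversarial term plus pixel-level L1) with an added perceptual loss. The VGG16 feature extractor, the layer cutoff, and the weighting coefficients lambda_l1 and lambda_perc are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of a Pix2pix generator loss augmented with a perceptual term.
# Assumptions (not from the paper): VGG16 backbone, feature layer cutoff,
# and the lambda weights below.
import torch
import torch.nn as nn
from torchvision import models


class PerceptualLoss(nn.Module):
    """Feature-matching loss computed on a frozen VGG16 (assumed backbone)."""

    def __init__(self, layer_index: int = 16):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
        # Keep only the early convolutional layers as a fixed feature extractor.
        self.features = vgg.features[:layer_index].eval()
        for p in self.features.parameters():
            p.requires_grad = False
        self.criterion = nn.MSELoss()

    def forward(self, generated: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        return self.criterion(self.features(generated), self.features(target))


def generator_loss(disc_out_fake, generated, clean,
                   adv_criterion=nn.BCEWithLogitsLoss(),
                   l1_criterion=nn.L1Loss(),
                   perc_criterion=PerceptualLoss(),
                   lambda_l1: float = 100.0,
                   lambda_perc: float = 10.0) -> torch.Tensor:
    # Adversarial term: push the discriminator to rate the denoised image
    # as real (the standard conditional-GAN objective used by Pix2pix).
    adv = adv_criterion(disc_out_fake, torch.ones_like(disc_out_fake))
    # Pixel-level L1 reconstruction term, as in the original Pix2pix.
    l1 = l1_criterion(generated, clean)
    # Added perceptual term: match deep features of the denoised and clean
    # images so the restoration stays close to the original in feature space.
    perc = perc_criterion(generated, clean)
    return adv + lambda_l1 * l1 + lambda_perc * perc
```

In this formulation the total objective is L = L_cGAN + lambda_l1 * L_L1 + lambda_perc * L_perc; the perceptual term is the only addition relative to the original Pix2pix loss.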
Keywords