IEEE Access (Jan 2023)

Evaluation of Model Quantization Method on Vitis-AI for Mitigating Adversarial Examples

  • Yuta Fukuda,
  • Kota Yoshida,
  • Takeshi Fujino

DOI
https://doi.org/10.1109/ACCESS.2023.3305264
Journal volume & issue
Vol. 11
pp. 87200–87209

Abstract


Adversarial examples (AEs) are a typical model-evasion attack and a security threat to deep neural networks (DNNs). One countermeasure is adversarial training (AT), which trains a DNN on a dataset containing AEs to achieve robustness against them. However, the robustness obtained by AT decreases greatly when the model's parameters are quantized from 32-bit floats to 8-bit integers so that the DNN can run on edge devices with restricted hardware resources. Preliminary experiments in this study show that the robustness is lost during the fine-tuning process, in which the quantized model is trained on clean samples to reduce quantization error. To address this problem, we propose quantization-aware adversarial training (QAAT), which optimizes the DNN by conducting AT within the quantization flow. In this study, we constructed a QAAT model using Vitis-AI, provided by Xilinx, implemented it on the ZCU104 evaluation board equipped with a Zynq UltraScale+ MPSoC, and demonstrated its robustness against AEs.
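The float-to-integer conversion the abstract refers to can be illustrated with a toy sketch of symmetric per-tensor int8 quantization, the kind of mapping a quantizer such as Vitis-AI applies to model weights. This is a hedged, self-contained example, not the paper's actual quantization scheme; the function names and the rounding/clipping choices here are assumptions for illustration. The round-trip error it exposes is the "quantization error" that fine-tuning (and, in the paper's proposal, QAAT) is meant to compensate for.

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization (toy sketch).

    The scale maps the largest-magnitude weight to 127, and every
    weight is rounded to the nearest integer and clipped to [-128, 127].
    """
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale


def dequantize(q, scale):
    """Map int8 codes back to floats; the difference from the
    original weights is the quantization error."""
    return [v * scale for v in q]


if __name__ == "__main__":
    w = [0.5, -1.0, 0.25]
    q, scale = quantize_int8(w)
    w_hat = dequantize(q, scale)
    # Per-weight quantization error introduced by the int8 round trip.
    err = [abs(a - b) for a, b in zip(w, w_hat)]
    print(q, [round(e, 4) for e in err])
```

In a quantization-aware training loop, a "fake-quantize" round trip like `dequantize(quantize_int8(w)[0], scale)` would be applied in the forward pass so the loss (here, an adversarial loss in QAAT) is computed on the values the deployed integer model will actually use.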

Keywords