IEEE Access (Jan 2024)

Defending CNN Against FGSM Attacks Using Beta-Based Personalized Activation Functions and Adversarial Training

  • Hanen Issaoui,
  • Asma Eladel,
  • Ahmed Zouinkhi,
  • Mourad Zaied,
  • Lazhar Khriji,
  • Sarvar Hussain Nengroo

DOI: https://doi.org/10.1109/ACCESS.2024.3432773
Journal volume & issue: Vol. 12, pp. 138341–138350

Abstract

Machine learning algorithms based on deep neural networks have been widely used in many fields, especially computer vision, with impressive results. However, these models are vulnerable to various attacks, including adversarial ones, which raises concerns about model security and confidentiality. This study proposes a defense strategy that improves the security of white-box models against Fast Gradient Sign Method (FGSM)-based adversarial attacks and leverages adversarial training to further improve their robustness. Specifically, we propose a CNN model built on personalized activation functions derived from the Beta function and its primitive. The resulting low-degree polynomials are used to approximate the ReLU, Sigmoid, and Tanh activation functions. Batch normalization is employed to significantly improve the learning capacity of the network. The results obtained on the MNIST dataset demonstrate the effectiveness of the proposed model.
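To make the approach concrete, the sketch below illustrates the two ingredients the abstract names: a low-degree polynomial activation and FGSM-based adversarial training. The abstract does not give the Beta-derived polynomial coefficients, so `PolyActivation` stands in with a hypothetical least-squares fit of a degree-4 polynomial to ReLU; only the FGSM update rule (x + ε·sign(∇ₓL)) follows the standard published formulation. This is a minimal PyTorch sketch under those assumptions, not the authors' implementation.

```python
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

class PolyActivation(nn.Module):
    """Low-degree polynomial activation. The paper derives coefficients
    from the Beta function and its primitive; here we substitute a
    hypothetical degree-4 least-squares fit to ReLU on [-4, 4]."""
    def __init__(self, degree=4, lo=-4.0, hi=4.0):
        super().__init__()
        x = np.linspace(lo, hi, 1001)
        y = np.maximum(x, 0.0)  # ReLU target
        coeffs = np.polyfit(x, y, degree)  # highest power first
        self.register_buffer("coeffs",
                             torch.tensor(coeffs, dtype=torch.float32))

    def forward(self, x):
        # Evaluate the polynomial with Horner's rule.
        out = torch.zeros_like(x)
        for c in self.coeffs:
            out = out * x + c
        return out

def fgsm(model, x, y, eps):
    """FGSM: perturb x by eps in the direction of the sign of the
    loss gradient, then clamp to the valid image range [0, 1]."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

def adversarial_training_step(model, optimizer, x, y, eps=0.25):
    """One training step on an even mix of clean and FGSM examples."""
    model.train()
    x_adv = fgsm(model, x, y, eps)
    optimizer.zero_grad()  # clear gradients accumulated by fgsm()
    loss = 0.5 * (F.cross_entropy(model(x), y)
                  + F.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()
```

In a CNN matching the paper's recipe, `PolyActivation` would replace the ReLU/Sigmoid/Tanh layers, with a `nn.BatchNorm2d` layer after each convolution; the same polynomial evaluation could approximate Sigmoid or Tanh by changing the fitting target.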

Keywords