Engineering, Technology & Applied Science Research (Jun 2024)

Enhancing Neural Network Resilience against Adversarial Attacks based on FGSM Technique

  • Mohamed Ben Ammar,
  • Refka Ghodhbani,
  • Taoufik Saidani

DOI
https://doi.org/10.48084/etasr.7479
Journal volume & issue
Vol. 14, no. 3

Abstract

The robustness and reliability of neural network architectures are put to the test by adversarial attacks, which produce inaccurate outputs and degrade the efficiency of applications running on Internet of Things (IoT) devices. This study investigates the severe repercussions that can emerge from attacks on neural network topologies and their implications for embedded systems. In particular, it examines the degree to which a neural network trained on the MNIST dataset is susceptible to adversarial attack strategies such as the Fast Gradient Sign Method (FGSM). Experiments were conducted to evaluate the effectiveness of various attack strategies in compromising the accuracy and dependability of the network. This study also examines ways to improve the resilience of a neural network structure through adversarial training methods, with particular emphasis on the APE-GAN approach. Identifying the vulnerabilities of neural networks and developing efficient protection mechanisms can improve the security of embedded applications, especially those on IoT chips with limited resources.
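The FGSM attack referenced in the abstract perturbs an input in the direction of the sign of the loss gradient with respect to that input: x_adv = x + ε · sign(∇ₓL(x, y)). A minimal sketch of this idea, using a hypothetical two-feature logistic-regression model rather than the paper's MNIST network, might look like the following (all weights and inputs here are illustrative assumptions, not values from the study):

```python
import numpy as np

def sigmoid(z):
    """Logistic function, mapping a logit to a probability."""
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """FGSM for a binary logistic-regression model p = sigmoid(w.x + b)
    trained with cross-entropy loss L.

    For this model the input gradient has the closed form
    dL/dx = (p - y) * w, so the attack is
    x_adv = x + eps * sign(dL/dx).
    """
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y) * w              # dL/dx for sigmoid + cross-entropy
    return x + eps * np.sign(grad_x)  # one-step perturbation of size eps

# Illustrative model and input (assumed values, not from the paper).
w = np.array([1.0, -1.0])
b = 0.0
x = np.array([0.6, 0.2])   # clean input, true label y = 1
y = 1.0

p_clean = sigmoid(np.dot(w, x) + b)        # > 0.5: correctly classified
x_adv = fgsm_perturb(x, y, w, b, eps=0.25)
p_adv = sigmoid(np.dot(w, x_adv) + b)      # < 0.5: prediction flipped
```

Even this tiny example shows the effect the paper measures at scale: a small, bounded perturbation (ε = 0.25 per feature) is enough to flip the model's decision, which is why defenses such as adversarial training with APE-GAN are needed.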

Keywords