Sensors (Sep 2022)

Towards Robustifying Image Classifiers against the Perils of Adversarial Attacks on Artificial Intelligence Systems

  • Theodora Anastasiou,
  • Sophia Karagiorgou,
  • Petros Petrou,
  • Dimitrios Papamartzivanos,
  • Thanassis Giannetsos,
  • Georgia Tsirigotaki,
  • Jelle Keizer

DOI
https://doi.org/10.3390/s22186905
Journal volume & issue
Vol. 22, no. 18
p. 6905

Abstract

Adversarial machine learning (AML) is a class of data manipulation techniques that alter the behavior of artificial intelligence (AI) systems while going unnoticed by humans. These alterations can introduce serious vulnerabilities into mission-critical AI-enabled applications. This work introduces an AI architecture augmented with adversarial examples and defense algorithms to safeguard and secure AI systems and make them more reliable. This is achieved by robustifying deep neural network (DNN) classifiers, with an explicit focus on convolutional neural networks (CNNs) used in non-trivial manufacturing environments prone to noise, vibrations, and errors during data capture and transfer. The proposed architecture enables the imitation of the interplay between an attacker and a defender through the deployment and cross-evaluation of adversarial and defense strategies. The AI architecture enables (i) the creation and use of adversarial examples in the training process, which robustifies the accuracy of the CNNs, (ii) the evaluation of defense algorithms to recover the classifiers’ accuracy, and (iii) the provision of a multiclass discriminator to distinguish and report on non-attacked and attacked data. The experiments show promising results for a hybrid solution that combines the defense algorithms with the multiclass discriminator to revitalize the attacked base models and robustify the DNN classifiers. The proposed architecture is validated in a real manufacturing environment using datasets stemming from actual production lines.
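As a rough illustration of the adversarial-training step described in point (i) of the abstract, the sketch below augments a CNN's training batch with adversarial examples. It assumes FGSM as a representative attack and PyTorch as the framework; the TinyCNN model, the perturbation budget epsilon, and the random batch are hypothetical stand-ins, since the paper's actual models, attacks, and production-line datasets are not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_example(model, x, y, epsilon=0.03):
    """Craft an FGSM adversarial example: perturb x along the sign of the
    loss gradient, with epsilon bounding the perturbation magnitude.
    (Illustrative attack choice; the paper's abstract does not name one.)"""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

class TinyCNN(nn.Module):
    """Hypothetical stand-in for the paper's CNN classifiers."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv = nn.Conv2d(1, 8, kernel_size=3, padding=1)
        self.fc = nn.Linear(8 * 28 * 28, num_classes)

    def forward(self, x):
        h = F.relu(self.conv(x))
        return self.fc(h.flatten(1))

model = TinyCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# One adversarial-training step on a placeholder batch (random data here;
# the real pipeline would draw from the manufacturing datasets).
x = torch.rand(16, 1, 28, 28)
y = torch.randint(0, 10, (16,))
x_adv = fgsm_example(model, x, y)

opt.zero_grad()
# Train on clean and adversarial inputs jointly, as in point (i).
loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
loss.backward()
opt.step()
```

The same crafted examples could also serve as labeled "attacked" inputs for the multiclass discriminator in point (iii), though that component is not sketched here.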

Keywords