IEEE Access (Jan 2024)

A Systematic Review of Adversarial Machine Learning Attacks, Defensive Controls, and Technologies

  • Jasmita Malik,
  • Raja Muthalagu,
  • Pranav M. Pawar

DOI
https://doi.org/10.1109/ACCESS.2024.3423323
Journal volume & issue
Vol. 12
pp. 99382–99421

Abstract

Adversarial machine learning (AML) attacks have become a major concern for organizations in recent years, as AI has become the industry's focal point and GenAI applications have grown in popularity around the world. Organizations are eager to invest in GenAI applications and develop their own large language models, but they face numerous security and data privacy issues, particularly AML attacks. AML attacks have jeopardized numerous large-scale machine learning models. If carried out successfully, AML attacks can significantly reduce the efficiency and precision of machine learning models, with far-reaching negative consequences for critical healthcare and autonomous transportation systems. In this paper, AML attacks are identified, analyzed, and classified using adversarial tactics and techniques. This research also recommends open-source tools for testing AI and ML models against AML attacks. Furthermore, this research suggests specific mitigating measures against each attack. It aims to serve as guidance for organizations to defend against AML attacks and gain assurance in the security of ML models.
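To make the abstract's central claim concrete, the sketch below (not taken from the paper) shows one classic AML evasion attack, the Fast Gradient Sign Method (FGSM), applied to a hand-built logistic-regression classifier: a small, bounded perturbation of the input flips the model's prediction. All weights and inputs here are hypothetical toy values chosen for illustration.

```python
import numpy as np

# Minimal FGSM-style evasion sketch (illustrative only, not the paper's method):
# perturb an input in the direction that increases the model's loss, using
# only a small step of size eps per feature.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    """Probability that x belongs to class 1 under a logistic model."""
    return sigmoid(w @ x + b)

def fgsm_perturb(w, b, x, y, eps):
    """For logistic loss, d(loss)/dx = (p - y) * w, so step eps along
    the sign of that gradient to maximally hurt the prediction."""
    p = predict(w, b, x)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

# Toy model and input (hypothetical values).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.2, -0.1])   # clean input, true label y = 1
y = 1.0

p_clean = predict(w, b, x)                    # > 0.5: predicts class 1
x_adv = fgsm_perturb(w, b, x, y, eps=0.5)     # small per-feature shift
p_adv = predict(w, b, x_adv)                  # < 0.5: prediction flipped
```

Even this toy case shows why the paper's concern is real: the perturbation is bounded per feature, yet the classifier's decision reverses, which is exactly the accuracy degradation the abstract warns about for deployed models.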

Keywords