Discover Artificial Intelligence (Oct 2022)

An improved real time detection of data poisoning attacks in deep learning vision systems

  • Vijay Raghavan,
  • Thomas Mazzuchi,
  • Shahram Sarkani

DOI
https://doi.org/10.1007/s44163-022-00035-3
Journal volume & issue
Vol. 2, no. 1
pp. 1–17

Abstract


The practice of using deep learning methods in safety-critical vision systems such as autonomous driving has come a long way. As vision systems supported by deep learning methods become ubiquitous, the possible security threats faced by these systems have come into greater focus. As with any artificial intelligence system, these deep neural vision networks are first trained on a data set of interest; once they perform well, they are deployed to a real-world environment. In the training stage, deep learning systems are susceptible to data poisoning attacks. While deep neural networks have proved versatile in solving a host of challenges, these systems have complex data ecosystems, especially in computer vision. In practice, the security threats that arise when training these systems are often ignored when the models are deployed in the real world, yet these threats pose significant risks to the overall reliability of the system. In this paper, we present the fundamentals of data poisoning attacks on deep learning vision systems during training and discuss countermeasures against these types of attacks. In addition, we simulate the risk posed by a real-world data poisoning attack on a deep learning vision system and present a novel algorithm, MOVCE (Model Verification with Convolutional Neural Network and Word Embeddings), which provides an effective countermeasure for maintaining the reliability of the system. The countermeasure described in this paper can be applied to a wide variety of use cases where the risks posed by poisoning the training data are similar.
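The abstract does not detail how MOVCE operates, but its name suggests cross-checking what a trained CNN has learned against semantic relationships encoded in pre-trained word embeddings. The sketch below is one illustrative reading of that idea, not the authors' implementation: all function names, the 0.4 threshold, and the toy data are assumptions. It flags class pairs whose similarity in the CNN's feature space diverges sharply from their similarity in word-embedding space, as might happen when poisoned training data causes two semantically distinct classes to collapse together.

```python
import numpy as np

# Hypothetical sketch of MOVCE-style model verification; names,
# threshold, and data are illustrative assumptions, not the paper's code.

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def class_prototypes(features, labels, num_classes):
    """Mean CNN feature vector per class (features: N x D array)."""
    return np.stack([features[labels == c].mean(axis=0)
                     for c in range(num_classes)])

def verify_model(features, labels, word_vecs, threshold=0.4):
    """Flag class pairs whose CNN-space similarity diverges from their
    word-embedding similarity by more than `threshold` (assumed value)."""
    num_classes = word_vecs.shape[0]
    protos = class_prototypes(features, labels, num_classes)
    suspects = []
    for i in range(num_classes):
        for j in range(i + 1, num_classes):
            gap = abs(cosine(protos[i], protos[j])
                      - cosine(word_vecs[i], word_vecs[j]))
            if gap > threshold:
                suspects.append((i, j, gap))
    return suspects

# Toy usage: random stand-ins for CNN penultimate-layer features and
# word embeddings for three classes; an empty result means no pair
# exceeded the divergence threshold.
rng = np.random.default_rng(0)
features = rng.normal(size=(300, 64))   # N=300 samples, D=64 features
labels = rng.integers(0, 3, size=300)   # class label per sample
word_vecs = rng.normal(size=(3, 50))    # one embedding per class name
print(verify_model(features, labels, word_vecs))
```

In a real setting, `features` would come from the trained model's penultimate layer on held-out data and `word_vecs` from a pre-trained embedding model for the class names, so the comparison requires no trusted copy of the training set.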

Keywords