Applied Sciences (Sep 2024)

Invisible Threats in the Data: A Study on Data Poisoning Attacks in Deep Generative Models

  • Ziying Yang,
  • Jie Zhang,
  • Wei Wang,
  • Huan Li

DOI: https://doi.org/10.3390/app14198742
Journal volume & issue: Vol. 14, no. 19, p. 8742

Abstract

Deep Generative Models (DGMs), a state-of-the-art technology in artificial intelligence, are widely applied across many domains. However, their security concerns have become increasingly prominent, particularly with regard to invisible backdoor attacks. Most existing backdoor attack methods rely on visible triggers that are easy to detect and defend against. Although some studies have explored invisible backdoor attacks, they typically require modifying or adding parameters to the model's generator, which limits their practicality. In this study, we overcome these limitations by proposing a novel invisible backdoor attack. We employ an encoder–decoder network to ‘poison’ the data during the preparation stage, without modifying the model itself. Through careful design, the trigger remains visually undetectable, substantially improving the attack's stealthiness and success rate. This attack therefore poses a serious threat to the security of DGMs and presents new challenges for defense mechanisms. We urge researchers to intensify their investigation of DGM security issues and to collaboratively promote the healthy and secure development of DGMs.
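Since the abstract only sketches the mechanism, the following is a minimal, hypothetical PyTorch illustration of the data-poisoning idea it describes: an encoder hides a secret bit-string trigger in training images as an imperceptible residual, a paired decoder can later recover it, and only the dataset is altered, never the generative model. All names (TriggerEncoder, TriggerDecoder, poison_dataset), the layer choices, and the perturbation scale are assumptions for illustration, not details taken from the paper.

# Minimal sketch (assumptions: images scaled to [0, 1], a fixed 32-bit secret
# trigger, PyTorch). Illustrative only; not the authors' implementation.
import torch
import torch.nn as nn

class TriggerEncoder(nn.Module):
    """Embeds a secret bit vector into an image as a small residual."""
    def __init__(self, secret_len=32, channels=3):
        super().__init__()
        self.fc = nn.Linear(secret_len, 32 * 32)          # spread secret spatially
        self.net = nn.Sequential(
            nn.Conv2d(channels + 1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, image, secret):
        b, _, h, w = image.shape
        plane = self.fc(secret).view(b, 1, 32, 32)
        plane = nn.functional.interpolate(plane, size=(h, w), mode="bilinear",
                                          align_corners=False)
        residual = self.net(torch.cat([image, plane], dim=1))
        return (image + 0.01 * residual).clamp(0, 1)      # keep perturbation tiny

class TriggerDecoder(nn.Module):
    """Recovers the embedded bit vector from a (possibly generated) image."""
    def __init__(self, secret_len=32, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, secret_len),
        )

    def forward(self, image):
        return self.net(image)                            # logits over secret bits

def poison_dataset(images, secret, encoder, rate=0.1):
    """Replace a fraction of clean images with trigger-carrying ones.
    The generative model is later trained on the result, unchanged."""
    poisoned = images.clone()
    n = int(rate * len(images))
    idx = torch.randperm(len(images))[:n]
    with torch.no_grad():
        poisoned[idx] = encoder(images[idx], secret.expand(n, -1))
    return poisoned

if __name__ == "__main__":
    secret = (torch.rand(1, 32) > 0.5).float()            # attacker's hidden trigger
    enc, dec = TriggerEncoder(), TriggerDecoder()
    clean = torch.rand(16, 3, 64, 64)                     # stand-in training batch
    dirty = poison_dataset(clean, secret, enc, rate=0.25)
    print((dirty - clean).abs().max())                    # perturbation stays small
    print(dec(dirty[:1]).shape)                           # decoder output: (1, 32)

In practice the encoder and decoder would first be trained jointly (reconstruction loss on the image, bit-recovery loss on the secret) before poisoning, so that the trigger survives the generative model's training; that training loop is omitted here for brevity.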

Keywords