International Journal of Cognitive Computing in Engineering (Jun 2022)

Exploring generative adversarial networks and adversarial training

  • Afia Sajeeda,
  • B M Mainul Hossain, Ph.D

Journal volume & issue
Vol. 3
pp. 78 – 89

Abstract


Recognized as a realistic image generator, the Generative Adversarial Network (GAN) occupies a prominent place in deep learning. Through generative modeling, the underlying generator learns the real target distribution and outputs fake samples drawn from its learned replica of that distribution. The discriminator attempts to distinguish the fake samples from the real ones and provides feedback that the generator uses to improve its fakes. Recently, GANs have been competing with the state of the art in various tasks, including image processing, missing data imputation, text-to-image translation and adversarial example generation. However, the architecture suffers from training instability, resulting in problems such as non-convergence, mode collapse and vanishing gradients. The research community has been devising modified architectures, alternative loss functions and training techniques to address these concerns. A body of publications has also studied Adversarial Training alongside GANs. This review covers existing work on the instability of GANs from its beginnings, together with a selection of recent publications that illustrate the current direction of research. It also gives insight into studies exploring adversarial attacks and research that combines Adversarial Attacks with GANs. In short, this study intends to guide researchers interested in the improvements made to GANs for stable training in the presence of Adversarial Attacks.
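For context, the generator-discriminator game sketched in the abstract can be written as the standard two-player minimax objective of Goodfellow et al. (2014); the notation below (generator G, discriminator D, data distribution p_data, noise prior p_z) is the conventional formulation and is not taken from this article itself.

\min_{G} \max_{D} \; V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_{z}(z)}\big[\log\big(1 - D(G(z))\big)\big]

In this formulation, the instability issues named above have a direct reading: vanishing gradients occur when D saturates and the \log(1 - D(G(z))) term becomes flat, while mode collapse occurs when G maps many noise samples z onto only a few modes of the data distribution.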

Keywords