IEEE Access (Jan 2023)

Regularizing Label-Augmented Generative Adversarial Networks Under Limited Data

  • Liang Hou

DOI
https://doi.org/10.1109/ACCESS.2023.3259066
Journal volume & issue
Vol. 11
pp. 28966–28976

Abstract


Training generative adversarial networks (GANs) with limited training data is challenging because the original discriminator is prone to overfitting. The recently proposed label augmentation technique complements categorical data augmentation approaches for the discriminator and has shown improved data efficiency in GAN training, but it lacks a theoretical basis. In this paper, we propose a novel regularization approach for the label-augmented discriminator that further improves the data efficiency of GAN training and comes with a theoretical basis. Specifically, the proposed regularization adaptively constrains the predictions of the label-augmented discriminator on generated data to be close to the moving averages of its historical predictions on real data, and vice versa. We theoretically establish a connection between the objective function with the proposed regularization and an $f$-divergence that is more robust than the previously used reverse Kullback-Leibler divergence. Experimental results on various datasets and diverse architectures show that our proposed method significantly improves data efficiency compared to state-of-the-art data-efficient GAN training approaches under limited-data regimes.
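The abstract describes the regularizer only at a high level: predictions of the label-augmented discriminator on generated data are pulled toward moving averages of its historical predictions on real data, and vice versa. Below is a minimal sketch of one way such a prediction-matching penalty could be maintained and applied; the class name, the exponential-moving-average form, the symmetric L2 distance, and all hyperparameters are assumptions for illustration, not the authors' exact formulation.

import torch
import torch.nn.functional as F

class PredictionEMARegularizer:
    """Hypothetical sketch: match current discriminator predictions on fake
    data to an EMA of historical predictions on real data, and vice versa."""

    def __init__(self, num_outputs, decay=0.999, weight=1.0):
        self.decay = decay                         # EMA decay for historical predictions
        self.weight = weight                       # regularization strength
        self.ema_real = torch.zeros(num_outputs)   # running average of D's outputs on real data
        self.ema_fake = torch.zeros(num_outputs)   # running average of D's outputs on fake data

    @torch.no_grad()
    def update(self, pred_real, pred_fake):
        # Update moving averages of the discriminator's mean predictions.
        self.ema_real.mul_(self.decay).add_(pred_real.mean(0), alpha=1 - self.decay)
        self.ema_fake.mul_(self.decay).add_(pred_fake.mean(0), alpha=1 - self.decay)

    def penalty(self, pred_real, pred_fake):
        # Pull predictions on generated data toward the historical average of
        # predictions on real data, and vice versa (L2 used as a placeholder).
        loss_fake = F.mse_loss(pred_fake.mean(0), self.ema_real.detach())
        loss_real = F.mse_loss(pred_real.mean(0), self.ema_fake.detach())
        return self.weight * (loss_fake + loss_real)

In a training loop, `update` would be called after each discriminator step and `penalty` added to the discriminator loss; the adaptive weighting mentioned in the abstract is not modeled here.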

Keywords