IEEE Access (Jan 2024)

Collaborative-GAN: An Approach for Stabilizing the Training Process of Generative Adversarial Network

  • Mohammed Megahed,
  • Ammar Mohammed

DOI
https://doi.org/10.1109/ACCESS.2024.3457902
Journal volume & issue
Vol. 12
pp. 138716 – 138735

Abstract


Generative Adversarial Networks (GANs) outperform their peers in the family of generative models and are widely used to generate realistic samples in various domains. The basic idea of a GAN is a competition between two networks, a generator and a discriminator. Throughout training, the two networks face several challenges that affect the quality and diversity of the generated samples, notably training instability and mode collapse. Training instability arises from the variance in performance between the generator and the discriminator, while mode collapse occurs when the generator fails to produce diverse samples. One promising technique for overcoming these issues and improving the networks' performance is transfer learning between discriminators as well as between generators. In this regard, the contribution of this paper is fourfold. First, it proposes a novel approach, called Collaborative-GAN, based on transfer learning to mitigate training instability and tackle mode collapse. In the proposed approach, the better-performing network transfers its learned weights to the lower-performing one based on a periodic evaluation during training. Second, the paper proposes a novel method for evaluating the discriminators' performance based on a fuzzy inference system. Third, the paper proposes a method for evaluating the generators' performance based on a series of FID scores that measure the diversity of the generated samples at fixed intervals during training. We apply the proposed approach to two different GAN architectures, which we call Single-GAN and Dual-GANs. In Single-GAN, weights are transferred between identical networks within the same GAN model; in Dual-GANs, weights are transferred between identical networks across different GAN models. Thus, as a fourth contribution, the paper introduces two types of transfer learning for GANs, inter- and intra-transfer learning, depending on the GAN architecture. We validate the proposed approach on three benchmarks: CelebA, CIFAR-10, and Fashion-MNIST. The experimental results indicate that the proposed approach outperforms state-of-the-art GAN models in terms of the FID metric, which measures the diversity of the generated samples. Notably, the proposed approach achieved FID scores of 11.44, 24.19, and 11.21 on the Fashion-MNIST, CIFAR-10, and CelebA datasets, respectively.
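To make the transfer mechanism concrete, the following is a minimal sketch (not the authors' code) of the periodic weight-transfer idea described above, written in PyTorch-style Python for the Dual-GANs (inter-transfer) setting. The helper names train_one_interval and compute_fid, the evaluation interval, and the transfer of generator weights only are assumptions for illustration.

```python
# Hypothetical sketch of Collaborative-GAN-style weight transfer between two GANs.
# train_one_interval and compute_fid are assumed helpers, not part of the paper.
import copy
import torch

def collaborative_training(gan_a, gan_b, data_loader, intervals=20, steps_per_interval=500):
    """Train two GAN models and periodically copy the better-performing
    generator's weights into the lower-performing one (inter-transfer)."""
    for _ in range(intervals):
        # Ordinary adversarial training of each GAN for a fixed number of steps.
        train_one_interval(gan_a, data_loader, steps=steps_per_interval)  # assumed helper
        train_one_interval(gan_b, data_loader, steps=steps_per_interval)  # assumed helper

        # Evaluate each generator's sample diversity/quality (lower FID is better).
        fid_a = compute_fid(gan_a.generator, data_loader)  # assumed helper
        fid_b = compute_fid(gan_b.generator, data_loader)  # assumed helper

        # Transfer learned weights from the better-performing generator
        # to the lower-performing one.
        if fid_a < fid_b:
            gan_b.generator.load_state_dict(copy.deepcopy(gan_a.generator.state_dict()))
        else:
            gan_a.generator.load_state_dict(copy.deepcopy(gan_b.generator.state_dict()))
    return gan_a, gan_b
```

An analogous step could copy discriminator weights using the fuzzy-inference evaluation mentioned in the abstract; that scoring logic is abstracted away here.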

Keywords