Applied Sciences (Nov 2021)

A Generative Adversarial Network Structure for Learning with Small Numerical Data Sets

  • Der-Chiang Li,
  • Szu-Chou Chen,
  • Yao-San Lin,
  • Kuan-Cheng Huang

DOI
https://doi.org/10.3390/app112210823
Journal volume & issue
Vol. 11, no. 22
p. 10823

Abstract


In recent years, generative adversarial networks (GANs) have been proposed to generate simulated images, and some studies have applied GANs to the analysis of numerical data in many fields, such as the prediction of building energy consumption and the prediction and identification of liver cancer stages. However, these studies rely on a sufficient volume of data. In the current era of globalization, the demand for rapid decision-making is increasing, but the data that can be collected within a short time are often scarce, so machine learning may not provide precise results. Obtaining more information from a small number of samples has therefore become an important issue. This study modified the generative adversarial network structure for learning with small numerical data sets, using the Wasserstein GAN (WGAN) as the base architecture and mega-trend-diffusion (MTD) to bound the virtual samples that the GAN generates. The proposed structure was verified with two data sets from the UC Irvine Machine Learning Repository, and its performance was evaluated using three criteria: accuracy, standard deviation, and p-value. The experimental results show that, with this improved GAN architecture (WGAN_MTD), even small-sample data can be used to generate virtual samples that resemble the real samples.
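The sketch below illustrates one way MTD bounds could be used to constrain GAN-generated virtual samples, assuming the commonly cited mega-trend-diffusion bound formulas from Li et al.; the function names, the post-hoc filtering step, and the example data are illustrative assumptions, not the paper's actual WGAN_MTD implementation.

    import numpy as np

    def mtd_bounds(x, eps=1e-20):
        """Estimate MTD lower/upper bounds for one feature of a small sample."""
        x = np.asarray(x, dtype=float)
        x_min, x_max = x.min(), x.max()
        u_set = (x_min + x_max) / 2.0            # center of the observed range
        n_l = max(int(np.sum(x < u_set)), 1)     # counts below/above the center
        n_u = max(int(np.sum(x > u_set)), 1)
        skew_l = n_l / (n_l + n_u)               # skewness weights
        skew_u = n_u / (n_l + n_u)
        var = x.var(ddof=1) if x.size > 1 else 0.0
        spread_l = np.sqrt(-2.0 * (var / n_l) * np.log(eps))
        spread_u = np.sqrt(-2.0 * (var / n_u) * np.log(eps))
        lower = min(u_set - skew_l * spread_l, x_min)   # never exclude observed data
        upper = max(u_set + skew_u * spread_u, x_max)
        return lower, upper

    def filter_virtual_samples(virtual, real):
        """Keep only generator outputs that fall inside the per-feature MTD bounds."""
        bounds = np.array([mtd_bounds(real[:, j]) for j in range(real.shape[1])])
        lo, hi = bounds[:, 0], bounds[:, 1]
        mask = np.all((virtual >= lo) & (virtual <= hi), axis=1)
        return virtual[mask]

    # Example: filter candidate samples produced by a (W)GAN generator.
    rng = np.random.default_rng(0)
    real = rng.normal(size=(20, 4))                   # small real data set
    virtual = rng.normal(scale=3.0, size=(200, 4))    # stand-in for generator output
    print(filter_virtual_samples(virtual, real).shape)

In this sketch the bounds are applied after generation; in the paper the MTD bounds limit the samples the WGAN generates, and the exact coupling between the bounds and the generator is described in the full text.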
