Vietnam Journal of Computer Science (Aug 2022)

Synthetic Traffic Sign Image Generation Applying Generative Adversarial Networks

  • Christine Dewi,
  • Rung-Ching Chen,
  • Yan-Ting Liu

DOI
https://doi.org/10.1142/S2196888822500191
Journal volume & issue
Vol. 09, no. 03
pp. 333 – 348

Abstract

Recently, it was shown that convolutional neural networks (CNNs) with suitably annotated training data produce the best results in traffic sign detection (TSD) and traffic sign recognition (TSR). The efficiency of the whole system is determined by the data collection process for the neural networks. However, traffic sign datasets in most countries around the world are difficult to recognize because of their diversity. To address this problem, we create synthetic images to enlarge our dataset. We apply deep convolutional generative adversarial networks (DCGAN) and Wasserstein generative adversarial networks (Wasserstein GAN, WGAN) to generate realistic and diverse additional training images that compensate for the data shortage in the original image distribution. This study focuses on the consistency of DCGAN and WGAN images created with varied settings. We train on real images of various quantities and scales. Additionally, the Structural Similarity Index (SSIM) and the Mean Square Error (MSE) are used to assess image quality. In our study, we compute SSIM values between generated images and their corresponding real images. When more training images are used, the generated images show a higher degree of similarity to the original images. Our experiments reveal that the highest SSIM values are achieved when 200 total images of [Formula: see text] pixels are used as input and the epoch is 2000.
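The abstract evaluates generated images with MSE and SSIM against real reference images. As an illustration only (not the paper's code), the sketch below computes MSE and a simplified, single-window SSIM with global statistics; published SSIM implementations typically use an 11x11 sliding Gaussian window, so the values here are an approximation.

```python
import numpy as np

def mse(a, b):
    # Mean squared error between two equally sized grayscale images.
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    return float(np.mean((a - b) ** 2))

def ssim_global(a, b, data_range=255.0):
    # Simplified SSIM using one global window instead of a sliding
    # Gaussian window (assumption: this coarse variant is enough for
    # illustration; it follows the standard SSIM formula otherwise).
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    c1 = (0.01 * data_range) ** 2  # stabilizing constants from the SSIM definition
    c2 = (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return float(((2 * mu_a * mu_b + c1) * (2 * cov + c2))
                 / ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2)))

# A generated image identical to its reference scores MSE 0 and SSIM 1.
img = np.random.default_rng(0).integers(0, 256, size=(64, 64))
print(mse(img, img))          # 0.0
print(ssim_global(img, img))  # ~1.0
```

Higher SSIM (closer to 1) and lower MSE indicate that a generated image is closer to its real counterpart, which is how the study compares DCGAN and WGAN outputs across training settings.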

Keywords