IEEE Access (Jan 2018)

The Synthesis of Unpaired Underwater Images Using a Multistyle Generative Adversarial Network

  • Na Li,
  • Ziqiang Zheng,
  • Shaoyong Zhang,
  • Zhibin Yu,
  • Haiyong Zheng,
  • Bing Zheng

DOI
https://doi.org/10.1109/ACCESS.2018.2870854
Journal volume & issue
Vol. 6
pp. 54241–54257

Abstract


Underwater image datasets are crucial to underwater vision research. Because of the strong absorption and scattering effects that occur underwater, ground truth such as depth maps, which can be easily collected in air, is very difficult to obtain in underwater environments. To address this lack of underwater ground truth, we propose a trainable end-to-end underwater multistyle generative adversarial network (UMGAN) that combines the strengths of the cycle-consistent adversarial network (CycleGAN) and conditional generative adversarial networks. The system generates multiple realistic underwater images from in-air images through a hybrid adversarial scheme trained on unpaired data. Moreover, by means of a style classifier and a conditional vector, our model can translate in-air images into underwater images of specified turbidities or water styles while retaining the main content and structural information of the in-air inputs. Furthermore, we define a color loss and include a structural similarity index measure (SSIM) loss so that the system preserves the content and structure of the original in-air images while transferring their backgrounds from air to water. Using UMGAN, we can exploit in-air ground truth by converting the corresponding in-air images into an underwater dataset with multiple water color styles. Our experiments demonstrate that our synthesized underwater images score higher on image quality assessment than those produced by CycleGAN, WaterGAN, StarGAN, AdaIN, and other state-of-the-art methods. We also show that our synthesized underwater images, together with their in-air depth maps, can be applied to depth map estimation for real underwater images.
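The abstract describes a hybrid objective that combines CycleGAN-style adversarial and cycle-consistency terms with an SSIM loss and a color loss, conditioned on a style vector. Below is a minimal PyTorch sketch of how such a generator objective could be assembled; the color-loss form, the loss weights, and all function names are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of a UMGAN-style generator objective: adversarial + cycle-consistency
# terms plus SSIM and color losses, with a conditional style vector. All
# weights (lam_*) and the color-loss form are illustrative assumptions.
import torch
import torch.nn.functional as F

def ssim_loss(x, y, C1=0.01 ** 2, C2=0.03 ** 2):
    # 1 - mean local SSIM; an 11x11 box filter (average pooling) stands in
    # for the Gaussian window of the standard SSIM definition.
    mu_x = F.avg_pool2d(x, 11, 1, 5)
    mu_y = F.avg_pool2d(y, 11, 1, 5)
    var_x = F.avg_pool2d(x * x, 11, 1, 5) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, 11, 1, 5) - mu_y ** 2
    cov_xy = F.avg_pool2d(x * y, 11, 1, 5) - mu_x * mu_y
    ssim = ((2 * mu_x * mu_y + C1) * (2 * cov_xy + C2)) / (
        (mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2))
    return 1 - ssim.mean()

def color_loss(x, y):
    # Hypothetical color term: L1 distance between per-channel mean colors.
    return F.l1_loss(x.mean(dim=(2, 3)), y.mean(dim=(2, 3)))

def generator_objective(G, F_inv, D_water, x_air, style,
                        lam_cyc=10.0, lam_ssim=1.0, lam_color=1.0):
    # G: air -> water generator conditioned on a one-hot style vector.
    # F_inv: water -> air generator. D_water: discriminator scoring
    # (image, style) pairs. The lam_* weights are placeholders.
    fake_water = G(x_air, style)          # conditional translation
    rec_air = F_inv(fake_water)           # cycle back to the air domain
    pred = D_water(fake_water, style)
    adv = F.mse_loss(pred, torch.ones_like(pred))  # least-squares GAN loss
    cyc = F.l1_loss(rec_air, x_air)                # cycle consistency
    struct = ssim_loss(fake_water, x_air)          # preserve input structure
    color = color_loss(rec_air, x_air)             # preserve color on cycle
    return adv + lam_cyc * cyc + lam_ssim * struct + lam_color * color
```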

Keywords