IEEE Access (Jan 2019)

Underwater Image Enhancement With a Deep Residual Framework

  • Peng Liu,
  • Guoyu Wang,
  • Hao Qi,
  • Chufeng Zhang,
  • Haiyong Zheng,
  • Zhibin Yu

DOI
https://doi.org/10.1109/ACCESS.2019.2928976
Journal volume & issue
Vol. 7
pp. 94614 – 94629

Abstract


Owing to the refraction, absorption, and scattering of light by suspended particles in water, raw underwater images have low contrast, blurred details, and color distortion. These characteristics can significantly interfere with visual tasks such as segmentation and tracking. This paper proposes an underwater image enhancement solution based on a deep residual framework. First, a cycle-consistent adversarial network (CycleGAN) is employed to generate synthetic underwater images as training data for convolutional neural network models. Second, the very-deep super-resolution reconstruction model (VDSR) is adapted to underwater image enhancement; building on it, the Underwater Resnet model, a residual learning model for underwater image enhancement tasks, is proposed. Furthermore, the loss function and training mode are improved: a multi-term loss function is formed from a mean squared error loss and a proposed edge difference loss, and an asynchronous training mode is proposed to improve the performance of the multi-term loss function. Finally, the impact of batch normalization is discussed. Underwater image enhancement experiments and a comparative analysis show that the color correction and detail enhancement performance of the proposed methods is superior to that of previous deep learning models and traditional methods.
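To make the abstract's two central ideas concrete, the sketch below illustrates (a) VDSR-style residual learning, where the network predicts a residual that is added back to the degraded input, and (b) a multi-term loss combining MSE with an edge-difference term. This is a minimal illustration, not the authors' code: the layer count, kernel sizes, the Sobel-based edge operator, and the weight `lambda_edge` are assumptions for demonstration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualEnhancer(nn.Module):
    """VDSR-style enhancer: stacked conv layers predict a residual image."""

    def __init__(self, channels=3, features=64, depth=10):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(features, channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        # Residual learning: enhanced image = degraded input + learned residual.
        return x + self.body(x)


def edge_map(img):
    """Approximate per-channel edges with Sobel filters (an assumption;
    the paper's exact edge operator may differ)."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]], device=img.device)
    ky = kx.t()
    c = img.shape[1]
    kx = kx.view(1, 1, 3, 3).repeat(c, 1, 1, 1)
    ky = ky.view(1, 1, 3, 3).repeat(c, 1, 1, 1)
    gx = F.conv2d(img, kx, padding=1, groups=c)
    gy = F.conv2d(img, ky, padding=1, groups=c)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)


def multi_term_loss(pred, target, lambda_edge=0.1):
    """Pixel-wise MSE plus an edge-difference term, weighted by lambda_edge."""
    mse = F.mse_loss(pred, target)
    edge = F.mse_loss(edge_map(pred), edge_map(target))
    return mse + lambda_edge * edge


# Usage sketch: enhance a synthetic underwater batch and compute the combined loss.
if __name__ == "__main__":
    model = ResidualEnhancer()
    degraded = torch.rand(2, 3, 64, 64)   # synthetic underwater images (e.g., from CycleGAN)
    clean = torch.rand(2, 3, 64, 64)      # corresponding in-air reference images
    loss = multi_term_loss(model(degraded), clean)
    loss.backward()
```

The asynchronous training mode described in the abstract would alternate or stage the loss terms during training rather than always optimizing their fixed sum; the sketch above shows only the combined form.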

Keywords