IEEE Access (Jan 2021)

Tunable U-Net: Controlling Image-to-Image Outputs Using a Tunable Scalar Value

  • Seokjun Kang,
  • Seiichi Uchida,
  • Brian Kenji Iwana

DOI
https://doi.org/10.1109/ACCESS.2021.3096530
Journal volume & issue
Vol. 9
pp. 103279–103290

Abstract


Image-to-image conversion tasks are more accurate and sophisticated than ever thanks to advances in deep learning. However, because typical deep learning models are trained to perform only one task, a separate trained model is required for each task, even when the tasks are related to each other. For example, U-Net, a popular image-to-image convolutional neural network, is normally trained for a single task. Based on U-Net, this study proposes a model that produces variable results with only one trained model. The proposed method generates continuously changing outputs controlled by an external tuning parameter. We confirm the robustness of the proposed model by evaluating it on binarization and background blurring. These evaluations confirm that the proposed model generates well-predicted outputs for un-trained tuning parameter values as well as for trained ones. Furthermore, the proposed model can generate extrapolated outputs for tuning values outside the training range.
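To illustrate the idea of a single U-Net whose output is steered by an external scalar, the sketch below conditions a small U-Net-style encoder-decoder on a tuning value t by tiling it into a constant-valued extra input channel. This is a minimal, hypothetical construction; the paper's actual conditioning mechanism may differ, and the names (TunableUNet, t) are illustrative only.

import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, as in a standard U-Net stage
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )


class TunableUNet(nn.Module):
    def __init__(self, in_ch=1, out_ch=1, base=32):
        super().__init__()
        # +1 input channel carries the broadcast tuning scalar
        self.enc1 = conv_block(in_ch + 1, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, out_ch, kernel_size=1)

    def forward(self, x, t):
        # t: one scalar per sample, tiled to a (batch, 1, H, W) plane
        t_plane = t.view(-1, 1, 1, 1).expand(-1, 1, x.shape[2], x.shape[3])
        e1 = self.enc1(torch.cat([x, t_plane], dim=1))
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return torch.sigmoid(self.head(d1))


# Usage: the same trained network yields different outputs as t is swept,
# including values between (interpolation) or beyond (extrapolation) those seen in training.
model = TunableUNet()
x = torch.randn(2, 1, 64, 64)
for t_val in [0.0, 0.5, 1.0, 1.5]:  # 1.5 would lie outside a [0, 1] training range
    y = model(x, torch.full((2,), t_val))
    print(t_val, y.shape)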

Keywords