IEEE Access (Jan 2023)

Improving Generative Adversarial Networks for Patch-Based Unpaired Image-to-Image Translation

  • Moritz Bohland,
  • Roman Bruch,
  • Simon Bäuerle,
  • Luca Rettenberger,
  • Markus Reischl

DOI: https://doi.org/10.1109/ACCESS.2023.3331819
Journal volume & issue: Vol. 11, pp. 127895–127906

Abstract

Deep learning models for image segmentation achieve high-quality results but need large amounts of training data. Training data is primarily annotated manually, which is time-consuming and often not feasible for large-scale 2D and 3D images. The manual annotation effort can be reduced using synthetic training data generated by generative adversarial networks that perform unpaired image-to-image translation. Currently, large images must be processed patch-wise during inference, which produces local artifacts in border regions after the individual patches are merged. To reduce these artifacts, we propose a new method that integrates overlapping patches into the training process. We incorporated our method into CycleGAN and tested it on our new 2D tiling-strategy benchmark dataset. The results show that artifacts are reduced by 85% compared to state-of-the-art weighted tiling. While our method increases training time, inference time decreases. Additionally, we demonstrate transferability to real-world 3D biological image data, obtaining a high-quality synthetic dataset. Increasing the quality of synthetic training datasets can reduce manual annotation effort, improve the quality of model output, and help develop and evaluate deep learning models.
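The weighted-tiling baseline the abstract compares against can be illustrated with a minimal sketch: overlapping patches are merged by a weighted average, with weights that fall off toward patch borders so that seams are down-weighted. The function names, the 50% overlap, and the triangular window below are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def blend_weights(patch_size):
    """2D triangular (pyramid) weight map, peaking at the patch center."""
    ramp = 1.0 - np.abs(np.linspace(-1.0, 1.0, patch_size))
    w = np.outer(ramp, ramp)
    return np.maximum(w, 1e-6)  # keep border weights nonzero to avoid division by zero

def merge_patches(patches, coords, image_shape, patch_size):
    """Merge overlapping patches placed at top-left coords (y, x) by weighted averaging."""
    acc = np.zeros(image_shape, dtype=np.float64)   # weighted sum of patch values
    norm = np.zeros(image_shape, dtype=np.float64)  # accumulated weights
    w = blend_weights(patch_size)
    for patch, (y, x) in zip(patches, coords):
        acc[y:y + patch_size, x:x + patch_size] += patch * w
        norm[y:y + patch_size, x:x + patch_size] += w
    return acc / norm

# Usage: tile a 64x64 image into 32x32 patches with 50% overlap and merge them back.
# With an identity "translation" of each patch, the merge reconstructs the input exactly.
image = np.ones((64, 64))
ps, stride = 32, 16
coords = [(y, x) for y in range(0, 64 - ps + 1, stride)
                 for x in range(0, 64 - ps + 1, stride)]
patches = [image[y:y + ps, x:x + ps] for (y, x) in coords]
merged = merge_patches(patches, coords, image.shape, ps)
```

In practice each patch would first be translated by the generator before merging; artifacts arise because the generator sees each patch without its surrounding context, which is the limitation the paper's overlapping-patch training addresses.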

Keywords