IET Image Processing (Sep 2020)

Multi‐head mutual‐attention CycleGAN for unpaired image‐to‐image translation

  • Wei Ji,
  • Jing Guo,
  • Yun Li

DOI
https://doi.org/10.1049/iet-ipr.2019.1153
Journal volume & issue
Vol. 14, no. 11
pp. 2395–2402

Abstract

Image-to-image translation, i.e. translation from a source image domain to a target image domain, has made significant progress in recent years. The most popular method for unpaired image-to-image translation is CycleGAN. However, it often fails to learn the key features of the target domain accurately and rapidly, so the CycleGAN model converges slowly and its translation quality needs to be improved. In this study, a multi-head mutual-attention CycleGAN (MMA-CycleGAN) model is proposed for unpaired image-to-image translation. MMA-CycleGAN retains the cycle-consistency loss and adversarial loss of CycleGAN, but introduces a mutual-attention (MA) mechanism that allows attention-driven, long-range dependency modelling between the two image domains. Moreover, to deal efficiently with large image sizes, the MA is extended to a multi-head mutual-attention (MMA) mechanism. In addition, domain labels are adopted to simplify the MMA-CycleGAN architecture, so that only one generator is required to perform bidirectional translation. Experiments on multiple datasets demonstrate that MMA-CycleGAN learns rapidly and obtains photo-realistic images in a shorter time than CycleGAN.
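The abstract does not include code, but the core idea (attention whose queries come from one domain's features while keys and values come from the other domain's, split across several heads) can be sketched. The PyTorch snippet below is a minimal illustration under stated assumptions, not the authors' implementation: the class name MultiHeadMutualAttention, the arguments feat_a and feat_b, the choice of 1x1 convolutions for the projections, and the learnable residual gate gamma (borrowed from self-attention GANs) are all illustrative assumptions.

import torch
import torch.nn as nn

class MultiHeadMutualAttention(nn.Module):
    """Sketch of a multi-head mutual-attention (MMA) block.

    Queries are projected from one domain's feature map, keys and
    values from the other domain's, so attention models long-range
    dependencies *between* the two image domains rather than within
    one. The heads split the channel dimension.
    """

    def __init__(self, channels: int, num_heads: int = 8):
        super().__init__()
        assert channels % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = channels // num_heads
        # 1x1 convolutions project features to queries/keys/values
        # (an assumption; the paper may use different projections).
        self.to_q = nn.Conv2d(channels, channels, 1)
        self.to_k = nn.Conv2d(channels, channels, 1)
        self.to_v = nn.Conv2d(channels, channels, 1)
        self.out = nn.Conv2d(channels, channels, 1)
        # Learnable gate so the block starts as an identity mapping
        # and attention is blended in during training (assumption,
        # in the style of self-attention GANs).
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        # feat_a: features of the image being translated (queries).
        # feat_b: features from the other domain (keys/values).
        b, c, h, w = feat_a.shape
        n = h * w

        def split_heads(t: torch.Tensor) -> torch.Tensor:
            # (B, C, H, W) -> (B, heads, H*W, head_dim)
            return t.reshape(b, self.num_heads, self.head_dim, n).transpose(2, 3)

        q = split_heads(self.to_q(feat_a))
        k = split_heads(self.to_k(feat_b))
        v = split_heads(self.to_v(feat_b))

        # Scaled dot-product attention per head: (B, heads, N, N).
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.head_dim ** 0.5, dim=-1)
        merged = (attn @ v).transpose(2, 3).reshape(b, c, h, w)
        return feat_a + self.gamma * self.out(merged)

if __name__ == "__main__":
    mma = MultiHeadMutualAttention(channels=64, num_heads=8)
    a = torch.randn(2, 64, 32, 32)   # features from domain A
    b = torch.randn(2, 64, 32, 32)   # features from domain B
    print(mma(a, b).shape)           # torch.Size([2, 64, 32, 32])

Note that the attention map is (H*W) x (H*W) per head, so a block like this would presumably be applied to reduced-resolution feature maps inside the generator rather than to the full-size image.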

Keywords