CAAI Transactions on Intelligence Technology (Apr 2024)

Multi‐scale cross‐domain alignment for person image generation

  • Liyuan Ma,
  • Tingwei Gao,
  • Haibin Shen,
  • Kejie Huang

DOI
https://doi.org/10.1049/cit2.12224
Journal volume & issue
Vol. 9, no. 2
pp. 374 – 387

Abstract

Person image generation aims to generate images that maintain the original human appearance in different target poses. Recent works have revealed that the critical element in achieving this task is the alignment of the appearance and pose domains. Previous alignment methods, such as appearance flow warping, correspondence learning and cross attention, often encounter challenges when it comes to producing fine texture details. These approaches either suffer from inaccurate appearance-flow estimation due to the lack of a global receptive field, or can only perform cross-domain alignment on high-level feature maps with small spatial dimensions, since computational complexity grows quadratically with feature size. This article demonstrates the significance of multi-scale alignment, in both low-level and high-level feature domains, for reliable cross-domain alignment of appearance and pose. To this end, a novel and effective method, named Multi-scale Cross-domain Alignment (MCA), is proposed. Firstly, MCA adopts a global context aggregation transformer to model multi-scale interaction between pose and appearance inputs, which employs pair-wise window-based cross attention. Furthermore, leveraging the integrated global source information for each target position, MCA applies a flexible flow prediction head and point correlation to warp and fuse features for final transformed person image generation. The proposed MCA outperforms other methods on two popular datasets, verifying the effectiveness of the approach.
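To make the pair-wise window-based cross attention concrete, below is a minimal NumPy sketch, not the authors' implementation: queries come from the pose (target) domain and keys/values from the appearance (source) domain within matching local windows, so cost scales with the number of windows rather than quadratically with the full feature size. The window size, single-head form, and tensor shapes are illustrative assumptions.

```python
import numpy as np

def window_partition(x, ws):
    # (H, W, C) feature map -> (num_windows, ws*ws, C) token groups
    H, W, C = x.shape
    x = x.reshape(H // ws, ws, W // ws, ws, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, ws * ws, C)

def softmax(a, axis=-1):
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def window_cross_attention(pose_feat, app_feat, ws):
    """Cross attention between corresponding windows of the two domains:
    pose tokens (queries) aggregate appearance tokens (keys/values)."""
    q = window_partition(pose_feat, ws)          # (nW, ws*ws, C)
    k = window_partition(app_feat, ws)
    v = k
    C = q.shape[-1]
    attn = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(C))
    return attn @ v                              # aligned features, (nW, ws*ws, C)
```

In the multi-scale setting described above, such an operation would be applied at several feature resolutions, with the aggregated source context then feeding a flow prediction head for warping.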

Keywords