IEEE Access (Jan 2023)
Style-Content-Aware Adaptive Normalization Based Pose Guidance for Person Image Synthesis
Abstract
Most existing pose-guided person image synthesis methods achieve accurate target poses but still fail to produce reasonable style-texture mapping. In this paper, we propose a new two-stage network that decouples style and content, with the aim of improving both the accuracy of pose transfer and the realism of the synthesized person's appearance. First, we propose an Aligned Multi-scale Content Transfer Network (AMSNet) that predicts the target edge map in advance for pose-content transfer, which not only preserves clearer texture content but also alleviates spatial misalignment by transferring pose information early. Second, we propose a Style Texture Transfer Network (STNet) that gradually transfers the source style features onto the target pose to obtain a reasonable distribution of styles. To achieve appearance textures highly similar to the source style, we use a style-content-aware adaptive normalization method: the source style features are mapped into the same latent space as the aligned content images (target pose and edge map), and the consistency between style texture and content is enhanced through adaptive adjustment of the source style and target pose. Experimental results show that the proposed model synthesizes target images consistent with the source style and achieves superior results both quantitatively and qualitatively.
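To make the normalization idea concrete, the following is a minimal PyTorch sketch of one way a style-content-aware adaptive normalization layer could be structured (an AdaIN/SPADE-style modulation in which content features from the target pose and edge map are normalized and then rescaled by parameters predicted from the source style features). The class name, channel sizes, and layer choices here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class StyleContentAdaptiveNorm(nn.Module):
    """Sketch of a style-content-aware adaptive normalization layer.

    Content features (from the target pose and edge map) are instance-
    normalized, then modulated by a per-position scale (gamma) and shift
    (beta) predicted from the source style features. Names and sizes are
    hypothetical, for illustration only.
    """

    def __init__(self, content_channels: int, style_channels: int):
        super().__init__()
        # Parameter-free normalization of the content stream.
        self.norm = nn.InstanceNorm2d(content_channels, affine=False)
        # Shared projection of style features, then separate heads
        # for the modulation parameters gamma (scale) and beta (shift).
        self.shared = nn.Sequential(
            nn.Conv2d(style_channels, content_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.to_gamma = nn.Conv2d(content_channels, content_channels, kernel_size=3, padding=1)
        self.to_beta = nn.Conv2d(content_channels, content_channels, kernel_size=3, padding=1)

    def forward(self, content_feat: torch.Tensor, style_feat: torch.Tensor) -> torch.Tensor:
        # Resize style features to the spatial size of the content features
        # so the modulation can be applied position-wise.
        style_feat = nn.functional.interpolate(
            style_feat, size=content_feat.shape[2:], mode="nearest"
        )
        hidden = self.shared(style_feat)
        gamma = self.to_gamma(hidden)
        beta = self.to_beta(hidden)
        # Normalize the content, then re-inject style statistics.
        return self.norm(content_feat) * (1 + gamma) + beta


# Usage example with dummy tensors (batch of 2, 64-channel features at 32x32).
if __name__ == "__main__":
    layer = StyleContentAdaptiveNorm(content_channels=64, style_channels=128)
    content = torch.randn(2, 64, 32, 32)   # aligned pose/edge content features
    style = torch.randn(2, 128, 16, 16)    # source appearance (style) features
    out = layer(content, style)
    print(out.shape)  # torch.Size([2, 64, 32, 32])
```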
Keywords