IEEE Access (Jan 2023)

Style Transfer of Thangka Images Highlighting Style Attributes

  • Wenjin Hu,
  • Huafei Song,
  • Fujun Zhang,
  • Yinqiu Zhao,
  • Xinyue Shi

DOI
https://doi.org/10.1109/ACCESS.2023.3318258
Journal volume & issue
Vol. 11
pp. 104817 – 104829

Abstract

The HAA-GAN (Highlighting Artistic Attributes Generative Adversarial Network) style transfer model is proposed to address the poor expression of artistic attributes and the mismatch between semantic and stylistic features in images generated by Thangka image style transfer. The image features extracted by the encoder undergo dual-channel feature transfer and refinement through a style-domain attribute refinement channel and a self-attention semantic-information matching channel. The former highlights the artistic attributes of the images, while the latter flexibly matches Thangka style features to content features according to the semantic spatial distribution of the content images. In addition, a pooling layer based on the multilevel wavelet transform is added to the encoder-decoder structure of the generator so that more key image features are retained during feature extraction. Finally, to improve the quality of the stylized images, a multi-scale discriminative network is used to discriminate image patches, yielding better visual effects in the generated results. Style transfer experiments were conducted on a Thangka dataset and an Impressionism dataset. Compared with SAFIN, DSTN, and other style transfer models, HAA-GAN improved the average FID and PSNR scores by 9.06% and 12.7%, respectively, and the generated stylized images achieved better results in structural detail preservation, feature matching, and artistic effect expression.
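
As a rough illustration of the wavelet-based pooling idea described in the abstract, the sketch below swaps standard max/average pooling for a single-level Haar transform that keeps only the low-frequency (LL) sub-band. The class name, the use of PyTorch, and the single-level formulation are assumptions made for illustration; they are not the paper's actual multilevel implementation.

```python
import torch
import torch.nn as nn


class HaarWaveletPool2d(nn.Module):
    """Hypothetical sketch: downsample a feature map with a single-level 2D
    Haar wavelet transform, keeping the low-frequency (LL) sub-band in place
    of max/average pooling so that coarse structure is preserved."""

    def forward(self, x):
        # Split the feature map into its even/odd rows and columns.
        a = x[:, :, 0::2, 0::2]  # top-left samples
        b = x[:, :, 0::2, 1::2]  # top-right samples
        c = x[:, :, 1::2, 0::2]  # bottom-left samples
        d = x[:, :, 1::2, 1::2]  # bottom-right samples
        # LL sub-band of the orthonormal Haar transform: a scaled local
        # average, i.e. the low-frequency approximation of the input.
        return (a + b + c + d) / 2.0


if __name__ == "__main__":
    pool = HaarWaveletPool2d()
    feat = torch.randn(1, 64, 128, 128)  # dummy encoder feature map
    print(pool(feat).shape)              # torch.Size([1, 64, 64, 64])
```

In this sketch the high-frequency sub-bands are simply discarded; the paper's multilevel variant and how (or whether) it reuses the detail coefficients during decoding are not specified in the abstract.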

Keywords