IEEE Access (Jan 2024)

Fast 3D Stylized Gaussian Portrait Generation From a Single Image With Style Aligned Sampling Loss

  • Shangming Jiang,
  • Xinyou Yu,
  • Weijun Guo,
  • Junling Huang

DOI
https://doi.org/10.1109/ACCESS.2024.3392568
Journal volume & issue
Vol. 12
pp. 58651–58660

Abstract


Creating stylized 3D avatars and portraits from a single image input is an emerging challenge in augmented and virtual reality. While prior work has explored 2D stylization or 3D avatar generation, achieving high-fidelity 3D stylized portraits with text control remains an open problem. In this paper, we present an efficient approach for generating high-quality 3D stylized portraits directly from a single input image. Our core representation is 3D Gaussian Splatting for efficient rendering, combined with a surface-guided splitting and cloning strategy to reduce noise. To achieve high-fidelity stylized results, we introduce a Stylized Generation Module with a Style-Aligned Sampling Loss that injects the input image's identity information into the diffusion model while stabilizing the stylization process. Furthermore, we incorporate a multi-view diffusion model that enforces 3D consistency by generating multiple viewpoints. Extensive experiments demonstrate that our approach outperforms existing methods in stylization quality, 3D consistency, and user preference ratings. Our framework enables casual users to generate stylized 3D portraits from simple image or text inputs, facilitating engaging experiences in AR/VR applications.
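The abstract does not spell out the form of the Style-Aligned Sampling Loss, but a plausible reading is a diffusion-guidance (score-distillation-style) term plus an alignment term that keeps each rendered view's style features close to those of a reference view. The sketch below illustrates that reading only; the function and argument names (`pred_noise`, `anchor_style_feats`, `lambda_align`, etc.) are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def style_aligned_sampling_loss(pred_noise, true_noise,
                                view_style_feats, anchor_style_feats,
                                lambda_align=1.0):
    """Hypothetical sketch of a style-aligned sampling loss.

    pred_noise / true_noise: noise predicted by the diffusion model for a
        noised rendering vs. the noise actually injected (standard
        score-distillation-style supervision).
    view_style_feats / anchor_style_feats: style embeddings of the current
        rendered view and of a reference (anchor) view; aligning them is one
        plausible way to stabilize stylization across viewpoints.
    """
    # Diffusion guidance term: the rendering should be plausible under the
    # (identity-conditioned) diffusion prior.
    sds_term = F.mse_loss(pred_noise, true_noise)

    # Style alignment term: penalize per-view drift from the anchor's style.
    align_term = 1.0 - F.cosine_similarity(
        view_style_feats, anchor_style_feats, dim=-1).mean()

    return sds_term + lambda_align * align_term
```

In this reading, the alignment term is what "stabilizes the stylization process" across the multiple viewpoints produced by the multi-view diffusion model, while the guidance term drives the stylization itself.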

Keywords