IEEE Access (Jan 2024)

Single-View 3D Garment Reconstruction Using Neural Volumetric Rendering

  • Yizheng Chen,
  • Rengan Xie,
  • Sen Yang,
  • Linchen Dai,
  • Hongchun Sun,
  • Yuchi Huo,
  • Rong Li

DOI
https://doi.org/10.1109/ACCESS.2024.3380059
Journal volume & issue
Vol. 12
pp. 49682–49693

Abstract

Reconstructing 3D garment models usually requires laborious data acquisition, such as expensive LiDAR scans, multi-view images, or SMPL models of the garments. In this paper, we propose a neat framework that takes a single-image input, generates pseudo-sparse views of the 3D garment, and synthesizes these multi-view images into a high-quality 3D neural model. Specifically, our framework combines a pretrained pseudo-sparse-view generator with a volumetric signed distance function (SDF) representation-based network for 3D garment modeling, which uses neural networks to represent both the density and radiance fields. We further introduce a stride fusion strategy that minimizes a pixel-level loss in key viewpoints and a semantic loss in random viewpoints, producing view-consistent geometry and sharp texture details. Finally, a multi-view rendering module utilizes the learned SDF representation to generate multi-view garment images and extract an accurate mesh and texture from them. We evaluate the proposed framework on the Deep Fashion 3D dataset and achieve state-of-the-art performance in both quantitative and qualitative evaluations.
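The abstract mentions two technical ingredients: an SDF-based volumetric representation from which density is derived, and a stride fusion loss that mixes a pixel-level term on key viewpoints with a semantic term on random viewpoints. The toy sketch below illustrates both ideas in isolation; it is not the authors' implementation. The Laplace-CDF mapping from signed distance to density follows a common VolSDF-style convention, and the scale `beta` and loss weights `w_pix`, `w_sem` are illustrative assumptions.

```python
import math

def sdf_to_density(sdf, beta=0.1):
    """Map a signed distance to a volume density (VolSDF-style Laplace CDF).

    Density is highest inside the surface (sdf < 0) and decays smoothly
    outside it; `beta` controls how sharply density falls off at the surface.
    This convention is an assumption, not taken from the paper.
    """
    if sdf <= 0.0:
        return (1.0 / beta) * (1.0 - 0.5 * math.exp(sdf / beta))
    return (1.0 / beta) * 0.5 * math.exp(-sdf / beta)

def stride_fusion_loss(pixel_losses_key, semantic_losses_random,
                       w_pix=1.0, w_sem=0.1):
    """Combine a pixel-level loss on key views with a semantic loss on
    random views, as the abstract's stride fusion strategy describes.
    The weights are hypothetical.
    """
    pix = sum(pixel_losses_key) / len(pixel_losses_key)
    sem = sum(semantic_losses_random) / len(semantic_losses_random)
    return w_pix * pix + w_sem * sem
```

With `beta = 0.1`, a point exactly on the surface (`sdf = 0`) maps to density `5.0`, while points far outside decay toward zero, so the optimizer is encouraged to place the zero level set of the SDF at the observed garment surface.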

Keywords