Computational Visual Media (Mar 2023)

Neural 3D reconstruction from sparse views using geometric priors

  • Tai-Jiang Mu,
  • Hao-Xiang Chen,
  • Jun-Xiong Cai,
  • Ning Guo

DOI: https://doi.org/10.1007/s41095-023-0337-5
Journal volume & issue: Vol. 9, No. 4, pp. 687–697

Abstract

Sparse view 3D reconstruction has attracted increasing attention with the development of neural implicit 3D representations. Existing methods usually make use of 2D views only, requiring a dense set of input views for accurate 3D reconstruction. In this paper, we show that accurate 3D reconstruction can be achieved by incorporating geometric priors into neural implicit 3D reconstruction. Our method adopts the signed distance function as the 3D representation, and learns a generalizable 3D surface reconstruction model from sparse views. Specifically, we build a more effective and sparse feature volume from the input views by using corresponding depth maps, which can be provided by depth sensors or directly predicted from the input views. We recover better geometric details by imposing both depth and surface normal constraints in addition to the color loss when training the neural implicit 3D representation. Experiments demonstrate that our method both outperforms state-of-the-art approaches and achieves good generalizability.
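The abstract describes supervising the implicit surface with depth and surface-normal constraints on top of the usual color loss. The sketch below is a hypothetical illustration of such a combined objective; the function names, loss forms (L2 color, L1 depth, cosine normal), and weights are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def color_loss(pred_rgb, gt_rgb):
    # L2 photometric loss over rendered pixel colors (illustrative choice)
    return float(np.mean((pred_rgb - gt_rgb) ** 2))

def depth_loss(pred_depth, gt_depth, valid):
    # L1 loss on rendered depth, restricted to pixels where a sensor-provided
    # or predicted depth value is available
    return float(np.mean(np.abs(pred_depth[valid] - gt_depth[valid])))

def normal_loss(pred_normals, gt_normals):
    # 1 - cosine similarity between rendered and reference unit normals
    cos = np.sum(pred_normals * gt_normals, axis=-1)
    return float(np.mean(1.0 - cos))

def total_loss(pred_rgb, gt_rgb,
               pred_depth, gt_depth, valid,
               pred_normals, gt_normals,
               w_depth=0.5, w_normal=0.1):
    # Weighted sum of color, depth, and normal terms; the weights here
    # are placeholders, not the paper's actual hyperparameters.
    return (color_loss(pred_rgb, gt_rgb)
            + w_depth * depth_loss(pred_depth, gt_depth, valid)
            + w_normal * normal_loss(pred_normals, gt_normals))
```

In this kind of setup, the depth term anchors the rendered surface to the geometric prior while the normal term regularizes local surface orientation, which is what allows reconstruction from far fewer views than color supervision alone.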
