IET Computer Vision (Sep 2021)

Robust 3D face reconstruction from single noisy depth image through semantic consistency

  • Peixin Li,
  • Yuru Pei,
  • Yicheng Zhong,
  • Yuke Guo,
  • Hongbin Zha

DOI
https://doi.org/10.1049/cvi2.12024
Journal volume & issue
Vol. 15, no. 6
pp. 393–404

Abstract

This paper addresses 3D face reconstruction and semantic annotation from a single-view noisy depth image. A deep neural network-based coarse-to-fine framework is presented that combines 3D morphable model (3DMM) regression with per-vertex geometry refinement. The low-dimensional 3DMM subspace coefficients initialize the global facial geometry, which tends to be over-smooth because of the low-pass characteristics of the shape subspace. The geometry refinement subnetwork, learned from unlabelled noisy depth images with a registration-like loss, predicts per-vertex displacements to enrich local details. To guarantee semantic correspondence between the resultant 3D face and the depth image, a semantic consistency constraint is introduced to adapt an annotation model learned from synthetic data to real noisy depth images: the resultant depth annotations are required to be consistent with labels propagated from the coarse and refined parametric 3D faces. The proposed coarse-to-fine reconstruction scheme and the semantic consistency constraint are evaluated on depth-based 3D face reconstruction and semantic annotation. A series of experiments demonstrates that the proposed approach outperforms the compared methods on both 3D face reconstruction and depth image annotation.
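
To make the coarse-to-fine idea concrete, the following is a minimal PyTorch-style sketch of how a coarse 3DMM coefficient regressor, a per-vertex displacement refiner, and a registration-like (chamfer-style) loss over back-projected depth points could fit together. All names (CoarseToFineFace, chamfer_loss), basis dimensions, network architectures, and the loss form are illustrative assumptions and do not reproduce the paper's actual networks, 3DMM basis, or training objective.

```python
import torch
import torch.nn as nn

class CoarseToFineFace(nn.Module):
    """Sketch: coarse 3DMM coefficient regression followed by per-vertex refinement."""

    def __init__(self, n_vertices=1220, n_id=80, n_exp=29):
        super().__init__()
        # Hypothetical 3DMM basis: mean shape plus identity/expression subspaces.
        self.mean_shape = nn.Parameter(torch.zeros(n_vertices, 3), requires_grad=False)
        self.id_basis = nn.Parameter(torch.randn(n_id, n_vertices * 3) * 1e-3,
                                     requires_grad=False)
        self.exp_basis = nn.Parameter(torch.randn(n_exp, n_vertices * 3) * 1e-3,
                                      requires_grad=False)
        # Coarse branch: regress low-dimensional subspace coefficients from a depth map.
        self.coarse = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, n_id + n_exp))
        # Fine branch: predict per-vertex displacements to recover local detail.
        self.refine = nn.Sequential(
            nn.Linear(n_id + n_exp, 256), nn.ReLU(),
            nn.Linear(256, n_vertices * 3))

    def forward(self, depth):
        coeff = self.coarse(depth)                           # (B, n_id + n_exp)
        basis = torch.cat([self.id_basis, self.exp_basis], dim=0)
        coarse_verts = (self.mean_shape.flatten() + coeff @ basis)
        coarse_verts = coarse_verts.view(-1, self.mean_shape.shape[0], 3)
        disp = self.refine(coeff).view_as(coarse_verts)      # per-vertex displacement
        return coarse_verts, coarse_verts + disp


def chamfer_loss(pred_verts, depth_points):
    """Registration-like loss stand-in: symmetric nearest-neighbour distance between
    the refined vertices and the back-projected depth point cloud (no labels needed)."""
    d = torch.cdist(pred_verts, depth_points)                # (B, V, P)
    return d.min(dim=2).values.mean() + d.min(dim=1).values.mean()


if __name__ == "__main__":
    model = CoarseToFineFace()
    depth = torch.rand(2, 1, 128, 128)                       # noisy single-view depth maps
    cloud = torch.rand(2, 5000, 3)                           # back-projected depth points
    coarse, refined = model(depth)
    print(chamfer_loss(refined, cloud))
```

In this reading, the coarse branch supplies a plausible global face from the 3DMM subspace, while the displacement branch and the chamfer-style term act as the unsupervised refinement signal; the paper's semantic consistency constraint would add a further term tying the depth-image annotations to labels propagated from both the coarse and refined meshes.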