Virtual Reality & Intelligent Hardware (Oct 2024)

Mesh representation matters: investigating the influence of different mesh features on perceptual and spatial fidelity of deep 3D morphable models

  • Robert KOSK,
  • Richard SOUTHERN,
  • Lihua YOU,
  • Shaojun BIAN,
  • Willem KOKKE,
  • Greg MAGUIRE

Journal volume & issue
Vol. 6, no. 5
pp. 383 – 395

Abstract


Background: Deep 3D morphable models (deep 3DMMs) play an essential role in computer vision. They are used in facial synthesis, compression, reconstruction and animation, avatar creation, virtual try-on, facial recognition systems and medical imaging. These applications require high spatial and perceptual quality of the synthesised meshes. Despite their significance, deep 3DMMs built from different mesh representations have not been compared and evaluated jointly with point-wise distance and perceptual metrics.

Methods: We compare the influence of different mesh representation features, used as input to various deep 3DMMs, on the spatial and perceptual fidelity of the reconstructed meshes. The paper verifies the hypothesis that building deep 3DMMs from global mesh representations yields lower spatial reconstruction error, measured with L1 and L2 norm metrics, but underperforms on perceptual metrics. In contrast, differential mesh representations, which describe differential surface properties, yield lower perceptual error (FMPD and DAME) at the cost of higher spatial error. The influence of mesh feature normalisation and standardisation is also compared and analysed from the perceptual and spatial fidelity perspectives.

Results: The results provide guidance for selecting mesh representations when building deep 3DMMs according to spatial and perceptual quality objectives, and propose combinations of mesh representations and deep 3DMMs that improve either the perceptual or the spatial fidelity of existing methods.
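
As a rough illustration of the fidelity measures and feature-scaling schemes named in the abstract, the sketch below (Python with NumPy; the function names, variable names and array shapes are assumptions for illustration, not the authors' implementation) computes mean per-vertex L1 and L2 reconstruction errors and applies min-max normalisation and z-score standardisation to per-vertex mesh features.

    import numpy as np

    def spatial_errors(v_gt, v_rec):
        # Mean per-vertex L1 (Manhattan) and L2 (Euclidean) reconstruction errors.
        # v_gt, v_rec: (N, 3) ground-truth and reconstructed vertex arrays (assumed layout).
        diff = v_rec - v_gt
        l1 = np.abs(diff).sum(axis=1).mean()
        l2 = np.linalg.norm(diff, axis=1).mean()
        return l1, l2

    def minmax_normalise(features):
        # Rescale each feature channel to [0, 1].
        lo, hi = features.min(axis=0), features.max(axis=0)
        return (features - lo) / (hi - lo + 1e-8)

    def standardise(features):
        # Zero-mean, unit-variance scaling per feature channel.
        mu, sigma = features.mean(axis=0), features.std(axis=0)
        return (features - mu) / (sigma + 1e-8)

The perceptual metrics referenced in the abstract (FMPD and DAME) depend on surface curvature and are not reproduced here.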

Keywords