Sensors (Feb 2023)

Geometry Sampling-Based Adaption to DCGAN for 3D Face Generation

  • Guoliang Luo,
  • Guoming Xiong,
  • Xiaojun Huang,
  • Xin Zhao,
  • Yang Tong,
  • Qiang Chen,
  • Zhiliang Zhu,
  • Haopeng Lei,
  • Juncong Lin

DOI: https://doi.org/10.3390/s23041937
Journal volume & issue: Vol. 23, no. 4, p. 1937

Abstract

Despite progress over the past decades, 3D shape acquisition techniques remain a bottleneck for many 3D face-based applications and have therefore attracted extensive research. Moreover, advanced 2D generative models based on deep networks may not be directly applicable to 3D objects because 2D and 3D data differ in dimensionality. In this work, we propose two novel sampling methods that represent 3D faces as matrix-like structured data better suited to deep networks: (1) a geometric sampling method that builds a structured representation of a 3D face from the intersections of iso-geodesic curves and radial curves, and (2) a depth-like map sampling method that records the average depth of the front surface within each grid cell. These sampling methods bridge the gap between unstructured 3D face models and powerful deep networks, enabling an unsupervised generative model of 3D faces. In particular, the resulting structured representations allow us to adapt 3D faces to the Deep Convolutional Generative Adversarial Network (DCGAN) and generate 3D faces with different expressions. We demonstrate the effectiveness of our generative model by producing a large variety of 3D faces with different expressions using the two sampling methods described above.
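As a rough illustration of the second method, the sketch below shows one way the depth-like map sampling could be implemented. It is a minimal Python/NumPy example under our own assumptions, not the paper's code: the function name and grid resolution are illustrative, and the face is assumed to be given as an (N, 3) vertex array viewed along the z axis.

    import numpy as np

    def depth_map_sampling(vertices, grid_size=64):
        """Turn an unstructured 3D face point set (N, 3) into a
        grid_size x grid_size depth-like map by averaging the depth (z)
        of the points that fall into each (x, y) grid cell."""
        xy = vertices[:, :2]
        z = vertices[:, 2]

        # Normalize x and y into [0, 1) so every point maps to a grid cell.
        mins, maxs = xy.min(axis=0), xy.max(axis=0)
        norm = (xy - mins) / (maxs - mins + 1e-9)
        cells = np.minimum((norm * grid_size).astype(int), grid_size - 1)

        # Accumulate depth sums and point counts per cell, then average.
        depth_sum = np.zeros((grid_size, grid_size))
        count = np.zeros((grid_size, grid_size))
        np.add.at(depth_sum, (cells[:, 1], cells[:, 0]), z)
        np.add.at(count, (cells[:, 1], cells[:, 0]), 1)
        depth_map = np.where(count > 0, depth_sum / np.maximum(count, 1), 0.0)

        # The result is a fixed-size matrix, i.e. image-like input a DCGAN can consume.
        return depth_map

Cells that receive no points are left at zero here; in practice, hole filling or interpolation would likely be needed before feeding the maps to a generative network.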

Keywords