IEEE Access (Jan 2024)

3D Clothed Human Body Generation Method Based on Inter-Frame Motion Prediction of 2D Images

  • Shaojiang Liu,
  • Zhiming Xu,
  • Zhijun Zheng,
  • Jinting Zhang,
  • Danyu Li,
  • Zemin Qiu

DOI
https://doi.org/10.1109/ACCESS.2024.3381497
Journal volume & issue
Vol. 12
pp. 47146–47154

Abstract

With the rapid progress of computer vision and deep learning techniques, accurately predicting continuous human motion from very few input images and generating high-quality 3D human models has become a cutting-edge research direction in this field. Despite advances in 2D-to-3D conversion techniques, capturing coherent motion from a limited number of image frames and generating texture-rich 3D models remains a great challenge. In this paper, we propose a 3D clothed human body generation method based on Inter-Frame Motion Prediction (IFMP) of 2D images, which not only predicts a series of coherent human motions but also reconstructs a detailed, textured 3D human body model from only two image frames. The method automatically focuses on key parts of the image through action encoding and uses a conditional generative adversarial network to generate a series of consecutive intermediate-frame images. A depth-aware implicit function representation then maps the 2D images to a 3D model, and high-quality textures of the clothed human body are obtained through texture mapping and model detail enhancement. Finally, experimental results validate the advantages of the IFMP method in predicting coherent image actions and verify the effectiveness of the generated 3D human models in terms of geometric accuracy and texture quality.
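The abstract describes a two-stage pipeline but gives no implementation details. The following is a minimal PyTorch sketch of the two stages as described: a conditional generator that synthesizes intermediate frames between two input frames, and a depth-aware implicit function that classifies 3D query points as inside or outside the body. All module names, layer sizes, and the time-plane conditioning scheme are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class IntermediateFrameGenerator(nn.Module):
    """Conditional generator (sketch): given two frames and a normalized
    time t in (0, 1), synthesize the intermediate frame at time t.
    Conditioning via a constant t-plane is an assumption."""
    def __init__(self, ch=64):
        super().__init__()
        # Input: the two RGB frames stacked channel-wise (6 channels)
        # plus one constant plane carrying t (7 channels total).
        self.net = nn.Sequential(
            nn.Conv2d(7, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, frame_a, frame_b, t):
        b, _, h, w = frame_a.shape
        t_plane = t.view(b, 1, 1, 1).expand(b, 1, h, w)
        x = torch.cat([frame_a, frame_b, t_plane], dim=1)
        return self.net(x)

class DepthAwareImplicitFunction(nn.Module):
    """Implicit surface (sketch): an MLP maps a pixel-aligned image
    feature, the query point's depth, and its 3D coordinates to an
    occupancy probability; the surface is the 0.5 level set."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 1 + 3, 256), nn.ReLU(inplace=True),
            nn.Linear(256, 256), nn.ReLU(inplace=True),
            nn.Linear(256, 1), nn.Sigmoid(),
        )

    def forward(self, pixel_feat, depth, xyz):
        # pixel_feat: (N, feat_dim), depth: (N, 1), xyz: (N, 3)
        return self.mlp(torch.cat([pixel_feat, depth, xyz], dim=1))

# Usage: interpolate five intermediate frames between two inputs, then
# query occupancy at sampled 3D points (image features assumed given).
gen = IntermediateFrameGenerator()
f0, f1 = torch.rand(1, 3, 128, 128), torch.rand(1, 3, 128, 128)
frames = [gen(f0, f1, torch.tensor([i / 6.0])) for i in range(1, 6)]

imp = DepthAwareImplicitFunction()
occ = imp(torch.rand(1024, 256), torch.rand(1024, 1), torch.rand(1024, 3))
```

In a full implementation, the generator would be trained adversarially against a discriminator on real intermediate frames, and a marching-cubes pass over the occupancy field would extract the mesh for texture mapping; both steps are omitted here for brevity.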

Keywords