IEEE Access (Jan 2020)

Construction of Virtual Video Scene and Its Visualization During Sports Training

  • Rui Yuan,
  • Zhendong Zhang,
  • Pengwei Song,
  • Jia Zhang,
  • Long Qin

DOI
https://doi.org/10.1109/ACCESS.2020.3007897
Journal volume & issue
Vol. 8
pp. 124999–125012

Abstract

This article studies actually captured human motion data for human motion synthesis and style transfer, constructs a virtual motion video scene, and attempts to generate human motion style video directly by establishing a sports style transfer model built on self-encoding. The original human motion capture data are mapped to a motion feature space for style transfer synthesis: a coding network maps the high-dimensional motion capture data to a low-dimensional feature space, motion style transfer constraints are established in that feature space, and the human body motion after style transfer is obtained by decoding. The paper further proposes a pixel-level human motion style transfer model based on conditional adversarial networks, using convolution to build two branch coding networks that extract features from the input style video and content pictures. The decoding network decodes the combined features and generates human motion video frame by frame. A Gram matrix establishes constraints on the encoding and decoding features and controls the movement style of the human body, finally realizing the visualization of the movement process. An incremental learning method based on a cascade network improves accuracy and achieves a posture measurement frequency of 200 Hz. The research results provide a key foundation for improving the sense of immersion in visual and tactile interaction simulation for sports.
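
As a rough illustration of the feature-space style transfer the abstract describes, the sketch below pairs a small autoencoder (mapping high-dimensional motion-capture frames to a low-dimensional feature space) with a Gram-matrix style constraint on the features. This is a minimal PyTorch sketch under assumed choices, not the authors' actual architecture: the 63-D pose vectors, 16-D latent space, layer sizes, and the style weight of 10.0 are all illustrative.

```python
import torch
import torch.nn as nn

class MotionAutoencoder(nn.Module):
    """Hypothetical encoder/decoder over per-frame pose vectors."""

    def __init__(self, input_dim=63, latent_dim=16):
        super().__init__()
        # Encoder: high-dimensional pose vector -> low-dimensional feature
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        # Decoder: feature -> reconstructed pose vector
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim),
        )

    def forward(self, x):
        # x: (batch, time, input_dim); Linear layers act on the last dim
        z = self.encoder(x)
        return self.decoder(z), z

def gram_matrix(features):
    # features: (batch, time, channels). The Gram matrix captures channel
    # correlations over time, a common proxy for "style" in transfer losses.
    b, t, c = features.shape
    f = features.transpose(1, 2)                     # (b, c, t)
    return torch.bmm(f, f.transpose(1, 2)) / (c * t)  # (b, c, c)

def style_transfer_loss(content_z, output_z, style_z):
    # Preserve content in feature space; match style via Gram matrices.
    content_loss = nn.functional.mse_loss(output_z, content_z)
    style_loss = nn.functional.mse_loss(gram_matrix(output_z),
                                        gram_matrix(style_z))
    return content_loss + 10.0 * style_loss  # weight is an arbitrary choice

# Usage sketch with random stand-in data (4 clips, 120 frames, 63-D poses):
model = MotionAutoencoder()
content = torch.randn(4, 120, 63)
style = torch.randn(4, 120, 63)
recon, z_content = model(content)
_, z_style = model(style)
loss = style_transfer_loss(z_content.detach(), z_content, z_style.detach())
```

In a full training loop one would backpropagate this loss to optimize the latent codes or the decoder; the pixel-level conditional-adversarial model in the paper would additionally add a discriminator over the generated video frames, which is omitted here.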

Keywords