Computational Visual Media (Jan 2024)

Real-time distance field acceleration based free-viewpoint video synthesis for large sports fields

  • Yanran Dai,
  • Jing Li,
  • Yuqi Jiang,
  • Haidong Qin,
  • Bang Liang,
  • Shikuan Hong,
  • Haozhe Pan,
  • Tao Yang

DOI
https://doi.org/10.1007/s41095-022-0323-3
Journal volume & issue
Vol. 10, no. 2
pp. 331 – 353

Abstract

Free-viewpoint video allows the user to view objects from any virtual perspective, creating an immersive visual experience. This technology enhances the interactivity and freedom of multimedia performances. However, few free-viewpoint video synthesis methods achieve both real-time performance and high precision, particularly on sports fields with large areas and numerous moving objects. To address these issues, we propose a free-viewpoint video synthesis method based on distance field acceleration. The central idea is to fuse multi-view distance field information and use it to adjust the search step size adaptively. Adaptive step size search is used in two ways: for fast estimation of multi-object three-dimensional surfaces, and for synthetic view rendering based on global occlusion judgement. We implemented our method with parallel computing for interactive display, using the CUDA and OpenGL frameworks, and evaluated it on real-world and simulated experimental datasets. The results show that the proposed method can render free-viewpoint videos with multiple objects on large sports fields at 25 fps. Furthermore, the visual quality of our synthetic novel viewpoint images exceeds that of state-of-the-art neural-rendering-based methods.
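The adaptive step-size search the abstract describes is in the spirit of classic sphere tracing over a distance field: the queried distance bounds the empty space around a sample point, so a ray can safely advance by that amount. A minimal, illustrative sketch of this idea (not the authors' CUDA implementation; the toy sphere scene and all names here are assumptions):

```python
import math

def sphere_sdf(p, center=(0.0, 0.0, 3.0), radius=1.0):
    # Hypothetical stand-in for a fused multi-view distance field:
    # signed distance from point p to a single sphere.
    return math.dist(p, center) - radius

def adaptive_march(origin, direction, sdf, max_steps=128, eps=1e-4, t_max=100.0):
    # Adaptive step-size search (sphere tracing): each step advances by
    # the distance field value, so steps are large in empty space and
    # shrink automatically near surfaces.
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        dist = sdf(p)
        if dist < eps:
            return t  # ray has reached a surface
        t += dist     # safe step: no surface lies closer than dist
        if t > t_max:
            break
    return None       # no intersection within range

hit = adaptive_march((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), sphere_sdf)
```

For the toy sphere centered at depth 3 with radius 1, a ray from the origin along +z reaches the surface at t = 2. The same marching loop can serve both uses named in the abstract: surface estimation (finding the hit point) and occlusion judgement (checking whether any surface lies between a point and a camera).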

Keywords