IEEE Access (Jan 2020)

A High-Quality VR Calibration and Real-Time Stitching Framework Using Preprocessed Features

  • Saleh Saeed,
  • Muhammad Umer Kakli,
  • Yongju Cho,
  • Jeongil Seo,
  • Unsang Park

DOI
https://doi.org/10.1109/ACCESS.2020.3031413
Journal volume & issue
Vol. 8
pp. 190300–190311

Abstract

Virtual Reality (VR) content consists of 360°×180° seamless panoramic videos stitched from multiple overlapping video streams. Many commercial VR devices use a two-camera rig to capture VR content; such devices exhibit increased radial distortion along the stitching seams. Moreover, a fixed number of cameras in the rig makes the VR system non-scalable. Since the VR experience is directly related to the quality of VR content, it is desirable to create a VR framework that is scalable in the number of cameras attached to the camera rig and offers better geometric and photometric quality. In this paper, we propose an end-to-end VR system for stitching full spherical content. The system is composed of camera rig calibration and stitching modules. The calibration module performs geometric alignment of the camera rig, while the stitching module transforms texture from the camera or video streams into the VR stream using lookup tables (LUTs) and blend masks (BMs). Our main contribution is the improvement of stitching quality. First, we propose a feature preprocessing method that filters out inconsistent, error-prone features. Second, we propose a geometric alignment method that outperforms state-of-the-art VR stitching solutions. We tested our system on diverse image sets and obtained state-of-the-art geometric alignment. Moreover, we achieved real-time stitching of camera and video streams at up to 120 fps at 4K resolution. After stitching, we encode the VR content for IP multicasting.
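
The LUT-and-blend-mask pipeline described in the abstract can be illustrated with a short sketch. The code below is not the authors' implementation; it only shows, using OpenCV, how precomputed per-pixel coordinate maps (the LUTs) and blend masks could turn a set of camera frames into one equirectangular panorama. The function stitch_frame and all parameter names are hypothetical.

    import cv2
    import numpy as np

    def stitch_frame(frames, luts, blend_masks, pano_size):
        # frames: list of HxWx3 uint8 camera images
        # luts: list of (map_x, map_y) float32 arrays, each sized to the
        #       panorama, giving the source pixel for every output pixel
        # blend_masks: list of panorama-sized float32 weights in [0, 1]
        #              that sum to 1 where neighboring views overlap
        # pano_size: (height, width) of the equirectangular output
        pano = np.zeros((*pano_size, 3), dtype=np.float32)
        for frame, (map_x, map_y), mask in zip(frames, luts, blend_masks):
            # Warp the camera frame into panorama space via its LUT.
            warped = cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR)
            # Feather the warped view into the panorama with its blend mask.
            pano += warped.astype(np.float32) * mask[..., None]
        return np.clip(pano, 0, 255).astype(np.uint8)

In a design of this shape, the maps and masks are computed once at calibration time, so each incoming frame costs only a per-camera remap and a weighted sum; this is the kind of fixed, per-pixel workload that makes real-time stitching rates feasible.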

Keywords