IEEE Access (Jan 2022)

OmniVO: Toward Robust Omni Directional Visual Odometry With Multicamera Collaboration for Challenging Conditions

  • Zeeshan Javed
  • Gon-Woo Kim

DOI
https://doi.org/10.1109/ACCESS.2022.3204870
Journal volume & issue
Vol. 10
pp. 99861–99874

Abstract

With recent developments in computer vision, vision-based odometry plays an increasingly important role in autonomous systems. However, traditional visual odometry and visual simultaneous localization and mapping (vSLAM) perform well only in simple environments with salient structural features. Visual odometry can easily fail in complex environments due to the sparsity of stable features, sensor failure, extreme weather, or strong sunlight. Traditional monocular-camera algorithms are highly affected by these issues, leading to stability and reliability problems. To address them, an omnidirectional visual odometry system based on multi-camera collaboration is proposed. The major contributions are a feature-based omnidirectional odometry pipeline and a feature-prioritization scheme that limits the computational complexity introduced by multiple cameras. First, a multi-camera state perception model is developed based on the spherical camera model, which guarantees an accurate transformation from each camera to the spherical coordinate system. Feature detection and tracking are performed on each camera's images in parallel threads. Tracking features across the different cameras prevents failures; however, it also adds computational load to the pose estimation and optimization module. To budget the feature distribution, a feature-prioritization algorithm is proposed to limit the number of features, and a multi-view pose refinement module further reduces the system's complexity. Feature prioritization maintains a smaller set of tracked features with comparable accuracy at lower computational cost. Finally, the pose is estimated in the spherical coordinate system by projecting all successfully tracked keypoints onto the sphere using the omnidirectional perception model. To validate the proposed method, a dataset is collected in outdoor environments with ground truth provided by high-accuracy GPS. Detailed qualitative and quantitative evaluations show that the proposed algorithm improves position accuracy by about 40%–60% compared with state-of-the-art methods while keeping computation time limited.
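To make the two central ideas of the abstract concrete, the following is a minimal Python/NumPy sketch, not the authors' implementation: it back-projects per-camera keypoints onto a common unit sphere (a simplified stand-in for the paper's omnidirectional perception model, here assuming pinhole intrinsics `K` and a camera-to-rig rotation) and then applies a budgeted feature-prioritization step. The names `track_quality`, `budget`, and the azimuth-binning heuristic are illustrative assumptions; the paper defines its own spherical model and selection criteria.

```python
import numpy as np

def pixel_to_sphere(uv, K, R_cam_to_rig):
    """Back-project pixel coordinates (N, 2) to unit bearing vectors
    in the rig frame. Assumes pinhole intrinsics K per camera; the
    paper's full model handles the spherical camera geometry directly."""
    ones = np.ones((uv.shape[0], 1))
    rays = (np.linalg.inv(K) @ np.hstack([uv, ones]).T).T  # camera-frame rays
    rays = rays @ R_cam_to_rig.T                           # rotate into rig frame
    return rays / np.linalg.norm(rays, axis=1, keepdims=True)

def prioritize_features(bearings, track_quality, budget):
    """Keep at most `budget` features, preferring well-tracked ones
    while spreading them over the sphere via coarse azimuth bins,
    so no single camera monopolizes the budget."""
    azimuth = np.arctan2(bearings[:, 1], bearings[:, 0])
    bins = np.digitize(azimuth, np.linspace(-np.pi, np.pi, 9))  # 8 sectors
    per_bin = max(1, budget // 8)
    keep = []
    for b in np.unique(bins):
        idx = np.flatnonzero(bins == b)
        best = idx[np.argsort(-track_quality[idx])][:per_bin]  # best tracks first
        keep.extend(best.tolist())
    return np.array(sorted(keep[:budget]))

# Toy usage: 100 keypoints from one camera, keep the 40 best-distributed.
rng = np.random.default_rng(0)
K = np.array([[400.0, 0, 320], [0, 400.0, 240], [0, 0, 1]])
uv = rng.uniform([0, 0], [640, 480], size=(100, 2))
bearings = pixel_to_sphere(uv, K, np.eye(3))
selected = prioritize_features(bearings, rng.uniform(size=100), budget=40)
```

Binning before ranking is one simple way to realize the abstract's goal of "budgeting the feature distribution": a per-sector quota keeps the retained set spatially spread across the omnidirectional field of view rather than clustered in the most texture-rich camera.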

Keywords