IEEE Access (Jan 2020)

Robust Localization System Fusing Vision and Lidar Under Severe Occlusion

  • Yongliang Shi,
  • Weimin Zhang,
  • Fangxing Li,
  • Qiang Huang

DOI
https://doi.org/10.1109/ACCESS.2020.2981520
Journal volume & issue
Vol. 8
pp. 62495 – 62504

Abstract

Localization is one of the most fundamental problems for mobile robots. To address the tendency of a robot to become lost during navigation under severe occlusion, this paper proposes a robust localization system that combines vision and lidar. The system is split into an off-line stage and an online stage. In the off-line stage, this paper introduces a method for actively detecting and recording visual landmarks, and an off-line visual bag-of-words is trained from the recorded landmarks. In the online stage, the prediction and update phases of Adaptive Monte Carlo Localization (AMCL) are each improved to enhance localization performance. The prediction phase generates the proposal distribution from prior information obtained by retrieving visual landmarks, and a newly proposed measurement model that selects reliable lidar beams as the observation updates the prediction. Experiments are carried out under strict conditions: 60% of the lidar is occluded, only 1/12 of the beams are used as observations, and at most 300 particles are adopted. The results show that, in both global localization and pose tracking, the proposed localization system performs much better than the state-of-the-art AMCL algorithm.
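The beam-selection idea in the abstract (using only 1/12 of the lidar beams as observations in the particle-filter update) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the Gaussian beam model, the toy one-wall world, and all function names here are assumptions introduced for the example.

```python
import math

def beam_likelihood(z_measured, z_expected, sigma=0.2):
    # Gaussian measurement model for a single lidar beam (assumed form)
    return math.exp(-0.5 * ((z_measured - z_expected) / sigma) ** 2)

def expected_range(pose_x, wall_x=5.0):
    # Toy 1D world: a single wall at x = wall_x, beam pointing along +x
    return wall_x - pose_x

def update_weights(particles, measurements, step=12):
    # Weight each particle by the product of per-beam likelihoods,
    # using only every `step`-th beam (mirroring the 1/12 subsampling)
    selected = range(0, len(measurements), step)
    weights = []
    for p in particles:
        w = 1.0
        for i in selected:
            w *= beam_likelihood(measurements[i], expected_range(p))
        weights.append(w)
    total = sum(weights)
    return [w / total for w in weights]

# Usage: true pose at x = 2.0, so each beam measures 3.0 m to the wall;
# the particle at 2.0 should receive the largest normalized weight.
particles = [1.0, 2.0, 3.0]
measurements = [3.0] * 36
weights = update_weights(particles, measurements)
```

In a full system the selected beams would additionally be filtered for reliability (e.g. discarding beams that hit unexpected occluders), which is the part the paper's measurement model contributes.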

Keywords