Journal of Robotics (Jan 2022)

Learning a Robust Hybrid Descriptor for Robot Visual Localization

  • Qingwu Shi
  • Junjun Wu
  • Zeqin Lin
  • Ningwei Qin

DOI: https://doi.org/10.1155/2022/9354909
Journal volume & issue: Vol. 2022

Abstract


Long-term robust visual localization is one of the main challenges of long-term visual navigation for mobile robots. When mobile robots navigate continuously with visual information in complex scenes, factors such as illumination, weather, and season can cause localization to fail within a few hours. Semantic segmentation images, however, are more stable than the original RGB images under such drastic environmental variation. To exploit the complementary advantages of a segmentation image and its original image, this paper builds on recent work in semantic segmentation and proposes a novel hybrid descriptor for long-term visual localization: a semantic image descriptor extracted from segmentation images is combined, under a certain weight, with an image descriptor extracted from RGB images, and the result is trained with a convolutional neural network. Our experiments show that this method, which combines the advantages of the semantic image descriptor and the image descriptor, outperforms long-term visual localization methods that rely on only an image descriptor or only a semantic image descriptor. Finally, on the Extended CMU Seasons and RobotCar Seasons datasets, our results exceed state-of-the-art 2D image-based localization methods at specific precision metrics under most of the challenging environmental conditions.
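The weighted combination at the core of the hybrid descriptor can be summarized in a short sketch. The following is a minimal illustration only, not the authors' implementation: the function name hybrid_descriptor, the weight alpha, and the assumption that both descriptors are L2-normalized vectors are hypothetical.

```python
import numpy as np

def hybrid_descriptor(rgb_desc, sem_desc, alpha=0.5):
    """Combine an RGB image descriptor and a semantic image descriptor
    with weight alpha, then re-normalize the result.

    Hypothetical formulation: the paper combines the two descriptors
    "with a certain weight"; the exact scheme may differ.
    """
    rgb = rgb_desc / np.linalg.norm(rgb_desc)   # L2-normalize RGB descriptor
    sem = sem_desc / np.linalg.norm(sem_desc)   # L2-normalize semantic descriptor
    h = alpha * sem + (1.0 - alpha) * rgb       # weighted combination
    return h / np.linalg.norm(h)                # re-normalize hybrid descriptor


# Usage sketch: retrieve the best-matching database image for a query.
# Because all descriptors are unit-length, the dot product equals
# cosine similarity.
query = hybrid_descriptor(np.random.rand(256), np.random.rand(256))
database = [hybrid_descriptor(np.random.rand(256), np.random.rand(256))
            for _ in range(10)]
best_match = int(np.argmax([query @ d for d in database]))
```

With this kind of formulation, alpha controls how much the matching relies on the season- and illumination-stable semantic channel versus the more detailed but less stable RGB channel.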