Xi'an Gongcheng Daxue xuebao (Dec 2021)

An instance mapping method based on visual SLAM

  • Xiaohua WANG,
  • Yaoguang LI,
  • Wenjie WANG,
  • Lei ZHANG

DOI
https://doi.org/10.13338/j.issn.1674-649x.2021.06.008
Journal volume & issue
Vol. 35, no. 6
pp. 54 – 61

Abstract


In the process of simultaneous localization and mapping (SLAM) with vision sensors, the robot does not exploit instance information, which leads to insufficient environmental perception and low map construction accuracy. In this paper, an instance mapping method for visual SLAM based on object detection and point cloud segmentation was proposed. Firstly, an improved lightweight object detection algorithm, YOLOv4-tiny, was used to extract instance information from two-dimensional environmental images. Secondly, a point cloud segmentation method was presented, in which the three-dimensional point cloud corresponding to the two-dimensional image information was segmented into object instances to improve segmentation accuracy. Finally, the segmented instances were imported into the ORB-SLAM2 framework to build a high-precision point cloud map with instance information. The experimental results show that the improved YOLOv4-tiny detection algorithm improves detection accuracy by 8.1% on the constructed data set, and the improved point cloud segmentation method improves the average object segmentation rate by 12.5% compared with the LCCP algorithm. The accuracy of the instance map built in a real environment is better than that of the ORB-SLAM2 algorithm.
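
As a rough illustration of the pipeline the abstract outlines, the sketch below (Python, not from the paper) back-projects the depth pixels inside a 2D detection box into camera-frame 3D points and keeps the dominant depth cluster as the instance point cloud. The intrinsics, depth scale, detector output format, and the depth-binning heuristic are all assumptions made for the example; the paper's actual segmentation method is more involved.

    # Minimal sketch (not the authors' code) of attaching a 2D detection to a
    # 3D instance point cloud: back-project depth pixels inside the box, then
    # keep the most populated depth cluster to discard background.
    import numpy as np

    # Assumed pinhole intrinsics and depth scale; real values would come from
    # the RGB-D camera calibration used with ORB-SLAM2.
    FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5
    DEPTH_SCALE = 1000.0  # depth image stores millimetres

    def backproject_box(depth: np.ndarray, box: tuple) -> np.ndarray:
        """Back-project the depth pixels inside a 2D detection box
        (x_min, y_min, x_max, y_max) into camera-frame 3D points."""
        x0, y0, x1, y1 = box
        patch = depth[y0:y1, x0:x1].astype(np.float64) / DEPTH_SCALE
        vs, us = np.mgrid[y0:y1, x0:x1]   # pixel coordinates of the patch
        valid = patch > 0                  # drop missing depth readings
        z = patch[valid]
        x = (us[valid] - CX) * z / FX
        y = (vs[valid] - CY) * z / FY
        return np.stack([x, y, z], axis=1)  # N x 3 points in metres

    def keep_dominant_cluster(points: np.ndarray, bin_size: float = 0.15) -> np.ndarray:
        """Crude stand-in for point cloud segmentation: keep only the points
        whose depth falls in the most populated depth bin, which removes most
        background inevitably enclosed by the 2D box."""
        if len(points) == 0:
            return points
        bins = np.floor(points[:, 2] / bin_size).astype(int)
        labels, counts = np.unique(bins, return_counts=True)
        return points[bins == labels[np.argmax(counts)]]

    if __name__ == "__main__":
        depth = np.random.randint(500, 3000, size=(480, 640)).astype(np.uint16)
        det_box = (200, 150, 360, 330)  # a hypothetical YOLOv4-tiny detection
        instance_points = keep_dominant_cluster(backproject_box(depth, det_box))
        print(instance_points.shape)

In the full system described by the abstract, each such instance point cloud would then be transformed by the camera pose estimated by ORB-SLAM2 and fused into the global map, so the map carries per-object instance labels rather than an undifferentiated point cloud.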

Keywords