Sensors (Dec 2023)

YDD-SLAM: Indoor Dynamic Visual SLAM Fusing YOLOv5 with Depth Information

  • Peichao Cong,
  • Junjie Liu,
  • Jiaxing Li,
  • Yixuan Xiao,
  • Xilai Chen,
  • Xinjie Feng,
  • Xin Zhang

DOI
https://doi.org/10.3390/s23239592
Journal volume & issue
Vol. 23, no. 23
p. 9592

Abstract


Simultaneous localization and mapping (SLAM) technology is key to robot autonomous navigation. Most visual SLAM (VSLAM) algorithms for dynamic environments cannot achieve sufficient positioning accuracy and real-time performance simultaneously, and when the proportion of dynamic objects in the scene is too high, the VSLAM algorithm will fail. To solve these problems, this paper proposes an indoor dynamic VSLAM algorithm called YDD-SLAM based on ORB-SLAM3, which introduces the YOLOv5 object detection algorithm and integrates depth information. Firstly, the objects detected by YOLOv5 are divided into eight subcategories according to their motion characteristics and depth values. Secondly, the depth ranges of dynamic objects and of potentially dynamic objects in a moving state are calculated. The depth value of each feature point inside a detection box is then compared with the depth range of the detected object to determine whether the point is a dynamic feature point; if it is, the point is eliminated. Furthermore, multiple feature-point optimization strategies were developed for VSLAM in dynamic environments. A public dataset and an actual dynamic scenario were used for testing. The accuracy of the proposed algorithm was significantly improved compared to that of ORB-SLAM3. This work provides a theoretical foundation for the practical application of a dynamic VSLAM algorithm.
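The core filtering step described in the abstract (comparing a feature point's depth with the depth range of the object detected around it) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the percentile-based depth-range estimate, and the `depth_margin` tolerance are all assumptions made for the example.

```python
import numpy as np

def filter_dynamic_points(keypoints_uv, depth_map, boxes, depth_margin=0.3):
    """Hypothetical sketch of depth-based dynamic feature filtering.

    keypoints_uv: (N, 2) array of (u, v) pixel coordinates of feature points
    depth_map:    (H, W) depth image in metres (0 = invalid measurement)
    boxes:        list of (x1, y1, x2, y2) detection boxes for dynamic objects
    depth_margin: assumed tolerance (m) around the object's depth range
    """
    keep = np.ones(len(keypoints_uv), dtype=bool)
    for (x1, y1, x2, y2) in boxes:
        # Estimate the object's depth range from valid depths inside the box;
        # percentiles reduce the influence of background pixels in the box.
        patch = depth_map[y1:y2, x1:x2]
        valid = patch[patch > 0]
        if valid.size == 0:
            continue
        d_lo = np.percentile(valid, 10) - depth_margin
        d_hi = np.percentile(valid, 90) + depth_margin
        for i, (u, v) in enumerate(keypoints_uv):
            u, v = int(u), int(v)
            if x1 <= u < x2 and y1 <= v < y2:
                d = depth_map[v, u]
                # A point inside the box whose depth falls within the object's
                # depth range is treated as dynamic and discarded; points in
                # the box but at background depth are kept.
                if d > 0 and d_lo <= d <= d_hi:
                    keep[i] = False
    return keypoints_uv[keep]
```

In this sketch, a feature point that lies inside a detection box but at a clearly different depth (e.g. on the wall behind a person) survives the filter, which is the motivation the abstract gives for fusing detection boxes with depth rather than discarding every point in a box.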

Keywords