PLoS ONE (Jan 2018)

SLAMM: Visual monocular SLAM with continuous mapping using multiple maps.

  • Hayyan Afeef Daoud,
  • Aznul Qalid Md Sabri,
  • Chu Kiong Loo,
  • Ali Mohammed Mansoor

DOI
https://doi.org/10.1371/journal.pone.0195878
Journal volume & issue
Vol. 13, no. 4
p. e0195878

Abstract


This paper presents the concept of Simultaneous Localization and Multi-Mapping (SLAMM). It is a system that ensures continuous mapping and information preservation despite tracking failures caused by corrupted frames or sensor malfunction, making it suitable for real-world applications. It works with single or multiple robots. In a single-robot scenario, the algorithm generates a new map when tracking fails and later merges maps upon loop closure. Similarly, maps generated by multiple robots are merged without prior knowledge of their relative poses, which makes the algorithm flexible. The system works in real time at frame rate. The proposed approach was tested on the KITTI and TUM RGB-D public datasets and showed superior results compared to the state of the art in calibrated, monocular, keyframe-based visual SLAM. The mean tracking time is around 22 milliseconds. Initialization is twice as fast as in ORB-SLAM, and the retrieved map can preserve up to 90 percent more information, depending on tracking-loss and loop-closure events. For the benefit of the community, the source code, along with a framework to run it with the Bebop drone, is available at https://github.com/hdaoud/ORBSLAMM.
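The core mechanism described in the abstract, starting a fresh map on tracking failure rather than discarding prior work, then merging maps once a loop closure relates them, can be sketched as simple multi-map bookkeeping. The following is a conceptual illustration only, not the authors' implementation; all class and method names (`Map`, `MultiMapper`, `on_loop_closure`) are hypothetical, and a real system would additionally estimate a similarity transform between maps and fuse duplicated landmarks before merging.

```python
# Conceptual sketch of SLAMM-style multi-map handling (hypothetical names,
# not the authors' code). Keyframes are represented as opaque tokens.

class Map:
    """Holds the keyframes of one sub-map."""
    def __init__(self, map_id):
        self.map_id = map_id
        self.keyframes = []  # a real SLAM map would store poses and landmarks

    def add_keyframe(self, kf):
        self.keyframes.append(kf)


class MultiMapper:
    """Maintains several maps so information survives tracking failures."""
    def __init__(self):
        self.maps = []
        self.active = self._new_map()

    def _new_map(self):
        m = Map(len(self.maps))
        self.maps.append(m)
        return m

    def process_frame(self, frame, tracked):
        """Extend the active map on success; on tracking failure, keep the
        old map intact and start a new one instead of discarding it."""
        if not tracked:
            self.active = self._new_map()
            return
        self.active.add_keyframe(frame)

    def on_loop_closure(self, map_a, map_b):
        """Merge map_b into map_a when a loop closure links their keyframes.
        A real system would first align the maps (e.g. via a Sim(3)
        transform) and fuse redundant map points."""
        if map_a is map_b:
            return map_a
        map_a.keyframes.extend(map_b.keyframes)
        self.maps.remove(map_b)
        if self.active is map_b:
            self.active = map_a
        return map_a
```

Under this sketch, a tracking loss splits the trajectory into two maps, and a later loop closure reunites them, so keyframes collected before the failure are preserved rather than lost:

```python
mapper = MultiMapper()
mapper.process_frame("kf0", tracked=True)
mapper.process_frame("kf1", tracked=False)  # tracking lost: new map starts
mapper.process_frame("kf2", tracked=True)
merged = mapper.on_loop_closure(mapper.maps[0], mapper.maps[1])
# merged.keyframes == ["kf0", "kf2"]; both maps' information survives
```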