The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences (May 2024)

Dense collaborative mapping with deep visual SLAM method

  • P. Pan,
  • W. Zhang,
  • N. Haala

DOI: https://doi.org/10.5194/isprs-archives-XLVIII-1-2024-547-2024
Journal volume & issue: Vol. XLVIII-1-2024, pp. 547–554

Abstract

The creation of highly accurate and collaborative mapping algorithms is crucial for the progress of SLAM technology, as it greatly improves the efficiency of building detailed maps. In the area of mapping based on a single moving trajectory, DROID-SLAM (Differentiable Recurrent Optimization-Inspired Design) (Teed and Deng, 2021) stands out as an innovative deep learning-based method, providing a visual-only solution that works with various camera types, such as monocular, stereo, and RGB-D. Its ability to create maps with excellent accuracy makes it superior to well-known methods like ORB-SLAM3 (Campos et al., 2021). Despite its impressive individual mapping performance, DROID-SLAM does not account for scenarios involving multi-session data or collaborative map creation by multiple agents. To address this problem, we propose two collaborative map construction algorithms built upon DROID-SLAM. In contrast to prior methods that compute explicit relative transformations for loop closures, our algorithms leverage the power of deep learning-based bundle adjustment, using dense per-pixel correspondences, to merge maps into a globally consistent state. These algorithms have been thoroughly tested with stereo and RGB-D models. We validated the effectiveness of our proposed algorithms on both public and self-collected datasets, showing higher accuracy than prior methods. By leveraging the strengths of DROID-SLAM while addressing its limitations with our novel algorithms, we extend the application scenarios of this method and provide a new way of thinking about collaborative mapping.
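The core idea of the abstract — merging multiple agents' maps through one joint optimization over correspondences, rather than first computing an explicit relative transform between the maps — can be illustrated with a toy sketch. The example below is a hypothetical 2D analogue (it does not use DROID-SLAM's actual dense, learned bundle adjustment): two agents observe shared landmarks, and all poses and landmarks enter a single least-squares problem, after which the initially inconsistent sessions land in one globally consistent frame.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy 2D "bundle adjustment" merge (illustration only, not the paper's method):
# two agents observe the same landmarks; instead of estimating an explicit
# inter-map transform, every pose and landmark is optimized jointly.

rng = np.random.default_rng(0)

landmarks_gt = rng.uniform(-5, 5, size=(8, 2))          # shared 2D landmarks
poses_gt = np.array([[0.0, 0.0], [1.0, 0.5],            # agent A's poses
                     [4.0, 4.0], [5.0, 4.5]])           # agent B's poses

# Each pose observes every landmark as a noisy relative offset.
obs = np.array([lm - p for p in poses_gt for lm in landmarks_gt])
obs += rng.normal(scale=0.01, size=obs.shape)

def residuals(x):
    poses = x[:8].reshape(4, 2)
    lms = x[8:].reshape(8, 2)
    pred = np.array([lm - p for p in poses for lm in lms])
    # Gauge fixing: pin agent A's first pose at the origin.
    return np.concatenate([(pred - obs).ravel(), 100.0 * poses[0]])

# Initialize agent B's sub-map with a large offset (inconsistent sessions).
x0 = np.concatenate([
    (poses_gt + np.array([[0, 0], [0, 0], [3, -2], [3, -2]])).ravel(),
    (landmarks_gt + rng.normal(scale=0.5, size=(8, 2))).ravel(),
])

sol = least_squares(residuals, x0)
poses_est = sol.x[:8].reshape(4, 2)
print(np.allclose(poses_est, poses_gt, atol=0.05))  # merged map is consistent
```

The joint formulation converges to a single consistent state because the shared observations couple both agents' variables in one residual vector; no separate loop-closure transform is ever computed, which mirrors (in miniature) the merging strategy the abstract describes.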