IEEE Access (Jan 2024)
DP-Loc: Visual Localization in 2D Maps Using an Embedded Depth Prior
Abstract
Recent advancements in cost-effective image-based localization using 2D maps have garnered significant attention, inspired by humans’ ability to navigate with such maps. This study addresses the limitations of monocular vision-based systems, specifically inaccurate depth information and the loss of geometric detail, which hinder precise localization. We propose a novel neural network framework that incorporates a pretrained metric depth estimation model, such as ZoeDepth, to accurately measure absolute distances and enhance matching between 2D maps and images. Our approach introduces two key modules: an Explicit Depth Prior Fusion (EDPF) module, which constructs a depth score volume from depth maps, and an Implicit Depth Prior Fusion (IDPF) module, which fuses depth and semantic features at an early stage through positional encoding. These modules enable a single-scale classifier to learn the features essential for effective localization. Notably, the IDPF model with positional encoding achieved an improvement of over 10% on the Mapillary dataset compared to the baseline, underscoring the advantages of combining semantic and geometric information. The proposed DP-Loc approach provides a cost-efficient solution for visual localization by leveraging publicly accessible 2D maps and monocular image inputs, making it applicable to autonomous driving, robotics, and augmented reality.
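To make the IDPF idea concrete, the sketch below illustrates one plausible way to inject a positional encoding of per-pixel metric depth into image semantic features before map matching. This is a minimal, hypothetical illustration written against PyTorch, not the authors' implementation; the layer sizes, frequency count, and module names are assumptions.

```python
# Hypothetical sketch of an IDPF-style fusion block (not the authors' code).
import torch
import torch.nn as nn

def depth_positional_encoding(depth, num_freqs=8):
    """Encode a metric depth map of shape (B, 1, H, W) with sinusoidal frequencies."""
    freqs = (2.0 ** torch.arange(num_freqs, device=depth.device)) * torch.pi
    angles = depth * freqs.view(1, -1, 1, 1)            # (B, num_freqs, H, W)
    return torch.cat([torch.sin(angles), torch.cos(angles)], dim=1)

class ImplicitDepthPriorFusion(nn.Module):
    """Fuse semantic features with an embedded depth prior (hypothetical layer sizes)."""
    def __init__(self, feat_dim=128, num_freqs=8):
        super().__init__()
        self.num_freqs = num_freqs
        self.proj = nn.Conv2d(feat_dim + 2 * num_freqs, feat_dim, kernel_size=1)

    def forward(self, semantic_feats, depth):
        # semantic_feats: (B, C, H, W) from the image encoder
        # depth: (B, 1, H, W) metric depth from a pretrained estimator (e.g. ZoeDepth)
        pe = depth_positional_encoding(depth, self.num_freqs)
        fused = torch.cat([semantic_feats, pe], dim=1)   # early fusion of depth prior
        return self.proj(fused)

# Usage (assumed pipeline): feats = encoder(image); depth = depth_model(image)
# fused = ImplicitDepthPriorFusion()(feats, depth), then passed to the map-matching head.
```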
Keywords