ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences (Jun 2024)

Global localization for Mixed Reality visualization using wireframe extraction from images

  • S. Einizinab,
  • K. Khoshelham,
  • S. Winter,
  • P. Christopher

DOI
https://doi.org/10.5194/isprs-annals-X-4-W5-2024-119-2024
Journal volume & issue
Vol. X-4-W5-2024
pp. 119 – 126

Abstract

Mixed Reality (MR) global localization involves precisely tracking the device’s position and orientation within a digital representation, such as a Building Information Model (BIM). Existing model-based MR global localization approaches have difficulty handling environmental changes between the BIM and the real world, particularly on dynamic construction sites. A further challenge in MR systems is localization drift, where the gradual accumulation of positional errors over time leads to inaccuracies in the device’s estimated position and orientation within the virtual model. We develop a method that extracts the structural elements of the building, referred to as a wireframe, which are less likely to change due to their inherent permanence. The extraction of these features is computationally inexpensive enough to be performed on the MR device, ensuring reliable and continuous global localization over time and thereby overcoming localization drift. The method uses a deep Convolutional Neural Network (CNN) to extract 2D wireframes from images. 3D wireframes are then reconstructed by combining the extracted 2D wireframes with their depth information. The simplified 3D wireframe is subsequently aligned with the BIM. Real-world experiments demonstrate the method’s effectiveness in 3D wireframe extraction and alignment with the BIM, successfully mitigating drift by 4 cm in prolonged corridor scans.
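The 2D-to-3D lifting step the abstract describes can be sketched as back-projecting the endpoints of each detected 2D line segment into camera coordinates using per-pixel depth and the pinhole camera model. This is a minimal illustration, not the authors' implementation: the intrinsics (fx, fy, cx, cy), the depth map, and the `lines_2d` input (which in the paper would come from the CNN wireframe detector) are all assumed inputs.

```python
import numpy as np

def backproject(u, v, z, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth z into camera coordinates
    using the standard pinhole model (assumed here, not from the paper)."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

def lift_wireframe(lines_2d, depth_map, fx, fy, cx, cy):
    """Convert 2D line segments [(u1, v1, u2, v2), ...] into 3D segments
    by looking up depth at each endpoint and back-projecting."""
    segments_3d = []
    for u1, v1, u2, v2 in lines_2d:
        z1 = depth_map[int(v1), int(u1)]  # depth map indexed (row, col)
        z2 = depth_map[int(v2), int(u2)]
        if z1 <= 0 or z2 <= 0:  # skip endpoints with invalid depth
            continue
        p1 = backproject(u1, v1, z1, fx, fy, cx, cy)
        p2 = backproject(u2, v2, z2, fx, fy, cx, cy)
        segments_3d.append((p1, p2))
    return segments_3d
```

The resulting 3D segments would then be simplified and registered against the BIM, which is the alignment step the abstract mentions; that registration is not shown here.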