ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences (May 2022)
POSE ESTIMATION THROUGH MASK R-CNN AND VSLAM IN LARGE-SCALE OUTDOOR AUGMENTED REALITY
Abstract
Deep Learning (DL) integrated into Mobile Augmented Reality (MAR) enables a new information-delivery paradigm. In the context of 6-DoF pose estimation, powerful DL networks could provide a direct solution for AR systems. However, their concurrent operation requires a significant number of computations per frame and leads to both misclassifications and localization errors. In this paper, a hybrid, lightweight solution for 3D tracking of arbitrary geometry in outdoor MAR scenarios is presented. The camera pose information obtained by the ARCore SDK and a vSLAM algorithm is combined with the semantic and geometric output of a CNN object detector to validate and improve tracking performance in large-scale and uncontrolled outdoor environments. The methodology involves three main steps: i) training the Mask R-CNN model to extract class, bounding box and mask predictions, ii) real-time detection, segmentation and localization of the region of interest (ROI) in camera frames, and iii) computation of 2D-3D correspondences to enhance the pose estimation of a 3D overlay. The dataset consists of 30 images of the rock of St. Modestos (Modi) in Meteora, Greece, in which the ROI is an area with characteristic geological features. The comparative evaluation of the prototype system against the original one, as well as against the R-CNN and Fast R-CNN detectors, demonstrates higher precision and accuracy and stable visualization at a distance of half a kilometre, while tracking time decreased by 42% during the far-field AR session.
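To make step (iii) concrete, the following is a minimal sketch of how 2D-3D correspondences can refine a camera pose with a robust PnP solver: image points sampled inside the detector's ROI are matched to 3D points of the reference geometry, and the recovered rotation and translation are used to align the 3D overlay. The point arrays, camera intrinsics and the use of OpenCV's solvePnPRansac are illustrative assumptions, not the pipeline described in the paper.

```python
# Minimal sketch (not the authors' implementation): refining a camera pose
# from 2D-3D correspondences of a detected ROI. All inputs are placeholders.
import numpy as np
import cv2

# Hypothetical inputs: 2D keypoints detected inside the Mask R-CNN ROI (pixels)
# and their corresponding 3D points on the reference geometry (metres).
image_points = np.array([[512.0, 300.0], [640.5, 310.2], [598.7, 450.9],
                         [420.3, 430.1], [505.8, 380.4], [560.1, 405.6],
                         [470.9, 340.7], [615.4, 365.3]], dtype=np.float64)
object_points = np.array([[10.2, 4.1, 120.5], [12.8, 4.3, 121.0],
                          [12.1, 1.9, 119.7], [ 8.9, 2.2, 120.1],
                          [10.1, 3.0, 120.3], [11.2, 2.6, 120.6],
                          [ 9.6, 3.5, 119.9], [12.3, 3.2, 120.8]],
                         dtype=np.float64)

# Assumed pinhole intrinsics (fx, fy, cx, cy) of the mobile camera.
K = np.array([[1450.0,    0.0, 640.0],
              [   0.0, 1450.0, 360.0],
              [   0.0,    0.0,   1.0]], dtype=np.float64)
dist_coeffs = np.zeros(5)  # assume negligible lens distortion

# Robust PnP: estimates the camera rotation/translation w.r.t. the ROI while
# rejecting outlier correspondences caused by misdetections.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    object_points, image_points, K, dist_coeffs,
    reprojectionError=3.0, flags=cv2.SOLVEPNP_ITERATIVE)

if ok:
    R, _ = cv2.Rodrigues(rvec)      # 3x3 rotation matrix
    camera_position = -R.T @ tvec   # camera centre in world coordinates
    print("Inliers:", len(inliers), "Camera position:", camera_position.ravel())
```

In an AR session of this kind, the pose returned by the PnP step would typically be fused with, or used to correct, the pose reported by the SLAM tracker before the 3D overlay is rendered.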