Applied Sciences (Jan 2025)
Breaking New Ground in Monocular Depth Estimation with Dynamic Iterative Refinement and Scale Consistency
Abstract
Monocular depth estimation (MDE) is a critical task in computer vision with applications in autonomous driving, robotics, and augmented reality. However, predicting depth from a single image poses significant challenges, especially in dynamic scenes, where moving objects introduce scale ambiguity and inaccuracies. In this paper, we propose the Dynamic Iterative Monocular Depth Estimation (DI-MDE) framework, which integrates an iterative refinement process with a novel scale-alignment module to address these issues. Our approach combines elastic depth bins, which adjust dynamically based on uncertainty estimates, with a scale-alignment mechanism that enforces consistency between static and dynamic regions. Because DI-MDE is trained with self-supervised learning, it requires no ground-truth depth labels, making it scalable and applicable to real-world environments. Experimental results on standard datasets such as SUN RGB-D and KITTI demonstrate that our method achieves state-of-the-art performance and substantially improves depth prediction accuracy in dynamic scenes. This work offers a robust and efficient solution to the challenges of monocular depth estimation, advancing both depth refinement and scale consistency.
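Since the abstract only names the two mechanisms at a high level, the following is a minimal PyTorch-style sketch of how uncertainty-driven elastic depth bins and static/dynamic scale alignment could be wired together. It is not the authors' implementation: the module names (ElasticBinRefiner, align_scale), the choice of heads, the iteration scheme, and the median-based alignment rule are all illustrative assumptions.

```python
# Minimal sketch (hypothetical, not the DI-MDE reference code) of:
#  (1) depth bins whose partition is re-predicted from an uncertainty estimate
#      across refinement iterations ("elastic" bins), and
#  (2) a scale-alignment step that rescales dynamic regions to match the
#      static background.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ElasticBinRefiner(nn.Module):
    def __init__(self, feat_dim=64, num_bins=64, min_depth=0.1, max_depth=10.0, iters=3):
        super().__init__()
        self.num_bins = num_bins
        self.min_depth, self.max_depth = min_depth, max_depth
        self.iters = iters
        # Per-image bin-width logits, conditioned on pooled features + mean uncertainty.
        self.bin_head = nn.Linear(feat_dim + 1, num_bins)
        # Per-pixel bin probabilities and an uncertainty map.
        self.prob_head = nn.Conv2d(feat_dim, num_bins, 1)
        self.unc_head = nn.Conv2d(feat_dim, 1, 1)

    def bins_to_centers(self, width_logits):
        # Normalized widths -> cumulative edges -> bin centers in the metric depth range.
        widths = F.softmax(width_logits, dim=-1)                  # (B, K), sums to 1
        edges = torch.cumsum(widths, dim=-1)
        edges = torch.cat([torch.zeros_like(edges[:, :1]), edges], dim=-1)
        centers = 0.5 * (edges[:, :-1] + edges[:, 1:])
        return self.min_depth + (self.max_depth - self.min_depth) * centers  # (B, K)

    def forward(self, feats):                                     # feats: (B, C, H, W)
        pooled = feats.mean(dim=(2, 3))                           # (B, C)
        uncertainty = torch.sigmoid(self.unc_head(feats))         # (B, 1, H, W)
        depth = None
        for _ in range(self.iters):
            # "Elastic" step: the bin partition depends on the current uncertainty,
            # so high-uncertainty inputs get a different discretization of depth.
            u_mean = uncertainty.mean(dim=(2, 3))                 # (B, 1)
            centers = self.bins_to_centers(
                self.bin_head(torch.cat([pooled, u_mean], dim=-1)))
            probs = F.softmax(self.prob_head(feats), dim=1)       # (B, K, H, W)
            depth = torch.einsum("bkhw,bk->bhw", probs, centers).unsqueeze(1)
        return depth, uncertainty


def align_scale(depth, dynamic_mask, eps=1e-6):
    # Scale-alignment sketch: rescale predicted depth inside dynamic regions so its
    # median matches the median depth of the static regions.
    static = depth[~dynamic_mask]
    dynamic = depth[dynamic_mask]
    if dynamic.numel() == 0 or static.numel() == 0:
        return depth
    scale = static.median() / (dynamic.median() + eps)
    out = depth.clone()
    out[dynamic_mask] = dynamic * scale
    return out
```

In this sketch the uncertainty map plays the role described in the abstract: it feeds back into the bin prediction at every iteration, while align_scale stands in for the scale-alignment module by enforcing a single scale correction per dynamic region mask.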
Keywords