IET Computer Vision (Oct 2013)
Real‐time depth enhancement by fusion for RGB‐D cameras
Abstract
This study presents a real‐time refinement procedure for depth data acquired by RGB‐D cameras. Data from RGB‐D cameras suffer from undesired artefacts such as edge inaccuracies or holes owing to occlusions or low object remission. In this work, the authors use recent depth enhancement filters intended for time‐of‐flight cameras, and extend them to structured light‐based depth cameras, such as the Kinect camera. Thus, given a depth map and its corresponding two‐dimensional image, the authors correct the depth measurements by separately treating its undesired regions. To that end, the authors propose specific confidence maps to tackle areas in the scene that require special treatment. Furthermore, in the case of filtering artefacts, the authors introduce the use of RGB images as guidance images as an alternative to real‐time state‐of‐the‐art fusion filters that use greyscale guidance images. The experimental results show that the proposed fusion filter provides dense depth maps with corrected erroneous or invalid depth measurements and adjusted depth edges. In addition, the authors propose a mathematical formulation that enables the use of the filter in real‐time applications.
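The fusion principle summarised in the abstract, correcting invalid or erroneous depth measurements by weighting neighbouring depth samples with a guidance signal taken from the colour image, can be sketched with a minimal joint (cross) bilateral filter. This is an illustrative sketch only, not the authors' real‐time formulation or their confidence‐map treatment; the function name `joint_bilateral_depth` and all parameter choices are assumptions for the example.

```python
import numpy as np

def joint_bilateral_depth(depth, rgb, radius=3, sigma_s=2.0, sigma_r=0.1):
    """Refine a depth map using the RGB image as guidance.

    Illustrative joint/cross bilateral filter: the range kernel is
    computed on the RGB guidance image rather than on the depth map,
    so depth edges snap to colour edges and holes (depth == 0) are
    filled from valid neighbours on the same side of the edge.

    depth : HxW float array, 0 marks invalid measurements.
    rgb   : HxWx3 float array with values in [0, 1].
    """
    H, W = depth.shape
    out = np.zeros_like(depth)
    valid = depth > 0
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))  # spatial kernel
    for y in range(H):
        for x in range(W):
            y0, y1 = max(0, y - radius), min(H, y + radius + 1)
            x0, x1 = max(0, x - radius), min(W, x + radius + 1)
            d_win = depth[y0:y1, x0:x1]
            v_win = valid[y0:y1, x0:x1]
            # Range kernel from the RGB guidance, not from depth itself.
            diff = rgb[y0:y1, x0:x1] - rgb[y, x]
            range_w = np.exp(-np.sum(diff**2, axis=2) / (2 * sigma_r**2))
            s_win = spatial[(y0 - y + radius):(y1 - y + radius),
                            (x0 - x + radius):(x1 - x + radius)]
            w = s_win * range_w * v_win  # invalid pixels get zero weight
            total = w.sum()
            out[y, x] = (w * d_win).sum() / total if total > 1e-8 else 0.0
    return out
```

On a synthetic scene with two depth planes separated by a colour edge, an invalid pixel inside one region is filled from that region only, because the colour‐based range weights suppress contributions from across the edge. The per‐pixel loop is for clarity; a real‐time variant would vectorise or separate the kernels, which is part of what the paper's mathematical formulation addresses.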
Keywords