IEEE Access (Jan 2020)
Targetless Camera-LiDAR Calibration in Unstructured Environments
Abstract
Camera-LiDAR sensor fusion plays an important role in autonomous navigation research, yet the automatic calibration of these sensors remains a significant challenge in mobile robotics. In this article, we present a novel calibration method that accurately estimates the six-degree-of-freedom (6-DOF) rigid-body transformation (i.e., the extrinsic parameters) between the camera and the LiDAR sensor. The method consists of a novel co-registration approach that uses local edge features in arbitrary environments to compute 3D-to-2D errors between the camera and LiDAR data. Given these 3D-to-2D errors, we estimate the relative transform, i.e., the extrinsic parameters, that minimizes them, using the perspective-three-point (P3P) algorithm to find the best transform. To refine the final calibration, we use a Kalman filter, which gives the system high stability against noise disturbances. The presented method requires neither an artificial target nor a structured environment and is therefore targetless. Furthermore, it does not require a dense point cloud, so no scan accumulation is needed. We evaluate our approach on the KITTI dataset, taking the calibration provided with the dataset as ground truth. We report the resulting accuracy and demonstrate the robustness of the system against very noisy observations.
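For context, the sketch below illustrates the 3D-to-2D pose step summarized in the abstract, assuming that matched LiDAR edge points and image edge pixels are already available; it uses OpenCV's P3P minimal solver inside RANSAC and is an illustrative assumption, not the authors' implementation (in particular, the Kalman-filter refinement is omitted and the function name is hypothetical).

    # Minimal sketch: estimate camera-LiDAR extrinsics from hypothetical
    # 3D LiDAR edge points and their matched 2D image edge pixels,
    # using a P3P solver inside RANSAC (OpenCV).
    import numpy as np
    import cv2

    def estimate_extrinsics(lidar_pts_3d, image_pts_2d, K, dist=None):
        # lidar_pts_3d: (N, 3) edge points in the LiDAR frame
        # image_pts_2d: (N, 2) matched edge pixels
        # K: 3x3 camera intrinsic matrix
        dist = np.zeros(5) if dist is None else dist
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(
            lidar_pts_3d.astype(np.float32),
            image_pts_2d.astype(np.float32),
            K, dist,
            flags=cv2.SOLVEPNP_P3P,    # P3P minimal solver inside RANSAC
            reprojectionError=3.0,     # pixel threshold on 3D-to-2D error
            iterationsCount=200,
        )
        if not ok:
            raise RuntimeError("P3P/RANSAC failed to find a pose")
        R, _ = cv2.Rodrigues(rvec)     # rotation LiDAR -> camera
        return R, tvec.reshape(3), inliers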
Keywords