IEEE Access (Jan 2022)
Efficient Camera–LiDAR Calibration Using Accumulated LiDAR Frames
Abstract
In autonomous driving, the camera and LiDAR have complementary strengths, and many autonomous vehicles and robot perception systems use both sensors together. To fuse data from a camera and a LiDAR mounted at different positions and orientations, camera–LiDAR extrinsic calibration must be performed. Most existing camera–LiDAR calibration methods build camera–LiDAR feature pairs from features extracted from a single frame; because a single LiDAR frame contains few features, it is difficult to obtain reliable results. In this paper, we extract features from sequential LiDAR data by accumulating LiDAR frames. Using the pose relationship between the accumulated frames (global) and a single frame (local), features detected in the global cloud are transformed into the local frame, compensating for the sparsity of a single LiDAR frame. We describe the LiDAR feature-point detection method step by step and demonstrate the advantages of the proposed camera–LiDAR calibration system through quantitative and qualitative evaluation. The results show that the proposed system outperforms existing systems.
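The global-to-local conversion described above can be sketched with homogeneous pose transforms. The following is a minimal illustration, not the paper's implementation: it assumes each LiDAR frame has a known 4×4 pose (local → global), accumulates frames into a global cloud, and then maps a feature from the global cloud back into one frame's local coordinates via the inverse pose. All function names are hypothetical.

```python
# Hedged sketch: mapping features between an accumulated (global) LiDAR cloud
# and a single frame (local) using known per-frame poses. Illustrative only.
import numpy as np

def make_pose(yaw, t):
    """Build a 4x4 homogeneous pose (local -> global): z-rotation + translation."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    T[:3, 3] = t
    return T

def accumulate(frames, poses):
    """Merge single-frame clouds (each N_i x 3) into one global cloud."""
    clouds = []
    for pts, T in zip(frames, poses):
        h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coordinates
        clouds.append((h @ T.T)[:, :3])               # local -> global
    return np.vstack(clouds)

def global_to_local(feature_xyz, pose_k):
    """Map one feature point from the global cloud into frame k's coordinates."""
    h = np.append(feature_xyz, 1.0)
    return (np.linalg.inv(pose_k) @ h)[:3]            # global -> local

# Round trip: a point from frame 0 returns to its local coordinates.
T0 = make_pose(0.3, [1.0, 2.0, 0.0])
p_local = np.array([4.0, 0.0, 1.0])
p_global = accumulate([p_local[None, :]], [T0])[0]
print(np.allclose(global_to_local(p_global, T0), p_local))  # True
```

In practice the per-frame poses would come from odometry or SLAM; the point of the sketch is that once such poses exist, a feature detected in the dense accumulated cloud can be expressed in any single frame for pairing with the corresponding camera image.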
Keywords