IEEE Access (Jan 2024)
Enhancing Object Estimation by Camera-LiDAR Sensor Fusion Using IMM-KF With Error Characteristics in Autonomous Robot Systems
Abstract
In autonomous robot systems, accurate object recognition and estimation are crucial for reliable performance, and sensor fusion techniques that combine complementary sensors have proven effective in achieving this. This paper proposes a camera-LiDAR (Light Detection and Ranging) sensor fusion method to improve object recognition and estimation performance in autonomous robot systems. To implement the proposed method, we first calibrate the camera and LiDAR sensors. Then, data association is performed between the LiDAR sensor's point data and the object bounding boxes detected in the camera images by a deep learning algorithm. To improve recognition and estimation performance, we identified the limitations of single-sensor recognition and measurement and established the measurement noise covariance by analyzing each sensor's distance measurement errors. We then applied an Interacting Multiple Model (IMM) Kalman Filter (KF) that accounts for the pre-analyzed error characteristics. The usefulness of the proposed method was validated through scenario-based experiments. Experimental results show that the proposed sensor fusion method significantly improves object estimation accuracy and extends the sensors' Field of View (FoV) compared with conventional methods.
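The core estimator named in the abstract, the IMM-KF, can be illustrated with a minimal sketch. The following is not the paper's implementation: the state model, the two process-noise settings, the measurement noise `R`, and the transition matrix `PI` are all illustrative assumptions, chosen only to show the standard IMM cycle (mixing, model-matched Kalman filtering, model-probability update, and combination).

```python
import numpy as np

# Hypothetical minimal IMM-KF sketch: two constant-velocity models that differ
# only in process noise, tracking a 1D position from noisy range measurements.
# All matrices and noise values below are illustrative, not the paper's.

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])        # state: [position, velocity]
H = np.array([[1.0, 0.0]])                   # we measure position only

# Two models: "steady" (low process noise) vs. "maneuvering" (high process noise)
Qs = [np.diag([1e-4, 1e-3]), np.diag([1e-2, 1e-1])]
R = np.array([[0.05]])                       # measurement noise covariance
PI = np.array([[0.95, 0.05], [0.05, 0.95]])  # model transition probabilities

def imm_step(xs, Ps, mu, z):
    """One IMM cycle: mixing, per-model KF, model-probability update, combination."""
    n = len(xs)
    # 1) Mixing probabilities
    c_bar = PI.T @ mu                            # predicted model probabilities
    w = (PI * mu[:, None]) / c_bar[None, :]      # w[i, j] = P(model i | model j)
    # 2) Mixed initial conditions for each model-matched filter
    xs_mix, Ps_mix = [], []
    for j in range(n):
        xj = sum(w[i, j] * xs[i] for i in range(n))
        Pj = sum(w[i, j] * (Ps[i] + np.outer(xs[i] - xj, xs[i] - xj))
                 for i in range(n))
        xs_mix.append(xj)
        Ps_mix.append(Pj)
    # 3) Model-matched Kalman filters and their measurement likelihoods
    xs_new, Ps_new, lik = [], [], np.zeros(n)
    for j in range(n):
        x_pred = F @ xs_mix[j]
        P_pred = F @ Ps_mix[j] @ F.T + Qs[j]
        y = z - H @ x_pred                       # innovation
        S = H @ P_pred @ H.T + R                 # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
        xs_new.append(x_pred + K @ y)
        Ps_new.append((np.eye(2) - K @ H) @ P_pred)
        lik[j] = np.exp(-0.5 * float(y @ np.linalg.inv(S) @ y)) / np.sqrt(
            2.0 * np.pi * np.linalg.det(S))
    # 4) Model-probability update and overall state combination
    mu_new = c_bar * lik
    mu_new /= mu_new.sum()
    x_comb = sum(mu_new[j] * xs_new[j] for j in range(n))
    return xs_new, Ps_new, mu_new, x_comb
```

In the paper's setting, `R` would be replaced per sensor by the measurement noise covariance obtained from the analyzed camera and LiDAR distance-error characteristics, so the filter weights each fused measurement according to how reliable that sensor is at the measured range.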
Keywords