Journal of Hebei University of Science and Technology (Apr 2024)

Target detection and localization based on improved YOLOv5s and sensor fusion

  • Yuhong ZHENG
  • Qingxi ZENG
  • Xufang JI
  • Rongchen WANG
  • Yuxin SONG

DOI
https://doi.org/10.7535/hbkd.2024yx02002
Journal volume & issue
Vol. 45, no. 2
pp. 122–130

Abstract


The camera and LiDAR are two important sensors for environment perception in unmanned vehicles, but each has limitations on its own: the camera cannot provide the position of road targets, and the LiDAR point cloud is sparse, which makes it difficult to achieve good detection results. A method was therefore proposed that fuses the information from the two sensors for target detection and localization. The YOLOv5s deep learning algorithm was adopted for target detection, and the extrinsic parameters between the camera and the LiDAR were obtained through joint calibration. These parameters were used to convert coordinates between the two sensors, so that the LiDAR point cloud could be projected onto the camera image and the position of each detected target could be obtained. Real-vehicle experiments were conducted. The results show that, on an unmanned vehicle platform equipped with an NVIDIA Jetson TX2 embedded computing board, the algorithm achieves a detection speed of 27.2 Hz while maintaining a missed detection rate of 12.50%, a maximum recognition distance of 35.32 m, and an average localization accuracy of 0.18 m over the test period. The fusion of LiDAR and camera can thus achieve road target detection and localization on an embedded system, providing a reference for the construction of environment perception systems on embedded platforms.
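To make the projection step concrete, the following is a minimal sketch of how LiDAR points are typically projected into a camera image once the joint calibration has produced an extrinsic rotation R and translation t (LiDAR frame to camera frame) and the camera intrinsic matrix K is known. This assumes a pinhole camera model; the function and variable names are illustrative, not taken from the paper.

    import numpy as np

    def project_lidar_to_image(points_lidar, K, R, t):
        """Project (N, 3) LiDAR points into pixel coordinates.

        points_lidar: (N, 3) points in the LiDAR frame.
        K: (3, 3) camera intrinsic matrix.
        R, t: (3, 3) rotation and (3,) translation mapping the LiDAR
              frame into the camera frame (the jointly calibrated
              extrinsic parameters).
        Returns (M, 2) pixel coordinates and (M,) camera-frame depths
        for the points lying in front of the camera.
        """
        # Transform points from the LiDAR frame into the camera frame.
        pts_cam = points_lidar @ R.T + t

        # Keep only points with positive depth (in front of the camera).
        pts_cam = pts_cam[pts_cam[:, 2] > 0]

        # Perspective projection with the intrinsic matrix, then
        # normalize by depth to get pixel coordinates.
        pix = (K @ pts_cam.T).T
        pix = pix[:, :2] / pix[:, 2:3]

        return pix, pts_cam[:, 2]

With this projection in place, one common way to localize a detected target, consistent with the fusion idea in the abstract, is to collect the projected points that fall inside a YOLOv5s bounding box and aggregate their depths (for example, by taking the median) to estimate the target's distance; the specific aggregation rule used by the authors is not stated here.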

Keywords