IEEE Access (Jan 2020)

Object Recognition Based Interpolation With 3D LIDAR and Vision for Autonomous Driving of an Intelligent Vehicle

  • Ihn-Sik Weon,
  • Soon-Geul Lee,
  • Jae-Kwan Ryu

DOI
https://doi.org/10.1109/ACCESS.2020.2982681
Journal volume & issue
Vol. 8
pp. 65599 – 65608

Abstract

An algorithm has been developed that fuses objects detected by a deep-learning-based image sensor with object data from a 3D LIDAR (Light Detection and Ranging) system in the form of 3D point clouds. 3D LIDAR represents 3D point data in a planar rectangular coordinate system, giving a 360° view of detected object surfaces, including the front face. However, only the direction and distance of an object can be obtained; the point cloud data alone cannot provide a specific definition of the object. Consequently, only the movement of the point cloud can be tracked, using probability- and classification-based image-processing algorithms. To overcome this limitation, this study matches 3D LIDAR data with 2D image data through hybrid-level multi-sensor fusion. First, because 3D LIDAR represents every object in its detection range as points, all unnecessary data, including ground returns, are filtered out. The 3D Random Sample Consensus (RANSAC) algorithm extracts the ground data perpendicular to the estimated reference 3D plane, along with the data at both ends, through ground estimation. The classified environmental data then allow every object within the LIDAR's viewing angle to be labeled according to the presence or absence of movement. The motion path of the platform can be established by detecting whether objects within the region of interest are movable or static. Because the LIDAR uses 8- and 16-channel rotation mechanisms, real-time data alone cannot define objects. Instead, obstacles in the image are detected through deep learning in the preliminary processing phase of the classification algorithm. By matching the labeling information of the defined objects with the classified object cloud data obtained from the 3D LIDAR, the exact dynamic trajectory and position of the defined objects can be calculated.
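The RANSAC ground-removal step described above can be sketched as follows. This is a minimal illustrative implementation, not the authors' code; the iteration count, distance threshold, and synthetic test scene are assumptions chosen for demonstration.

```python
import numpy as np

def ransac_ground_plane(points, n_iters=100, dist_thresh=0.2, seed=0):
    """Fit a plane to a 3D point cloud with RANSAC and return an inlier
    mask marking likely ground points.

    points: (N, 3) array of LIDAR returns.
    dist_thresh: max point-to-plane distance (m) counted as ground.
    (Parameter values are illustrative, not taken from the paper.)
    """
    rng = np.random.default_rng(seed)
    best_mask = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        # Sample 3 distinct points and form the candidate plane normal.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (collinear) sample
            continue
        normal /= norm
        # Point-to-plane distance for every point in the cloud.
        dist = np.abs((points - p0) @ normal)
        mask = dist < dist_thresh
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask

# Usage: synthetic scene = flat ground plus one box-like obstacle.
rng = np.random.default_rng(1)
ground = np.column_stack([rng.uniform(-10, 10, 500),
                          rng.uniform(-10, 10, 500),
                          rng.normal(0.0, 0.02, 500)])
obstacle = np.column_stack([rng.uniform(2, 3, 100),
                            rng.uniform(2, 3, 100),
                            rng.uniform(0.5, 2.0, 100)])
cloud = np.vstack([ground, obstacle])
is_ground = ransac_ground_plane(cloud)
non_ground = cloud[~is_ground]   # points passed on for object labeling
```

Because the ground dominates the scene, the plane with the most inliers is the ground plane, and everything above the distance threshold survives as candidate object data.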
Consequently, to process the acquired object data efficiently, we devised an active-region-of-interest technique to ensure a fast processing speed while maintaining a high detection rate.
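The matching of labeled 2D detections to 3D point-cloud clusters can be sketched with a simple pinhole projection: each cluster centroid is projected into the image, and the cluster inherits the class of the detection box that contains it. This is a hypothetical illustration, not the paper's method; the camera intrinsics, cluster data, and detector output below are all assumed values.

```python
import numpy as np

def project_to_image(points, fx=700.0, fy=700.0, cx=640.0, cy=360.0):
    """Pinhole projection of points in the camera frame (Z forward)
    into pixel coordinates. Intrinsics are placeholders, not the
    paper's calibration."""
    z = points[:, 2]
    u = fx * points[:, 0] / z + cx
    v = fy * points[:, 1] / z + cy
    return np.column_stack([u, v])

def label_clusters(clusters, boxes):
    """Assign each 3D cluster the class of the 2D detection box that
    contains its projected centroid; 'unknown' if no box matches."""
    labels = []
    for cluster in clusters:
        u, v = project_to_image(cluster.mean(axis=0, keepdims=True))[0]
        label = "unknown"
        for (x1, y1, x2, y2, cls) in boxes:
            if x1 <= u <= x2 and y1 <= v <= y2:
                label = cls
                break
        labels.append(label)
    return labels

# Usage: one cluster in front of the camera, one off to the side.
car_cluster = np.array([[0.0, 0.0, 10.0], [0.2, 0.1, 10.2]])
side_cluster = np.array([[8.0, 0.0, 10.0]])
boxes = [(600, 330, 680, 400, "car")]   # hypothetical detector output
labels = label_clusters([car_cluster, side_cluster], boxes)
# labels -> ["car", "unknown"]
```

Once a cluster is labeled, its 3D centroid over successive scans yields the dynamic trajectory and position that the image detector alone cannot provide.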

Keywords