Sensors (Apr 2024)

Dynamic Occupancy Grid Map with Semantic Information Using Deep Learning-Based BEVFusion Method with Camera and LiDAR Fusion

  • Harin Jang,
  • Taehyun Kim,
  • Kyungjae Ahn,
  • Soo Jeon,
  • Yeonsik Kang

DOI: https://doi.org/10.3390/s24092828
Journal volume & issue: Vol. 24, no. 9, p. 2828

Abstract

In the field of robotics and autonomous driving, dynamic occupancy grid maps (DOGMs) are typically used to represent the position and velocity information of objects. Although three-dimensional light detection and ranging (LiDAR) sensor-based DOGMs have been actively researched, they have a limitation: they cannot classify object types. Therefore, in this study, a deep learning-based camera–LiDAR fusion technique is employed to generate the measurement input to DOGMs. Consequently, not only the position and velocity of objects but also their class information can be updated, expanding the application areas of DOGMs. Moreover, unclassified LiDAR point measurements contribute to the map of the surrounding environment, improving the reliability of perception by registering objects that the deep learning network failed to classify. To achieve this, we developed update rules based on Dempster–Shafer evidence theory that incorporate class information and the uncertainty of objects occupying grid cells. Furthermore, we analyzed the accuracy of velocity estimation using two update models: one assigns occupancy probability only to the edges of the oriented bounding box, whereas the other assigns occupancy probability to the entire area of the box. The performance of the developed perception technique is evaluated on the public nuScenes dataset. The developed DOGM with object class information will help autonomous vehicles navigate complex urban driving environments by providing rich information, such as the class and velocity of nearby obstacles.
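
The update rules mentioned above fuse per-cell evidence from multiple sources. As a rough illustration of how Dempster–Shafer combination behaves on an occupancy frame (a generic sketch, not the paper's actual update rules: the frame {occupied, free}, the mass keys, and the function name are all assumptions for illustration):

```python
def combine_occupancy(m1, m2):
    """Dempster's rule of combination over the frame {occupied, free}.

    Each mass function is a dict with keys 'occ', 'free', and 'unk',
    where 'unk' is the mass assigned to the whole frame (ignorance).
    Each mass function must sum to 1.
    """
    # Conflict mass K: one source supports occupied, the other free.
    K = m1["occ"] * m2["free"] + m1["free"] * m2["occ"]
    if K >= 1.0:
        raise ValueError("sources are in total conflict")
    norm = 1.0 - K  # Dempster normalization factor

    return {
        # Agreement on 'occupied', or one source is ignorant.
        "occ": (m1["occ"] * m2["occ"]
                + m1["occ"] * m2["unk"]
                + m1["unk"] * m2["occ"]) / norm,
        # Agreement on 'free', or one source is ignorant.
        "free": (m1["free"] * m2["free"]
                 + m1["free"] * m2["unk"]
                 + m1["unk"] * m2["free"]) / norm,
        # Both sources remain ignorant.
        "unk": (m1["unk"] * m2["unk"]) / norm,
    }

# Example: a confident LiDAR hit fused with a weaker camera detection.
lidar = {"occ": 0.7, "free": 0.1, "unk": 0.2}
camera = {"occ": 0.5, "free": 0.0, "unk": 0.5}
print(combine_occupancy(lidar, camera))  # occ ≈ 0.84, free ≈ 0.05, unk ≈ 0.11
```

Combining the two sources raises the occupied mass above either input alone, which is the intuition behind fusing LiDAR and camera evidence in a common cell.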
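The abstract also contrasts two occupancy-assignment models for an oriented bounding box (OBB): edge-only versus full-area. Below is a minimal geometric sketch of that difference, assuming a 2D grid of cell centers and illustrative parameter names (edge_width is a made-up value, not from the paper):

```python
import numpy as np

def obb_cell_mask(cell_centers, center, size, yaw,
                  edge_only=False, edge_width=0.2):
    """Return a boolean mask of grid cells covered by an oriented bounding box.

    cell_centers: (N, 2) array of cell-center coordinates in the world frame.
    center, size: box center (x, y) and full extents (length, width) in meters.
    yaw: box heading in radians.
    edge_only: if True, keep only cells within edge_width of the box boundary
               (edge model); otherwise keep the whole box area (area model).
    """
    c, s = np.cos(yaw), np.sin(yaw)
    rel = np.asarray(cell_centers) - np.asarray(center)
    # Rotate cell centers by -yaw into the box frame.
    local = rel @ np.array([[c, -s], [s, c]])
    half = np.asarray(size) / 2.0

    inside = np.all(np.abs(local) <= half, axis=1)
    if edge_only:
        # Distance from each cell to the nearest box edge (inside the box).
        dist_to_edge = np.min(half - np.abs(local), axis=1)
        inside &= dist_to_edge <= edge_width
    return inside
```

With edge_only=True, only cells near the box outline receive occupancy; this corresponds to the edge model whose velocity-estimation accuracy the study compares against filling the entire box area.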

Keywords