IEEE Access (Jan 2024)

Utilizing a Single-Stage 2D Detector in 3D LiDAR Point Cloud With Vertical Cylindrical Coordinate Projection for Human Identification

  • Nova Eka Budiyanta,
  • Eko Mulyanto Yuniarno,
  • Tsuyoshi Usagawa,
  • Mauridhi Hery Purnomo

DOI
https://doi.org/10.1109/ACCESS.2024.3402227
Journal volume & issue
Vol. 12
pp. 72672 – 72687

Abstract


Exploiting sensitive human data in visual human-monitoring systems violates individuals' privacy. Hence, this study utilized a Light Detection and Ranging (LiDAR)-generated three-dimensional (3D) point cloud instead of an RGB camera, as it captures the human object without detailed imagery. Given their dispersed nature, processing 3D LiDAR point clouds is computationally inefficient, as it requires several preliminary steps. Alternatively, this study applied a single-stage detection process using only one data type, namely the 3D LiDAR point cloud with direct projection along the vertical axis. This approach utilized the cylindrical-coordinate features of the 3D LiDAR data to project them onto a two-dimensional (2D) image space, together with a back-projection algorithm to restore the objects identified on the 2D plane to the 3D LiDAR point cloud coordinates. The model implemented the YOLOv5 series of detectors because its variants offer a range of sizes and computational capabilities. The evaluation of this approach used an accuracy metric defined as the average of the mean average precision (mAP) values over different intersection over union (IoU) thresholds. The proposed methodology effectively employed the vertical projection technique to identify human objects. Notably, this approach distinguishes itself from previous methods such as PIXOR, BirdNet, BirdNet+, BEVDetNet, and Frustum-PointPillars, offering a novel perspective in the field. The worst- and best-performing models achieved accuracies of 44.35% and 79.83%, with inference speeds of 3.7 ms and up to 25 ms, respectively. Further, the inference speeds of all models were below 33.33 ms, so the monitored objects were identified before the LiDAR system entered its next azimuth rotation.
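
The abstract describes projecting the 3D point cloud onto a 2D image using cylindrical coordinates about the vertical axis, running a 2D detector on that image, and back-projecting detections to 3D. The sketch below is a minimal Python illustration of that projection/back-projection idea, not the authors' implementation; the image resolution, height limits, and the closest-point-wins pixel rule are assumptions for illustration and are not taken from the paper.

```python
# Minimal sketch (not the authors' code): projecting a 3D LiDAR point cloud
# onto a 2D image via cylindrical coordinates about the vertical axis, and
# back-projecting a 2D detection box to the 3D points it covers.
# Image size, height limits, and channel encoding are illustrative assumptions.
import numpy as np

def cylindrical_projection(points, img_w=1024, img_h=64, z_min=-2.0, z_max=4.0):
    """Map an (N, 3) array of points (x, y, z in metres) to a 2D image.

    Columns index the azimuth angle around the vertical axis, rows index
    height (z); each pixel stores the horizontal range. Also returns an
    index map used for back-projection.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    azimuth = np.arctan2(y, x)                       # [-pi, pi]
    radius = np.hypot(x, y)                          # horizontal range

    # Discretise azimuth and height into pixel coordinates.
    u = ((azimuth + np.pi) / (2 * np.pi) * (img_w - 1)).astype(int)
    v = ((z_max - np.clip(z, z_min, z_max)) / (z_max - z_min) * (img_h - 1)).astype(int)

    image = np.zeros((img_h, img_w), dtype=np.float32)
    index_map = np.full((img_h, img_w), -1, dtype=np.int64)

    # Assign far points first so the closest point wins in each pixel.
    order = np.argsort(-radius)
    image[v[order], u[order]] = radius[order]
    index_map[v[order], u[order]] = order
    return image, index_map

def back_project(box, index_map, points):
    """Recover the 3D points covered by a 2D detection box (u1, v1, u2, v2)."""
    u1, v1, u2, v2 = box
    idx = index_map[v1:v2, u1:u2]
    return points[idx[idx >= 0]]

if __name__ == "__main__":
    pts = np.random.uniform(-10, 10, size=(5000, 3))   # dummy point cloud
    img, idx_map = cylindrical_projection(pts)
    # A hypothetical detection box in image coordinates:
    obj_pts = back_project((100, 10, 160, 50), idx_map, pts)
    print(img.shape, obj_pts.shape)
```

In this sketch the 2D image produced by `cylindrical_projection` would be the input to a 2D detector such as YOLOv5, and `back_project` plays the role of the back-projection step that restores detections to point cloud coordinates.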

Keywords