Remote Sensing (Aug 2024)

MSPV3D: Multi-Scale Point-Voxels 3D Object Detection Net

  • Zheng Zhang,
  • Zhiping Bao,
  • Yun Wei,
  • Yongsheng Zhou,
  • Ming Li,
  • Qing Tian

DOI: https://doi.org/10.3390/rs16173146
Journal volume & issue: Vol. 16, no. 17, p. 3146

Abstract


Autonomous vehicle technology is advancing, and 3D object detection based on point clouds is crucial to it. However, the irregularity, sparsity, and large volume of point cloud data, together with irrelevant background points, hinder detection accuracy. We propose a two-stage multi-scale 3D object detection network. First, since the ground usually generates a large number of useless background points during detection, we propose a new ground filtering algorithm that increases the proportion of foreground points and improves the accuracy and efficiency of the two-stage detection. Second, because the targets to be detected vary in size across categories and single-scale voxelization can discard too much detail, voxels of different scales are introduced to extract features of objects at different scales from the point cloud, and these features are integrated into the second detection stage. Finally, a multi-scale feature fusion module is proposed that simultaneously enhances and fuses the features extracted from voxels of different scales, making full use of the information the point cloud carries across scales and yielding more precise 3D object detection. Experiments are conducted on the KITTI and nuScenes datasets. Compared with our baseline, “Pedestrian” detection improves by 3.37% to 2.72% and “Cyclist” detection by 3.79% to 1.32% across the KITTI difficulty levels, while NDS improves by 2.4% and mAP by 3.6% on nuScenes.
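The abstract outlines two preprocessing ideas, ground filtering to raise the foreground-point ratio and voxelization at several scales, without giving the algorithms themselves. The following is a rough, non-authoritative sketch of what such a pipeline can look like; the RANSAC-style plane fit, the 0.2/0.4/0.8 m voxel sizes, and all function names are illustrative assumptions, not the paper's actual method.

    import numpy as np

    def filter_ground(points, height_threshold=0.2, ransac_iters=100, seed=0):
        # Illustrative ground removal: fit a dominant plane with a small RANSAC
        # loop and drop points within height_threshold of it. The paper's actual
        # ground filtering algorithm may differ.
        rng = np.random.default_rng(seed)
        best_inliers = np.zeros(len(points), dtype=bool)
        for _ in range(ransac_iters):
            sample = points[rng.choice(len(points), 3, replace=False)]
            normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
            norm = np.linalg.norm(normal)
            if norm < 1e-6:
                continue  # degenerate (collinear) sample, try again
            normal /= norm
            d = -normal @ sample[0]
            dist = np.abs(points @ normal + d)
            inliers = dist < height_threshold
            if inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        return points[~best_inliers]  # keep the non-ground points

    def voxelize_mean(points, voxel_size):
        # Group points into cubic voxels of edge length voxel_size and return
        # the centroid of each occupied voxel as a minimal per-voxel feature.
        indices = np.floor(points / voxel_size).astype(np.int64)
        _, inverse = np.unique(indices, axis=0, return_inverse=True)
        inverse = inverse.reshape(-1)
        n_voxels = inverse.max() + 1
        sums = np.zeros((n_voxels, 3))
        counts = np.zeros(n_voxels)
        np.add.at(sums, inverse, points)
        np.add.at(counts, inverse, 1)
        return sums / counts[:, None]

    # Toy usage: a flat "ground" slab plus scattered object points.
    rng = np.random.default_rng(1)
    ground = np.column_stack([rng.uniform(-20, 20, 5000),
                              rng.uniform(-20, 20, 5000),
                              rng.normal(0.0, 0.05, 5000)])
    objects = rng.uniform([-20, -20, 0.5], [20, 20, 3.0], (1000, 3))
    cloud = np.vstack([ground, objects])

    foreground = filter_ground(cloud)
    features_per_scale = [voxelize_mean(foreground, s) for s in (0.2, 0.4, 0.8)]

In the proposed network, the per-scale voxel features would then be enhanced and fused by the multi-scale feature fusion module before the second detection stage; the sketch above stops at producing one feature set per scale.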

Keywords