Engineering Reports (Dec 2024)
High‐order multilayer attention fusion network for 3D object detection
Abstract
Three‐dimensional object detection based on the fusion of 2D image data and 3D point clouds has become a research hotspot in the field of 3D scene understanding. However, data from different sensors exhibit discrepancies in spatial position, scale, and alignment, which severely impact detection performance; inappropriate fusion methods can cause the loss of, or interference with, valuable information. We therefore propose the High‐Order Multi‐Level Attention Fusion Network (HMAF‐Net), which takes camera images and voxelized point clouds as inputs for 3D object detection. To enhance the expressive power of features from different modalities, we introduce a high‐order feature fusion module that performs multi‐level convolution operations on the element‐wise summed features. By incorporating filtering and non‐linear activation, we extract deep semantic information from the fused multi‐modal features. To make full use of the salient information in the fused features, we introduce an attention mechanism that dynamically evaluates the importance of the pooled features at each level, enabling adaptive weighted fusion of significant and secondary features. To validate the effectiveness of HMAF‐Net, we conduct experiments on the KITTI dataset. On the “Car,” “Pedestrian,” and “Cyclist” categories, HMAF‐Net achieves mAP scores of 81.78%, 60.09%, and 63.91%, respectively, demonstrating more stable performance than other multi‐modal methods. We further evaluate the framework's effectiveness and generalization capability on the KITTI benchmark test, comparing its performance with other published detection methods on the 3D detection and BEV detection benchmarks for the “Car” category, with excellent results. The code and model will be made available at https://github.com/baowenzhang/High‐order‐Multilayer‐Attention‐Fusion‐Network‐for‐3D‐Object‐Detection.
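The fusion pipeline summarized in the abstract — element-wise summation of the two modality features, multi-level convolution with non-linear activation, per-level pooling, and attention-weighted fusion of the pooled descriptors — can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: dense channel-mixing matrices stand in for the convolutions, and the level-scoring function is our own simplification.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def hmaf_fusion_sketch(img_feat, pc_feat, level_weights):
    """Toy analogue of the described fusion (names are ours, not the paper's).

    img_feat, pc_feat: (C, H, W) feature maps from the two modalities.
    level_weights: list of (C, C) matrices standing in for the per-level
    convolutions of the high-order fusion module.
    """
    x = img_feat + pc_feat                   # element-wise summation of modalities
    pooled = []
    for W in level_weights:                  # multi-level filtering + non-linearity
        x = relu(np.einsum('oc,chw->ohw', W, x))
        pooled.append(x.mean(axis=(1, 2)))   # global average pooling per level
    # attention: score each level's pooled descriptor, then softmax-normalize
    scores = np.array([p.sum() for p in pooled])
    attn = softmax(scores)
    # adaptive weighted fusion of significant and secondary level features
    fused = sum(a * p for a, p in zip(attn, pooled))
    return fused, attn
```

In a real network the `(C, C)` matrices would be learned convolution kernels and the level scores would come from a learned attention head; here they only make the data flow of the module concrete.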
Keywords