IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (Jan 2025)

DLAFNet: Direct LiDAR-Aerial Fusion Network for Semantic Segmentation of 2-D Aerial Image and 3-D LiDAR Point Cloud

  • Wei Liu,
  • He Wang,
  • Yicheng Qiao,
  • Haopeng Zhang,
  • Junli Yang

DOI
https://doi.org/10.1109/JSTARS.2024.3511517
Journal volume & issue
Vol. 18
pp. 1864–1875

Abstract

High-resolution remote sensing image segmentation has advanced significantly with 2-D convolutional neural networks and transformer-based models such as SegFormer and Swin Transformer. Concurrently, rapid progress in 3-D point cloud processing has driven methods such as PointNet and Kernel Point Convolution for 3-D LiDAR point cloud segmentation. Traditional fusion of aerial imagery and LiDAR data often relies on digital surface models or other features extracted from LiDAR point clouds, incorporating them as depth channels in the image data. In this article, we propose a novel approach, the Direct LiDAR-Aerial Fusion Network (DLAFNet), which directly integrates multispectral (RGB) images and LiDAR point cloud data for semantic segmentation. Experiments on the modified GRSS18 dataset demonstrate that our method achieves an overall accuracy (OA) of 79.88%, outperforming conventional approaches. By fusing RGB and LiDAR features, our technique improves OA by 1.77% and mean Intersection over Union (mIoU) by 0.83%.
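The abstract does not detail DLAFNet's architecture, but the idea of fusing image features with point cloud features directly (rather than via a precomputed DSM depth channel) can be illustrated with a minimal sketch: rasterize per-point LiDAR features onto the image grid, then concatenate them with the image features channel-wise. The function names, grid resolution, and averaging scheme below are assumptions for illustration only, not the authors' method.

```python
import numpy as np

def rasterize_lidar_features(points_xy, point_feats, height, width):
    """Scatter per-point LiDAR features onto an H x W grid, averaging
    the features of all points that fall into the same pixel.
    (Illustrative; the paper's actual fusion module is not specified here.)"""
    grid = np.zeros((height, width, point_feats.shape[1]))
    counts = np.zeros((height, width, 1))
    cols = np.clip(points_xy[:, 0].astype(int), 0, width - 1)
    rows = np.clip(points_xy[:, 1].astype(int), 0, height - 1)
    np.add.at(grid, (rows, cols), point_feats)   # accumulate features per pixel
    np.add.at(counts, (rows, cols), 1.0)         # count points per pixel
    return grid / np.maximum(counts, 1.0)        # mean feature per pixel

def fuse(rgb_feats, lidar_grid):
    """Channel-wise concatenation of image and rasterized LiDAR features."""
    return np.concatenate([rgb_feats, lidar_grid], axis=-1)

# Toy example: a 4x4 image with 3 channels, 5 LiDAR points with 2 features each.
rgb = np.random.rand(4, 4, 3)
pts = np.array([[0.2, 0.7], [1.5, 2.1], [3.9, 3.2], [1.1, 2.8], [0.4, 0.6]])
feats = np.random.rand(5, 2)
fused = fuse(rgb, rasterize_lidar_features(pts, feats, 4, 4))
print(fused.shape)  # (4, 4, 5)
```

In a real network the concatenated tensor would feed a segmentation head; here the sketch only shows how point-level and pixel-level features can be brought onto a common grid without first building a digital surface model.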

Keywords