Remote Sensing (Oct 2018)

Ground and Multi-Class Classification of Airborne Laser Scanner Point Clouds Using Fully Convolutional Networks

  • Aldino Rizaldy,
  • Claudio Persello,
  • Caroline Gevaert,
  • Sander Oude Elberink,
  • George Vosselman

DOI
https://doi.org/10.3390/rs10111723
Journal volume & issue
Vol. 10, no. 11
p. 1723

Abstract

Various classification methods have been developed to extract meaningful information from Airborne Laser Scanner (ALS) point clouds. However, the accuracy and the computational efficiency of existing methods need to be improved, especially for the analysis of large datasets (e.g., at regional or national levels). In this paper, we present a novel deep learning approach to ground classification for Digital Terrain Model (DTM) extraction as well as to multi-class land-cover classification, delivering highly accurate classification results in a computationally efficient manner. Considering the top-down acquisition angle of ALS data, the point cloud is initially projected onto the horizontal plane and converted into a multi-dimensional image. Then, classification techniques based on Fully Convolutional Networks (FCN) with dilated kernels are designed to perform pixel-wise image classification. Finally, the labels are transferred from the pixels back to the original ALS points. We also designed a Multi-Scale FCN (MS-FCN) architecture to minimize the loss of information during the point-to-image conversion. In the ground classification experiment, we compared our method to a Convolutional Neural Network (CNN)-based method and to the LAStools software. Our method obtained a lower total error on both the International Society for Photogrammetry and Remote Sensing (ISPRS) filter test benchmark dataset and the AHN-3 dataset in the Netherlands. In the multi-class classification experiment, our method resulted in higher precision and recall values than a traditional machine learning technique based on Random Forest (RF), and it accurately detected small buildings. The FCN achieved precision and recall values of 0.93 and 0.94, whereas RF obtained 0.91 and 0.92, respectively. Moreover, our strategy significantly improved the computational efficiency of state-of-the-art CNN-based methods, reducing the point-to-image conversion time from 47 h to 36 min in our experiments on the ISPRS filter test dataset. Misclassification errors remained in situations that were not represented in the training dataset, such as large buildings and bridges, or that contained noisy measurements.
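The pipeline sketched in the abstract (project the points onto a horizontal grid, classify the resulting multi-channel image with a dilated-kernel FCN, then transfer the pixel labels back to the points) can be illustrated with a minimal Python sketch. This is not the authors' implementation: the three image channels (lowest elevation, highest elevation, mean intensity per cell), the grid cell size, and the four dilation rates are illustrative assumptions, and the network below is far smaller than the MS-FCN described in the paper.

import numpy as np
import torch
import torch.nn as nn

def points_to_image(points, cell_size=1.0):
    """Rasterize an ALS point cloud (N x 4 array: x, y, z, intensity)
    onto a horizontal grid. The channels chosen here (lowest z, highest z,
    mean intensity per cell) are an illustrative feature set, not the
    paper's exact one."""
    x, y, z, inten = points.T
    cols = ((x - x.min()) / cell_size).astype(int)
    rows = ((y.max() - y) / cell_size).astype(int)
    h, w = rows.max() + 1, cols.max() + 1
    low = np.full((h, w), np.inf, dtype=np.float32)
    high = np.full((h, w), -np.inf, dtype=np.float32)
    isum = np.zeros((h, w), dtype=np.float32)
    count = np.zeros((h, w), dtype=np.float32)
    for r, c, zz, ii in zip(rows, cols, z, inten):
        low[r, c] = min(low[r, c], zz)
        high[r, c] = max(high[r, c], zz)
        isum[r, c] += ii
        count[r, c] += 1
    img = np.zeros((3, h, w), dtype=np.float32)
    filled = count > 0
    img[0][filled] = low[filled]
    img[1][filled] = high[filled]
    img[2][filled] = isum[filled] / count[filled]
    return img, rows, cols  # per-point cell indices for the label transfer

class DilatedFCN(nn.Module):
    """Tiny FCN with dilated 3x3 kernels: dilation widens the receptive
    field without pooling, so the output keeps the input resolution and
    no upsampling stage is needed for pixel-wise classification."""
    def __init__(self, in_channels=3, n_classes=2):
        super().__init__()
        layers, ch = [], in_channels
        for d in (1, 2, 4, 8):  # illustrative, exponentially growing rates
            layers += [nn.Conv2d(ch, 32, 3, padding=d, dilation=d),
                       nn.ReLU(inplace=True)]
            ch = 32
        layers.append(nn.Conv2d(ch, n_classes, 1))  # 1x1 classifier head
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

# Label transfer back to the points: classify the image, then index the
# per-pixel argmax with each point's (row, col) cell.
points = np.random.rand(1000, 4).astype(np.float32) * 100  # dummy cloud
img, rows, cols = points_to_image(points)
net = DilatedFCN()
logits = net(torch.from_numpy(img).unsqueeze(0))  # 1 x classes x h x w
pixel_labels = logits.argmax(dim=1).squeeze(0).numpy()
point_labels = pixel_labels[rows, cols]  # one class label per ALS point

Under a setup like this, the per-point feature computation of a patch-based CNN pipeline is replaced by a single rasterization pass over the cloud, which is the kind of saving behind the reported drop in point-to-image conversion time from 47 h to 36 min.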
