Remote Sensing (Sep 2023)

Mapping Buildings across Heterogeneous Landscapes: Machine Learning and Deep Learning Applied to Multi-Modal Remote Sensing Data

  • Rachel E. Mason,
  • Nicholas R. Vaughn,
  • Gregory P. Asner

DOI
https://doi.org/10.3390/rs15184389
Journal volume & issue
Vol. 15, no. 18
p. 4389

Abstract


We describe the production of maps of buildings on Hawai'i Island, based on complementary information contained in two different types of remote sensing data. The maps cover 3200 km² over a highly varied set of landscape types and building densities. A convolutional neural network (CNN) was first trained to identify building candidates in LiDAR data. To better differentiate between true buildings and false positives, the CNN-based building probability map was then used, together with 400–2400 nm imaging spectroscopy, as input to a gradient boosting model. Simple vector operations were then employed to further refine the final maps. This stepwise approach resulted in detection of 84%, 100%, and 97% of manually labeled buildings at the 25th, 50th, and 75th percentiles of true building size, respectively, with very few false positives. The median absolute error in modeled building areas was 15%. This integration of deep learning, machine learning, and multi-modal remote sensing data was thus effective in detecting buildings over large scales and diverse landscapes, with potential applications in urban planning, resource management, and disaster response. The adaptable method presented here expands the range of techniques available for object detection in multi-modal remote sensing data and can be tailored to various kinds of input data, landscape types, and mapping goals.
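The second stage described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the CNN probability values and spectral reflectance bands are synthetic stand-ins, and the gradient boosting model is scikit-learn's generic `GradientBoostingClassifier` rather than whatever specific implementation the paper used.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical sketch: fuse a CNN-derived building probability (from LiDAR)
# with imaging-spectroscopy features in a gradient boosting classifier to
# separate true buildings from false positives. All data below are synthetic.
rng = np.random.default_rng(0)
n = 1000
cnn_prob = rng.uniform(0, 1, n)           # stand-in for CNN building probability map
spectra = rng.normal(0.3, 0.1, (n, 5))    # stand-in for 400-2400 nm reflectance bands

# Synthetic labels: candidates with high CNN probability and brighter
# spectra are treated as true buildings for this toy example.
labels = ((cnn_prob + spectra.mean(axis=1)) > 0.8).astype(int)

# Stack the CNN probability alongside the spectral bands as model input.
X = np.column_stack([cnn_prob, spectra])
model = GradientBoostingClassifier(n_estimators=100, random_state=0)
model.fit(X, labels)
print(round(model.score(X, labels), 2))
```

In practice the inputs would be per-candidate features extracted from the CNN probability map and co-registered spectroscopy, and the model would be evaluated on held-out labeled buildings rather than training data.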

Keywords