Remote Sensing (Sep 2018)

Supervised Classification of Multisensor Remotely Sensed Images Using a Deep Learning Framework

  • Sankaranarayanan Piramanayagam,
  • Eli Saber,
  • Wade Schwartzkopf,
  • Frederick W. Koehler

DOI
https://doi.org/10.3390/rs10091429
Journal volume & issue
Vol. 10, no. 9
p. 1429

Abstract

In this paper, we present a convolutional neural network (CNN)-based method to efficiently combine information from multisensor remotely sensed images for pixel-wise semantic classification. The CNN features obtained from multiple spectral bands are fused at the initial layers of the deep neural network rather than at the final layers. This early fusion architecture has fewer parameters, thereby reducing computational time and GPU memory during training and inference. We also propose a composite fusion architecture that fuses features throughout the network. The methods were validated on four datasets: ISPRS Potsdam, ISPRS Vaihingen, IEEE GRSS Zeebruges, and a combined Sentinel-1/Sentinel-2 dataset. For the Sentinel-1/-2 dataset, we obtain the ground-truth labels for three classes from OpenStreetMap. Results on all the images show that early fusion, specifically fusion after the third layer of the network, achieves results similar to or better than a decision-level fusion mechanism. The performance of the proposed architecture is also on par with state-of-the-art results.
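The abstract's claim that early fusion has fewer parameters than decision-level fusion can be illustrated with a simple parameter count: concatenating sensor channels before one shared convolutional stream duplicates almost nothing, whereas decision-level fusion runs a full stream per sensor. The layer widths, kernel size, and per-sensor channel counts below are illustrative assumptions, not the paper's actual network configuration.

```python
# Hedged sketch: compare parameter counts for early fusion (sensor channels
# concatenated into one shared stream) vs. decision-level fusion (one full
# stream per sensor). Widths, kernel size, and channel counts are assumed
# for illustration only.

def conv_params(c_in, c_out, k=3):
    """Weights plus biases of a single k x k convolution layer."""
    return c_out * (c_in * k * k + 1)

WIDTHS = [64, 128, 256]      # assumed layer widths of the shared backbone
OPTICAL_CH, SAR_CH = 3, 1    # e.g. RGB orthophoto + single-band SAR (assumed)

def stream_params(c_in, widths=WIDTHS):
    """Total conv parameters for one stream with the given input channels."""
    total = 0
    for c_out in widths:
        total += conv_params(c_in, c_out)
        c_in = c_out
    return total

early = stream_params(OPTICAL_CH + SAR_CH)                 # one fused stream
late = stream_params(OPTICAL_CH) + stream_params(SAR_CH)   # two full streams

print(early, late)  # early fusion needs roughly half the parameters here
```

Because most parameters sit in the later, wider layers, which early fusion shares across sensors, the fused stream ends up with roughly half the parameters of two parallel streams in this toy configuration.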

Keywords