Remote Sensing (Jun 2021)

A Quantitative Validation of Multi-Modal Image Fusion and Segmentation for Object Detection and Tracking

  • Nicholas LaHaye
  • Michael J. Garay
  • Brian D. Bue
  • Hesham El-Askary
  • Erik Linstead

DOI: https://doi.org/10.3390/rs13122364
Journal volume & issue: Vol. 13, No. 12, p. 2364

Abstract


In previous works, we have shown the efficacy of using Deep Belief Networks, paired with clustering, to identify distinct classes of objects within remotely sensed data via cluster analysis and qualitative comparison of the output against reference data. In this paper, we quantitatively validate this methodology against datasets currently being generated and used within the remote sensing community, and demonstrate the capabilities and benefits of the data fusion methodologies used. The experiments take the output of our unsupervised fusion and segmentation methodology and map it to various labeled datasets at different levels of global coverage and granularity, testing the models' ability to represent structure at both finer and broader scales using many different kinds of instrumentation, fused where applicable. In all cases tested, our models show a strong ability to segment the objects within input scenes, use multiple fused datasets where appropriate to improve results, and, at times, outperform the pre-existing datasets. This success will allow the methodology to be applied in concrete use cases and to serve as the basis for future dynamic object tracking across datasets from various remote sensing instruments.
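To make the validation step concrete, the sketch below illustrates one plausible reading of the cluster-to-label mapping the abstract describes: unsupervised segment labels are assigned to whichever reference class they most often overlap, and the induced labeling is then scored quantitatively. This is a minimal illustration only, not the authors' implementation; the feature extractor (a single BernoulliRBM standing in for a Deep Belief Network), the KMeans clusterer, and all array names and sizes are assumptions for demonstration.

```python
# Minimal sketch of mapping unsupervised segmentation output to a
# labeled reference dataset for quantitative validation. The RBM +
# KMeans front end is a simplified stand-in for the paper's DBN-based
# fusion/segmentation pipeline; all data here is synthetic.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.cluster import KMeans
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(0)

# Hypothetical inputs: per-pixel feature vectors from fused instruments
# (stacked along the feature axis) and per-pixel reference labels.
n_pixels, n_features, n_classes = 5000, 16, 4
X = rng.random((n_pixels, n_features))           # fused multi-modal features
reference = rng.integers(0, n_classes, n_pixels) # labeled reference dataset

# Unsupervised stage: learn a latent representation, then cluster it.
rbm = BernoulliRBM(n_components=32, n_iter=10, random_state=0)
latent = rbm.fit_transform(X)
clusters = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(latent)

# Validation stage: map each cluster to the reference class it most
# often overlaps (majority vote), then score the induced labeling.
mapping = {c: np.bincount(reference[clusters == c]).argmax()
           for c in np.unique(clusters)}
predicted = np.vectorize(mapping.get)(clusters)

print(f"overall accuracy: {accuracy_score(reference, predicted):.3f}")
print(f"Cohen's kappa:    {cohen_kappa_score(reference, predicted):.3f}")
```

On random synthetic data the scores will hover near chance; with real cluster output and reference maps, the same mapping-and-scoring step yields the kind of quantitative agreement figures the abstract refers to.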

Keywords