Geosciences (Oct 2022)

Comparing the Accuracy of sUAS Navigation, Image Co-Registration and CNN-Based Damage Detection between Traditional and Repeat Station Imaging

  • Andrew C. Loerch,
  • Douglas A. Stow,
  • Lloyd L. Coulter,
  • Atsushi Nara,
  • James Frew

DOI: https://doi.org/10.3390/geosciences12110401
Journal volume & issue: Vol. 12, no. 11, p. 401

Abstract

The application of ultra-high spatial resolution imagery from small unpiloted aerial systems (sUAS) can provide valuable information about the status of built infrastructure following natural disasters. This study employs three methods for improving the value of sUAS imagery: (1) repeating the positioning of image stations over time using a bi-temporal imaging approach called repeat station imaging (RSI), compared here against traditional (non-RSI) imaging; (2) co-registration of bi-temporal image pairs; and (3) damage detection using Mask R-CNN, a convolutional neural network (CNN) algorithm, applied to co-registered image pairs. Infrastructure features included roads, buildings, and bridges, with simulated cracks representing damage. The accuracies of platform navigation and camera station positioning, image co-registration, and resultant Mask R-CNN damage detection were assessed for image pairs derived from RSI and non-RSI acquisitions. In all cases, the RSI approach yielded the highest accuracies, with repeated sUAS navigation accuracy within 0.16 m mean absolute error (MAE) horizontally and vertically, image co-registration accuracy of 2.2 pixels MAE, and damage detection accuracy of 83.7% mean intersection over union.
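The abstract reports results using two standard metrics: mean absolute error (MAE) for navigation and co-registration accuracy, and intersection over union (IoU) for damage detection. As an illustrative sketch only (these helper functions are not from the paper), the metrics can be computed from paired measurements and binary detection masks as follows:

```python
import numpy as np

def mean_absolute_error(pred, true):
    """MAE between paired estimates, e.g. camera positions or tie-point
    offsets in metres or pixels (illustrative helper, not from the paper)."""
    pred = np.asarray(pred, dtype=float)
    true = np.asarray(true, dtype=float)
    return float(np.mean(np.abs(pred - true)))

def intersection_over_union(mask_pred, mask_true):
    """IoU between a predicted and a reference binary mask, the per-object
    score behind the reported mean IoU (illustrative helper)."""
    a = np.asarray(mask_pred, dtype=bool)
    b = np.asarray(mask_true, dtype=bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 0.0  # both masks empty: define IoU as 0 here
    return float(np.logical_and(a, b).sum() / union)
```

Mean IoU, as reported in the study's damage-detection results, would then be the average of per-detection IoU scores against the reference (ground-truth) crack masks.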
