IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (Jan 2021)

YOLOrs: Object Detection in Multimodal Remote Sensing Imagery

  • Manish Sharma,
  • Mayur Dhanaraj,
  • Srivallabha Karnam,
  • Dimitris G. Chachlakis,
  • Raymond Ptucha,
  • Panos P. Markopoulos,
  • Eli Saber

DOI
https://doi.org/10.1109/JSTARS.2020.3041316
Journal volume & issue
Vol. 14
pp. 1497–1508

Abstract

Deep-learning object detection methods designed for computer vision applications tend to underperform when applied to remote sensing data. This is because, in contrast to computer vision, training data in remote sensing are harder to collect, and targets can be very small, occupying only a few pixels in the entire image, while also exhibiting arbitrary perspective transformations. Detection performance can improve by fusing data from multiple remote sensing modalities, including red, green, blue, infrared, hyperspectral, multispectral, synthetic aperture radar, and light detection and ranging, to name a few. In this article, we propose YOLOrs: a new convolutional neural network, specifically designed for real-time object detection in multimodal remote sensing imagery. YOLOrs can detect objects at multiple scales, with smaller receptive fields to account for small targets, as well as predict target orientations. In addition, YOLOrs introduces a novel mid-level fusion architecture that renders it applicable to multimodal aerial imagery. Our experimental studies compare YOLOrs with contemporary alternatives and corroborate its merits.
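The mid-level fusion idea mentioned in the abstract, extracting features from each modality in a separate branch and merging them partway through the network, can be sketched as follows. This is a minimal NumPy illustration of channel-wise concatenation followed by a 1x1 convolution, not the authors' implementation; the modality names, layer sizes, and channel counts are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mid-level feature maps from two modality branches,
# e.g. an RGB branch and an infrared branch, each shaped
# (channels, height, width). Sizes here are illustrative only.
feat_rgb = rng.standard_normal((64, 13, 13))
feat_ir = rng.standard_normal((64, 13, 13))

def midlevel_fuse(a, b, out_channels=64):
    """Fuse two per-modality feature maps: concatenate along the
    channel axis, then mix channels with a 1x1 convolution
    (implemented here as an einsum over the channel dimension)."""
    stacked = np.concatenate([a, b], axis=0)  # (128, 13, 13)
    # Random 1x1-conv weights stand in for learned parameters.
    w = rng.standard_normal((out_channels, stacked.shape[0])) * 0.01
    return np.einsum("oc,chw->ohw", w, stacked)  # (out_channels, 13, 13)

fused = midlevel_fuse(feat_rgb, feat_ir)
print(fused.shape)
```

The key design point the abstract highlights is that fusion happens at the feature level rather than at the input (early fusion) or at the detections (late fusion), so the network can learn cross-modal interactions while each branch still specializes in its own modality.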

Keywords