IEEE Access (Jan 2019)

Vision-Based Flying Targets Detection via Spatiotemporal Context Fusion

  • Yunfeng Cao,
  • Zhouyu Zhang,
  • Yanming Fan,
  • Meng Ding,
  • Jiang Tao

DOI
https://doi.org/10.1109/ACCESS.2019.2943068
Journal volume & issue
Vol. 7
pp. 144090 – 144100

Abstract


Motivated by the pressing need to develop the Sense and Avoid (SAA) capability of Unmanned Aerial Vehicles (UAVs), this paper presents a newly designed flying-target detection algorithm for enhancing UAV environment perception. Since spatiotemporal context is crucial for ensuring effective flying-target detection, the algorithm is built on spatiotemporal context fusion and consists of three parts: spatial context extraction, temporal context extraction, and spatiotemporal context fusion. 1) To extract spatial context, a dense sampling method is first applied to obtain dense image grids; spatial context is then generated by a pre-learned conditional random field (CRF) model through a layered structure: dense image patches, bottom-level feature descriptors, sparse codes, and predicted CRF labels. 2) To extract temporal context, the forward and backward motion history image (FBMHI) is first computed to detect motion cues, and adaptive foreground-background isolation is then applied to obtain the temporal probability map. 3) The presence probability map of flying targets is finally obtained by spatiotemporal context fusion, and flying targets are picked out by analyzing the fused probability map. A set of videos containing different drone models is used for evaluation, and comparisons against other algorithms demonstrate the superiority of the proposed approach.
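The temporal-context and fusion steps outlined above can be sketched as follows. This is a minimal illustrative sketch only: the abstract does not give the paper's exact formulas, so the MHI decay rule, the weighted geometric fusion, and all function names (`update_mhi`, `fuse_context`, `detect_targets`) are assumptions, not the authors' published method.

```python
import numpy as np

def update_mhi(mhi, motion_mask, tau=255, decay=16):
    """One update of a motion history image (a common MHI formulation,
    assumed here): pixels with detected motion are set to tau, all
    others decay linearly toward zero."""
    return np.where(motion_mask, tau, np.maximum(mhi - decay, 0))

def fuse_context(spatial_prob, temporal_prob, alpha=0.5):
    """Hypothetical spatiotemporal fusion: a weighted geometric mean of
    the spatial and temporal presence-probability maps, so a pixel
    scores high only when both cues agree."""
    return spatial_prob ** alpha * temporal_prob ** (1.0 - alpha)

def detect_targets(fused_prob, threshold=0.6):
    """Pick out flying-target candidates by thresholding the fused
    presence-probability map."""
    return fused_prob >= threshold
```

For example, a pixel with spatial probability 0.9 but temporal probability 0.1 fuses to about 0.3 and is rejected at a 0.6 threshold, which reflects the abstract's point that spatial and temporal context must be combined rather than used alone.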

Keywords