IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (Jan 2022)

Finding Nonrigid Tiny Person With Densely Cropped and Local Attention Object Detector Networks in Low-Altitude Aerial Images

  • Xiangqing Zhang,
  • Yan Feng,
  • Shun Zhang,
  • Nan Wang,
  • Shaohui Mei

DOI
https://doi.org/10.1109/JSTARS.2022.3175498
Journal volume & issue
Vol. 15
pp. 4371 – 4385

Abstract


Finding tiny persons in drone imagery remains an important and challenging task. Unmanned aerial vehicles (UAVs) flying at high speed, at low altitude, and from multiple perspectives produce objects at drastically varying scales, which burdens model optimization. Moreover, detection performance on dense, faintly discernible persons is far below that on large objects in high-resolution aerial images. In this article, we introduce an image cropping strategy and an attention mechanism based on YOLOv5 to address small person detection on the optimized VisDrone2019 dataset. Specifically, we propose a Densely Cropped and Local Attention object detector Network (DCLANet), inspired by the observation that the small area occupied by tiny objects should be fully focused on and relatively magnified in the original image. DCLANet assembles Density Map-Guided Object Detection (DMNet) in aerial images and You Only Look Twice (YOLT): Rapid Multiscale Object Detection in Satellite Imagery to crop images at the training and testing stages, and adds a bottleneck attention mechanism to the YOLOv5 baseline framework so that the network focuses more on person objects than on irrelevant categories. To further improve DCLANet, we also provide a bag of useful strategies: data augmentation, label fusion, category filtering, and hyperparameter evolution. Extensive experiments on VisDrone2019 show that DCLANet achieves state-of-the-art performance; the detection result for the person category, $AP^{\text{val}}$@0.5, is 50.04% on the test-dev subset, which is substantially better than the previous SOTA method (DPNetV3) by 12.01%. In addition, on our optimized VisDrone2019 dataset, $AP^{\text{val}}$@0.5 and $AP^{\text{test}}$@0.5 reach 74.95% and 62.18%, respectively. Compared to YOLOv5, DCLANet improves by roughly 3.8%, which is encouraging and competitive.
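To illustrate the bottleneck attention idea mentioned above, here is a minimal NumPy sketch of channel-wise attention gating on a feature map. This is a hypothetical, simplified illustration, not the paper's exact module: global average pooling feeds a reduced "bottleneck" layer (with hypothetical weights `w1`, `w2` and reduction ratio `r`), and a sigmoid gate rescales each channel so informative channels (e.g., those responding to persons) can be emphasized.

```python
import numpy as np

def bottleneck_channel_attention(feat, w1, w2):
    """Simplified channel-attention sketch (not the paper's exact BAM).

    feat: (C, H, W) feature map
    w1:   (C // r, C) reduction weights (bottleneck)
    w2:   (C, C // r) expansion weights
    """
    c = feat.shape[0]
    pooled = feat.reshape(c, -1).mean(axis=1)        # global average pool -> (C,)
    hidden = np.maximum(w1 @ pooled, 0.0)            # bottleneck projection + ReLU
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))      # sigmoid channel gate in (0, 1)
    return feat * gate[:, None, None]                # reweight each channel

# Usage with random weights (illustrative only)
rng = np.random.default_rng(0)
C, r = 8, 4
feat = rng.standard_normal((C, 16, 16))
w1 = rng.standard_normal((C // r, C)) * 0.1
w2 = rng.standard_normal((C, C // r)) * 0.1
out = bottleneck_channel_attention(feat, w1, w2)
```

Because the gate is strictly between 0 and 1, each channel is attenuated rather than amplified in this sketch; in a trained detector the learned weights determine which channels survive the gating.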

Keywords