Journal of Shanghai Jiao Tong University (Nov 2024)

Airfield Multi-Scale Object Detection for Visual Navigation in Civil Aircraft

  • ZHANG Tao, ZHANG Xuerui, CHEN Yong, ZHONG Kelin, LUO Qijun

DOI
https://doi.org/10.16183/j.cnki.jsjtu.2024.206
Journal volume & issue
Vol. 58, no. 11
pp. 1816 – 1825

Abstract


The visual assistance driving system for civil aircraft captures information about surrounding threats using airborne visual sensors, providing pilots with additional information to aid decision-making. However, the threat objects in the airfield captured by the airborne optical sensors differ significantly in scale, and the computing capacity of the onboard platform is limited, so current object detection methods do not meet the requirements of visual assistance driving scenarios. To address this issue, a lightweight multi-scale object detection algorithm based on YOLOv5s is proposed. First, the CA-BiFPN feature fusion network is designed by combining the weighted bidirectional feature pyramid network (BiFPN) with the coordinate attention (CA) mechanism, which enhances the feature representation of small objects and improves the model's capacity to learn multi-scale objects. Then, a GSConv decoupled detection head is designed to improve detection accuracy by making classification and regression independent. To increase the detection speed of the network and enable real-time detection of airfield objects, a cross-level partial lightweight neck module is designed to reduce the additional parameters introduced by the decoupled head. A self-built multi-scale airfield object dataset, containing real-world and simulated data from airborne visual sensors from a civil aircraft perspective, is established to verify the performance of the proposed algorithm. Experiments on this dataset demonstrate that the detection accuracy of the proposed algorithm surpasses that of Faster R-CNN, SSD, and other classic multi-scale object detection algorithms such as YOLOv6, YOLOv7, and YOLOX. The achieved mean average precision is 71.40%, which is 4.19 percentage points higher than that of YOLOv5s.
Furthermore, the detection frame rate reaches 71 frames per second on the simulated airborne computing platform, satisfying the real-time detection requirement.
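The coordinate attention mechanism mentioned in the abstract reweights a feature map using two direction-aware gates, one pooled along the width and one along the height, so that small objects' spatial positions are preserved in the channel attention. The following is a minimal NumPy sketch of that idea, not the paper's implementation: the function name and the per-channel weights `w_h` and `w_w` are hypothetical stand-ins for the 1x1 convolutions of the full module.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def coordinate_attention(x, w_h, w_w):
    """Simplified coordinate attention on a (C, H, W) feature map.

    w_h, w_w: hypothetical per-channel scalar weights standing in
    for the shared 1x1 convolutions of the full CA module.
    """
    # Encode each spatial direction separately: average-pool
    # along width -> (C, H) and along height -> (C, W).
    pooled_h = x.mean(axis=2)
    pooled_w = x.mean(axis=1)
    # Per-channel transform followed by a sigmoid gate in each direction.
    a_h = sigmoid(w_h[:, None] * pooled_h)   # (C, H), values in (0, 1)
    a_w = sigmoid(w_w[:, None] * pooled_w)   # (C, W), values in (0, 1)
    # Reweight the input with both directional attention maps;
    # broadcasting restores the (C, H, W) shape.
    return x * a_h[:, :, None] * a_w[:, None, :]

# Example: a random feature map keeps its shape after attention.
feat = np.random.rand(8, 16, 16).astype(np.float32)
out = coordinate_attention(feat, np.ones(8), np.ones(8))
```

Because both gates lie in (0, 1), the module can only attenuate responses, steering the network toward the rows and columns where an object actually sits; in the full module the 1x1 convolutions learn which channels and positions to keep.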

Keywords