IEEE Access (Jan 2024)

Multi-Head Self-Attention-Incorporated YOLOv5s for Satellites Detection

  • Yufei Liu,
  • Weijie Wang,
  • Ruida Ye,
  • Yuan Ren,
  • Lifen Wang,
  • Weikun Pang

DOI
https://doi.org/10.1109/ACCESS.2024.3490376
Journal volume & issue
Vol. 12
pp. 167530–167541

Abstract

To address the challenges of detecting objects in images for in-orbit services, which often involve changes in object scale, we propose a satellite detection method based on YOLOv5s with a multi-head self-attention mechanism (MHSA-YOLOv5s). After mosaic data augmentation and resetting of the initial anchor boxes, satellite images are fed into the YOLOv5s network incorporating the multi-head self-attention mechanism. Features extracted from the C3 module of the backbone are concatenated with the down-sampled features in the Neck module, after which the multi-head self-attention module is added to strengthen the fusion of deep and shallow features. A comprehensive loss function is built around CIoU (complete intersection over union), combining bounding-box, category, and confidence terms. Three detection heads at different scales handle multi-scale object detection. Experiments were conducted on the SPEED+ and SPARK datasets. On the SPARK dataset, compared with the original YOLOv5s algorithm, the precision, recall, and mAP of MHSA-YOLOv5s increase by 1.6%, 3.3%, and 1.1%, respectively. The method converges faster and achieves higher accuracy for multi-scale object detection under complex backgrounds and varying perspectives, demonstrating its effectiveness and its potential for deployment in space applications.
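The multi-head self-attention block described in the abstract can be sketched in a few lines. This is a minimal, didactic pure-Python version, not the paper's implementation: it splits the feature dimension across heads, applies scaled dot-product self-attention per head, and concatenates the results. The learned Q/K/V/output projection matrices of a real MHSA layer are omitted here (identity projections are assumed) to keep the sketch self-contained.

```python
import math

def _softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def _matmul(a, b):
    # (n x k) @ (k x m) matrix product on nested lists.
    return [[sum(a[i][t] * b[t][j] for t in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def _self_attention(x):
    # Scaled dot-product self-attention with identity projections:
    # softmax(X X^T / sqrt(d)) @ X.
    d = len(x[0])
    xt = [list(col) for col in zip(*x)]            # X^T
    scores = _matmul(x, xt)                        # Q K^T with Q = K = X
    weights = [_softmax([s / math.sqrt(d) for s in row]) for row in scores]
    return _matmul(weights, x)                     # attention @ V with V = X

def multi_head_self_attention(x, num_heads):
    """Split features into `num_heads` chunks, attend per head, concat.

    `x` is a list of token feature vectors (e.g. flattened spatial
    positions of a feature map). Hypothetical sketch of the MHSA idea
    only; the paper's module sits inside the YOLOv5s Neck.
    """
    d = len(x[0])
    assert d % num_heads == 0, "feature dim must divide evenly"
    hd = d // num_heads
    heads = []
    for h in range(num_heads):
        sub = [row[h * hd:(h + 1) * hd] for row in x]
        heads.append(_self_attention(sub))
    # Concatenate head outputs back along the feature dimension.
    return [sum((heads[h][i] for h in range(num_heads)), [])
            for i in range(len(x))]
```

Because each output element is a convex combination (softmax-weighted) of the corresponding input column within a head, the output shape matches the input and values stay within the input range.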
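The CIoU term mentioned in the abstract penalizes overlap, center distance, and aspect-ratio mismatch together: CIoU = IoU − ρ²/c² − αv, with loss 1 − CIoU. Below is a standalone sketch of that bounding-box term for two corner-format boxes, assuming (x1, y1, x2, y2) coordinates; it is not the paper's code, and the full loss would also include the category and confidence terms.

```python
import math

def ciou_loss(box1, box2):
    """CIoU loss between two axis-aligned boxes (x1, y1, x2, y2).

    Hypothetical sketch: loss = 1 - (IoU - rho^2/c^2 - alpha * v),
    where rho^2 is the squared center distance, c^2 the squared
    diagonal of the smallest enclosing box, and v measures
    aspect-ratio inconsistency.
    """
    x1, y1, x2, y2 = box1
    X1, Y1, X2, Y2 = box2
    w1, h1 = x2 - x1, y2 - y1
    w2, h2 = X2 - X1, Y2 - Y1
    # Intersection area and IoU.
    iw = max(0.0, min(x2, X2) - max(x1, X1))
    ih = max(0.0, min(y2, Y2) - max(y1, Y1))
    inter = iw * ih
    union = w1 * h1 + w2 * h2 - inter
    iou = inter / union if union > 0 else 0.0
    # Squared distance between box centers (rho^2).
    rho2 = ((x1 + x2 - X1 - X2) / 2) ** 2 + ((y1 + y2 - Y1 - Y2) / 2) ** 2
    # Squared diagonal of the smallest enclosing box (c^2).
    cw = max(x2, X2) - min(x1, X1)
    ch = max(y2, Y2) - min(y1, Y1)
    c2 = cw ** 2 + ch ** 2
    # Aspect-ratio consistency term v and its trade-off weight alpha.
    v = (4 / math.pi ** 2) * (math.atan(w2 / h2) - math.atan(w1 / h1)) ** 2
    denom = (1 - iou) + v
    alpha = v / denom if denom > 0 else 0.0
    return 1 - (iou - rho2 / c2 - alpha * v)
```

Unlike plain IoU loss, this stays informative for non-overlapping boxes: identical boxes give a loss of 0, while disjoint boxes give a loss above 1 that grows with center distance.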

Keywords