Sensors (Nov 2023)

Improved YOLOv3 Integrating SENet and Optimized GIoU Loss for Occluded Pedestrian Detection

  • Qiangbo Zhang,
  • Yunxiang Liu,
  • Yu Zhang,
  • Ming Zong,
  • Jianlin Zhu

DOI
https://doi.org/10.3390/s23229089
Journal volume & issue
Vol. 23, no. 22
p. 9089

Abstract


Occluded pedestrian detection remains challenging: in crowded scenes, occlusion produces false positives and false negatives that reduce detection accuracy. To address this problem, we propose an improved you-only-look-once version 3 (YOLOv3) that integrates squeeze-and-excitation networks (SENet) and an optimized generalized intersection over union (GIoU) loss for occluded pedestrian detection, named YOLOv3-Occlusion (YOLOv3-Occ). The proposed network incorporates SENet into YOLOv3 so that greater weights are assigned to features from the unoccluded parts of pedestrians, mitigating insufficient feature extraction from the visible regions. For the loss function, a new generalized intersection over union with intersection over ground truth (GIoUIoG) loss is developed on the basis of the GIoU loss to keep the areas of predicted pedestrian boxes stable, which alleviates inaccurate pedestrian localization. The proposed method, YOLOv3-Occ, was validated on the CityPersons and COCO2014 datasets. Experimental results show that it obtains a 1.2% MR−2 gain on the CityPersons dataset and a 0.7% mAP@50 improvement on the COCO2014 dataset.
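The SENet channel attention the abstract refers to can be illustrated with a minimal squeeze-and-excitation sketch in plain Python: global average pooling per channel (squeeze), a bottleneck of two fully connected layers with ReLU and sigmoid (excitation), then per-channel rescaling. The weight matrices `w_reduce` and `w_expand` are placeholders for learned parameters, not values from the paper.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def se_channel_attention(channel_maps, w_reduce, w_expand):
    """Minimal squeeze-and-excitation sketch (not the paper's exact layer).

    channel_maps: list of 2D feature maps (one per channel).
    w_reduce:     (C/r) x C bottleneck weights, assumed learned.
    w_expand:     C x (C/r) expansion weights, assumed learned.
    Returns the channel maps rescaled by their attention scores.
    """
    # Squeeze: global average pooling gives one descriptor per channel.
    z = [sum(sum(row) for row in fmap) / (len(fmap) * len(fmap[0]))
         for fmap in channel_maps]
    # Excitation: FC -> ReLU -> FC -> sigmoid produces per-channel scores.
    hidden = [max(0.0, sum(w * zc for w, zc in zip(row, z))) for row in w_reduce]
    scores = [sigmoid(sum(w * h for w, h in zip(row, hidden))) for row in w_expand]
    # Scale: channels carrying stronger evidence (e.g. unoccluded pedestrian
    # parts) are emphasized, weaker ones suppressed.
    return [[[s * v for v in row] for row in fmap]
            for s, fmap in zip(scores, channel_maps)]
```

In YOLOv3-Occ this kind of block would sit inside the backbone so the detector weights visible-part features more heavily; the placement and reduction ratio used in the paper are not given in the abstract.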
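The GIoUIoG loss can be sketched as the standard GIoU loss plus an intersection-over-ground-truth (IoG) term; since the abstract does not give the exact formulation, the additive combination and the weighting factor `lam` below are assumptions for illustration only. Boxes are `(x1, y1, x2, y2)`.

```python
def box_area(b):
    # Area of an axis-aligned box (x1, y1, x2, y2); degenerate boxes give 0.
    return max(0.0, b[2] - b[0]) * max(0.0, b[3] - b[1])

def giou_iog_loss(pred, gt, lam=1.0):
    """Hedged sketch of a GIoU loss augmented with an IoG term.

    The paper's exact GIoUIoG definition is not in the abstract; this
    combines the standard GIoU penalty with 1 - IoG, weighted by `lam`.
    """
    # Intersection of the predicted and ground-truth boxes.
    ix1, iy1 = max(pred[0], gt[0]), max(pred[1], gt[1])
    ix2, iy2 = min(pred[2], gt[2]), min(pred[3], gt[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = box_area(pred) + box_area(gt) - inter
    iou = inter / union if union > 0 else 0.0
    # Smallest enclosing box C gives the GIoU penalty term.
    cx1, cy1 = min(pred[0], gt[0]), min(pred[1], gt[1])
    cx2, cy2 = max(pred[2], gt[2]), max(pred[3], gt[3])
    c_area = (cx2 - cx1) * (cy2 - cy1)
    giou = iou - (c_area - union) / c_area if c_area > 0 else iou
    # IoG penalizes predictions that fail to cover the ground-truth box,
    # discouraging shrunken boxes in crowded scenes (assumed role).
    iog = inter / box_area(gt) if box_area(gt) > 0 else 0.0
    return (1.0 - giou) + lam * (1.0 - iog)
```

A perfectly matched prediction yields zero loss, while a prediction that covers only part of the ground truth is penalized by both the GIoU and the IoG terms.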

Keywords