Heliyon (Oct 2024)

Based on improved joint detection and tracking of UAV for multi-target detection of livestock

  • Peng Shen,
  • Fulong Wang,
  • Wei Luo,
  • Yongxiang Zhao,
  • Lin Li,
  • Guoqing Zhang,
  • Yuchen Zhu

Journal volume & issue
Vol. 10, no. 19
p. e38316

Abstract


In agriculture, and specifically in livestock monitoring, the ability of drones to track multiple targets is essential for advancing the field. However, limited onboard computing resources and unpredictable drone movements often cause problems such as blurred video frames, object occlusions, and scale variations. These inconsistencies reduce tracking accuracy and make traditional algorithms inadequate for drone footage. This study introduces an enhanced deep learning-based multi-target tracking framework for drones that supports real-time processing. The proposed method combines object detection and tracking by using consecutive frame pairs to extract and share features, improving computational efficiency. It employs multiple loss functions to address class and sample distribution imbalances and includes a composite deblurring module to improve detection accuracy. Object association relies on a dual-regression bounding-box technique that supports both object identity verification and motion prediction. Real-time tracking is achieved by predicting each object's location in the subsequent frame. Evaluation against leading benchmarks shows that the system improves both precision and speed, achieving a 4.3% increase in Multi-Object Tracking Accuracy (MOTA) and a 7.7% boost in F1 score.
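To make the association step described above concrete, the sketch below illustrates the general idea of matching track boxes regressed into the next frame against new detections by IoU overlap. This is a minimal, hypothetical sketch only: the function names (`iou`, `associate`), the greedy matching scheme, and the threshold value are illustrative assumptions and do not reproduce the authors' implementation.

```python
# Hypothetical sketch: associate boxes predicted from frame t with detections in frame t+1.
# Greedy IoU matching is an assumed stand-in for the paper's dual-regression association.

def iou(a, b):
    """IoU of two boxes given as [x1, y1, x2, y2]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(predicted_boxes, detections, iou_thresh=0.5):
    """Greedily match each track's predicted box to the best-overlapping detection.

    Returns (matches, unmatched_track_ids, unmatched_detection_indices).
    """
    matches, used_dets = [], set()
    for t_id, p_box in predicted_boxes.items():
        best_j, best_iou = None, iou_thresh
        for j, d_box in enumerate(detections):
            if j in used_dets:
                continue
            score = iou(p_box, d_box)
            if score > best_iou:
                best_j, best_iou = j, score
        if best_j is not None:
            matches.append((t_id, best_j))
            used_dets.add(best_j)
    matched_tracks = {t for t, _ in matches}
    unmatched_tracks = [t for t in predicted_boxes if t not in matched_tracks]
    unmatched_dets = [j for j in range(len(detections)) if j not in used_dets]
    return matches, unmatched_tracks, unmatched_dets

if __name__ == "__main__":
    # Track boxes regressed from frame t into frame t+1 (made-up example values).
    predicted = {0: [10, 10, 50, 50], 1: [100, 100, 140, 150]}
    # Detector output on frame t+1.
    detections = [[12, 11, 52, 49], [200, 200, 240, 240]]
    print(associate(predicted, detections))
```

In this toy example, track 0 is matched to the first detection, track 1 goes unmatched, and the second detection would spawn a new track; the paper's method additionally uses the regressed boxes for identity verification, which is omitted here.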

Keywords