Mathematical Biosciences and Engineering (Jan 2024)
Multi-objective pedestrian tracking method based on YOLOv8 and improved DeepSORT
Abstract
A multi-objective pedestrian tracking method based on you only look once-v8 (YOLOv8) and an improved simple online and realtime tracking with a deep association metric (DeepSORT) was proposed to address the partial occlusion and identity (ID) switching that frequently arise when tracking pedestrians in real, complex traffic scenes. First, YOLOv8, which offers strong small-scale feature representation, was adopted as the detector to enhance the feature extraction network's ability to learn target feature information in crowded traffic scenes. Second, the omni-scale network (OSNet) was integrated into DeepSORT as the appearance feature extractor to achieve real-time, synchronized target tracking; by dynamically fusing the features extracted at multiple scales, it improves the recognition of image edge information. Third, an adaptive forgetting smoothing Kalman filtering algorithm (FSA) was designed to accommodate the nonlinear behavior of pedestrian trajectories in traffic scenes, thereby addressing the poor predictions caused by the linear state equation of the standard Kalman filter. Fourth, the original intersection over union (IOU) association matching in DeepSORT was replaced with complete intersection over union (CIOU) matching to reduce missed and false detections of target pedestrians and to improve the accuracy of data association. Finally, a generalized trajectory feature extractor model (GFModel) was developed to tightly merge local and global information through an average pooling operation, yielding more precise tracking results and further reducing the influence of disturbances on target tracking. The fusion of YOLOv8 with the improved DeepSORT based on OSNet, FSA and GFModel is named YOFGD. Experimental results show that YOFGD achieves an accuracy of 77.9% and a speed of 55.8 frames per second (FPS), which is sufficient to meet the demands of real-world applications.
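For context on the association step, the widely used CIOU metric (which the abstract refers to; the exact form adopted in YOFGD may differ) augments IOU with a center-distance term and an aspect-ratio consistency term:

$$\mathrm{CIOU} = \mathrm{IOU} - \frac{\rho^{2}\!\left(\mathbf{b},\,\mathbf{b}^{gt}\right)}{c^{2}} - \alpha v, \qquad v = \frac{4}{\pi^{2}}\left(\arctan\frac{w^{gt}}{h^{gt}} - \arctan\frac{w}{h}\right)^{2}, \qquad \alpha = \frac{v}{(1-\mathrm{IOU}) + v},$$

where $\rho(\mathbf{b},\mathbf{b}^{gt})$ is the Euclidean distance between the centers of the predicted and ground-truth boxes, $c$ is the diagonal length of the smallest box enclosing both, and $w$, $h$, $w^{gt}$, $h^{gt}$ are the respective box widths and heights.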
Keywords