IEEE Access (Jan 2024)
PeachYOLO: A Lightweight Algorithm for Peach Detection in Complex Orchard Environments
Abstract
Precise fruit recognition is crucial for the automated picking of peaches. However, practical implementation encounters challenges, including high costs and low efficiency, which hinder the commercialization of picking robots. To tackle these challenges, this study establishes a synthetic peach dataset and introduces PeachYOLO, an efficient and lightweight model for peach object detection in complex orchard environments. Specifically, building on the lightweight object detection model You Only Look Once version 8 (YOLOv8), this study first replaces traditional convolutions in the detection head with Partial Convolution (PConv). This improvement reduces computational and memory requirements while still extracting spatial features effectively. Secondly, at the feature output of the neck network, Deformable Convolutional Networks version 2 (DCNv2) is employed in place of traditional convolutions to improve the recognition of irregularly shaped targets. Finally, Coordinate Attention (CA) is integrated into the head network to focus precisely on essential image information. Experimental results demonstrate that PeachYOLO achieves a mAP of 93.8%, surpassing the original model by 1.0%. Furthermore, PeachYOLO requires only 5.1 GFLOPs of computation, has 2.6M parameters, and achieves an inference time of 1.9 ms, reductions of 37.0%, 13.6%, and 5.6%, respectively, compared with the original YOLOv8n algorithm. These results underscore the substantial improvements in detection speed, accuracy, and model size offered by PeachYOLO. Moreover, its suitability for peach detection in intricate orchard settings lays the groundwork for the realization of unmanned intelligent peach picking.
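To illustrate the core idea behind PConv mentioned above, the following is a minimal NumPy sketch, not the paper's implementation: a regular convolution is applied to only the first C/n_div channels while the remaining channels pass through unchanged, which is where the FLOP savings come from. The partial ratio n_div=4 and the 3x3 kernel size are assumptions for illustration.

```python
import numpy as np

def partial_conv(x, weight, n_div=4):
    """Simplified Partial Convolution (PConv) sketch.

    Only the first Cp = C // n_div channels are convolved with a 3x3
    kernel (stride 1, zero padding); the remaining channels are passed
    through untouched, reducing computation versus a full convolution.

    x:      (C, H, W) input feature map
    weight: (Cp, Cp, 3, 3) kernel acting on the convolved slice
    """
    C, H, W = x.shape
    cp = C // n_div
    out = x.copy()                               # untouched channels kept as-is
    xp = np.pad(x[:cp], ((0, 0), (1, 1), (1, 1)))  # zero-pad the convolved slice
    conv = np.zeros((cp, H, W))
    for o in range(cp):                          # naive direct convolution
        for i in range(cp):
            for dh in range(3):
                for dw in range(3):
                    conv[o] += weight[o, i, dh, dw] * xp[i, dh:dh + H, dw:dw + W]
    out[:cp] = conv
    return out
```

With n_div=4, roughly (1/n_div)^2 of a full convolution's multiply-accumulates remain for the convolved slice, consistent with the memory- and compute-saving motivation stated in the abstract.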
Keywords