IEEE Access (Jan 2024)

Orchard Vision Navigation Line Extraction Based on YOLOv8-Trunk Detection

  • Ziang Cao,
  • Changzhi Gong,
  • Junjie Meng,
  • Lu Liu,
  • Yuan Rao,
  • Wenhui Hou

DOI
https://doi.org/10.1109/ACCESS.2024.3422422
Journal volume & issue
Vol. 12
pp. 104126 – 104137

Abstract


Visual navigation is a pivotal technology for enabling autonomous operation of orchard robots. To obtain orchard navigation lines, the robot needs to quickly identify the positions of tree trunks. To this end, we propose a detection model called YOLOv8-Trunk in this study. Based on the vine trunk detections produced by YOLOv8-Trunk, the network generates a series of center-point coordinates at the bottom of the detection boxes. The least squares method is then employed to fit reference lines on both sides of the trunk rows, thereby determining the navigation path for the orchard robot. To enhance focus on the target, an efficient multi-scale attention (EMA) mechanism is introduced into the traditional YOLOv8 network. At the data level, we adopt a novel Mix-Shelter method to augment the datasets used for training the detection model, thereby bolstering its robustness. In addition, we explore the impact of loss functions and optimizers on the performance of the detection model. A comprehensive set of ablation and comparison experiments is conducted in this study. The experimental results confirm that the YOLOv8-Trunk network adeptly detects vine tree trunks, achieving an accuracy rate of 92.7%. The navigation path obtained from the detection results is reliable. This study provides a valuable reference for realizing intelligent inspection in orchards.
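
The abstract's pipeline (detection boxes → bottom-center points → least squares reference lines → navigation path) can be illustrated with a minimal sketch. This is not the authors' implementation: the box format, the left/right grouping rule (split at the image center line), and the use of the midline between the two fitted reference lines as the path are all assumptions made for illustration.

```python
import numpy as np

def navigation_line_from_boxes(boxes, image_width):
    """Sketch: derive a navigation line from trunk bounding boxes.

    boxes: array of [x1, y1, x2, y2] trunk detections (hypothetical format).
    Returns (slope, intercept) for the left and right reference lines and
    for their midline, with x expressed as a function of y in image space.
    """
    boxes = np.asarray(boxes, dtype=float)
    # Bottom-center point of each detection box approximates the trunk base.
    cx = (boxes[:, 0] + boxes[:, 2]) / 2.0
    cy = boxes[:, 3]

    # Assumed grouping rule: trunks left/right of the image center line
    # belong to the left/right row (the paper does not specify this here).
    left = cx < image_width / 2.0
    right = ~left

    def fit(xs, ys):
        # Least-squares fit x = a*y + b, so near-vertical rows stay stable.
        a, b = np.polyfit(ys, xs, 1)
        return a, b

    al, bl = fit(cx[left], cy[left])
    ar, br = fit(cx[right], cy[right])
    # Midline between the two fitted reference lines serves as the path.
    return (al, bl), (ar, br), ((al + ar) / 2.0, (bl + br) / 2.0)
```

Fitting x as a function of y keeps the regression well conditioned for rows of trunks that appear nearly vertical in the image, which is the typical viewpoint of a ground robot driving between rows.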

Keywords