Agronomy (Aug 2024)

Image Segmentation-Based Oilseed Rape Row Detection for Infield Navigation of Agri-Robot

  • Guoxu Li,
  • Feixiang Le,
  • Shuning Si,
  • Longfei Cui,
  • Xinyu Xue

DOI
https://doi.org/10.3390/agronomy14091886
Journal volume & issue
Vol. 14, no. 9
p. 1886

Abstract

The segmentation and extraction of oilseed rape crop rows are crucial steps in visual navigation line extraction. Agricultural autonomous navigation robots face challenges in path recognition in field environments owing to complex crop backgrounds and varying light intensities, which lead to poor segmentation and slow detection of navigation lines in oilseed rape crops. This paper therefore proposes VC-UNet, a lightweight semantic segmentation model that enhances U-Net. Specifically, VGG16 replaces the original backbone feature extraction network of U-Net, and a Convolutional Block Attention Module (CBAM) is integrated at the upsampling stage to enhance focus on segmentation targets. Furthermore, channel pruning of the network's convolutional layers is employed to optimize and accelerate the model. Trapezoidal crop-row ROI regions are delineated using an end-to-end vertical projection method with serialized region thresholds, and the centerline of each oilseed rape crop row is then fitted using the least squares method. Experimental results demonstrate an average accuracy of 94.11% for the model and an image processing speed of 24.47 fps. After transfer learning on soybean and maize crop rows, the average accuracy reaches 91.57%, indicating strong model robustness. The average yaw angle deviation of navigation line extraction is 3.76°, with an average offset of 6.13 pixels. The single-image transmission time is 0.009 s, ensuring real-time detection of navigation lines. This study provides upper-level technical support for the deployment of agricultural robots in field trials.
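
To make the post-processing pipeline described above more concrete (vertical projection to locate crop-row regions, followed by least-squares centerline fitting), the sketch below gives a minimal NumPy illustration. It is not the authors' implementation: it takes a binary crop mask as input, substitutes a simple column-sum threshold for the paper's trapezoidal ROI delineation and serialized region thresholds, and the function name crop_row_centerlines and the band_threshold value are assumptions made purely for illustration.

    # Minimal sketch, assuming a binary crop-row mask from the segmentation
    # network. Function names and thresholds are illustrative only.
    import numpy as np


    def crop_row_centerlines(mask: np.ndarray, band_threshold: float = 0.3):
        """Locate crop-row bands by vertical projection, then fit each band's
        centerline with least squares.

        mask: 2-D array, nonzero where the model labels crop pixels.
        band_threshold: fraction of the peak column sum used to mark row
                        bands (an assumed value, not taken from the paper).
        """
        mask = mask.astype(bool)
        h, w = mask.shape

        # Vertical projection: count crop pixels in every image column.
        column_sums = mask.sum(axis=0)

        # Columns whose projection exceeds the threshold belong to a band.
        active = column_sums > band_threshold * column_sums.max()

        # Group contiguous active columns into bands (one band per crop row).
        bands, start = [], None
        for x in range(w):
            if active[x] and start is None:
                start = x
            elif not active[x] and start is not None:
                bands.append((start, x))
                start = None
        if start is not None:
            bands.append((start, w))

        # For each band, fit x = a*y + b to the crop-pixel coordinates by
        # least squares; the fitted line is the row centerline.
        centerlines = []
        for x0, x1 in bands:
            ys, xs = np.nonzero(mask[:, x0:x1])
            if ys.size < 2:
                continue
            a, b = np.polyfit(ys, xs + x0, deg=1)
            centerlines.append((a, b))  # x = a*y + b in image coordinates
        return centerlines


    if __name__ == "__main__":
        # Toy example: two synthetic vertical "rows" in a 120x160 mask.
        demo = np.zeros((120, 160), dtype=np.uint8)
        demo[:, 38:44] = 1
        demo[:, 110:116] = 1
        print(crop_row_centerlines(demo))

Each returned (a, b) pair describes one row centerline; the navigation line and yaw angle reported in the abstract would then be derived from these fitted lines.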

Keywords