Frontiers in Plant Science (Dec 2022)

Inter-row information recognition of maize in the middle and late stages via LiDAR supplementary vision

  • Zhiqiang Li,
  • Dongbo Xie,
  • Lichao Liu,
  • Hai Wang,
  • Liqing Chen

DOI
https://doi.org/10.3389/fpls.2022.1024360
Journal volume & issue
Vol. 13

Abstract

In the middle and late growth stages of maize, light is limited and non-maize obstacles are present, so a plant protection robot relying on traditional visual navigation loses part of the navigation information. This paper therefore proposes a method that supplements machine vision with LiDAR (laser imaging, detection and ranging) point cloud data to recognize inter-row information in the middle and late stages of maize. First, we improved the YOLOv5 (You Only Look Once, version 5) algorithm for the characteristics of the actual mid-to-late-stage maize inter-row environment by introducing MobileNetv2 and ECANet. Compared with the original YOLOv5, the improved model (Im-YOLOv5) increased the frame rate by 17.91% and reduced the weight size by 55.56%, while the average accuracy dropped by only 0.35%, improving detection performance and shortening model inference time. Second, we identified inter-row obstacles (such as stones and clods) from the LiDAR point cloud data to obtain auxiliary navigation information. Third, this auxiliary information was used to supplement the visual information, which not only improved the recognition accuracy of inter-row navigation information in the middle and late stages of maize but also provided a basis for the stable and efficient operation of inter-row plant protection robots in these stages. Experimental results from a data acquisition robot equipped with a camera and a LiDAR sensor demonstrate the efficacy and strong performance of the proposed method.
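The LiDAR-based step described above can be illustrated with a minimal pure-Python sketch. All details here are illustrative assumptions, not the paper's actual pipeline: points are given as (x, y, z) tuples in metres with z up, the robot straddles a single inter-row corridor of known half-width centred at y = 0, and any point inside the corridor rising more than a height threshold above the (assumed flat) ground plane is flagged as an obstacle such as a stone or clod.

```python
def detect_obstacles(points, corridor_half_width=0.3,
                     ground_z=0.0, height_threshold=0.05):
    """Flag LiDAR points as inter-row obstacles.

    Hypothetical parameters (not from the paper):
      corridor_half_width -- half the inter-row spacing covered by the robot (m)
      ground_z            -- assumed flat ground height (m)
      height_threshold    -- minimum rise above ground to count as an obstacle (m)
    Returns the subset of points treated as obstacles.
    """
    obstacles = []
    for x, y, z in points:
        in_corridor = abs(y) <= corridor_half_width   # between the crop rows
        above_ground = (z - ground_z) > height_threshold
        if in_corridor and above_ground:
            obstacles.append((x, y, z))
    return obstacles


# Example: a low stone inside the corridor is flagged; a tall maize-stalk
# point outside the corridor and a near-ground point are ignored.
pts = [(1.0, 0.1, 0.08),   # stone in the corridor, 8 cm high
       (1.0, 0.5, 0.40),   # maize stalk, outside the corridor
       (2.0, 0.0, 0.01)]   # ground return
print(detect_obstacles(pts))  # -> [(1.0, 0.1, 0.08)]
```

In practice the ground plane would be estimated from the point cloud itself (e.g. by plane fitting) rather than assumed flat, and flagged points would be clustered before being fused with the vision output; this sketch only shows the geometric gating idea.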
