Sensors (Sep 2020)

Autonomous Crop Row Guidance Using Adaptive Multi-ROI in Strawberry Fields

  • Vignesh Raja Ponnambalam,
  • Marianne Bakken,
  • Richard J. D. Moore,
  • Jon Glenn Omholt Gjevestad,
  • Pål Johan From

DOI: https://doi.org/10.3390/s20185249
Journal volume & issue: Vol. 20, no. 18, p. 5249

Abstract

Automated robotic platforms are an important part of precision agriculture solutions for sustainable food production. Agri-robots require robust and accurate guidance systems in order to navigate between crops and to travel to and from their base station. Onboard sensors such as machine vision cameras offer a flexible guidance alternative to more expensive solutions suited to structured environments, such as scanning lidar or RTK-GNSS. The main challenges for visual crop row guidance are the dramatic differences in crop appearance between farms and throughout the season, and the variations in crop spacing and in the contours of the crop rows. Here we present a visual guidance pipeline for an agri-robot operating in strawberry fields in Norway, based on semantic segmentation with a convolutional neural network (CNN) that segments input RGB images into crop and not-crop (i.e., drivable terrain) regions. To handle the uneven contours of crop rows in Norway’s hilly agricultural regions, we develop a new adaptive multi-ROI method for fitting trajectories to the drivable regions. We test our approach in open-loop trials with a real agri-robot operating in the field and show that it compares favourably to traditional guidance approaches.
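
To make the multi-ROI idea concrete, the sketch below is a minimal illustration, not the authors' implementation: all names are hypothetical, and it simplifies the paper's adaptive ROIs to fixed horizontal strips. It takes a binary drivable-terrain mask, of the kind a segmentation CNN would output, computes the centroid of the drivable pixels in each strip, and fits a straight line through the centroids as the row-following trajectory.

    import numpy as np

    def fit_row_trajectory(mask: np.ndarray, n_rois: int = 8):
        """Fit a straight guidance line to the drivable region of a
        binary segmentation mask (1 = drivable, 0 = crop).

        The mask is split into n_rois horizontal strips; the centroid
        of the drivable pixels in each strip becomes one trajectory
        point. (The paper's method adapts the ROIs to the row
        contours; fixed strips are used here only for brevity.)
        """
        h, _ = mask.shape
        centroids = []
        for i in range(n_rois):
            top, bottom = i * h // n_rois, (i + 1) * h // n_rois
            ys, xs = np.nonzero(mask[top:bottom])
            if xs.size:                 # skip strips with no drivable pixels
                centroids.append((top + ys.mean(), xs.mean()))
        if len(centroids) < 2:          # not enough points for a line fit
            return None
        pts = np.asarray(centroids)
        # Least-squares line col = a * row + b through the strip centroids.
        a, b = np.polyfit(pts[:, 0], pts[:, 1], deg=1)
        return a, b

    # Synthetic example: a drivable corridor drifting right down the image.
    mask = np.zeros((100, 100), dtype=np.uint8)
    for r in range(100):
        mask[r, 40 + r // 5 : 60 + r // 5] = 1
    a, b = fit_row_trajectory(mask)
    print(f"trajectory: col = {a:.2f} * row + {b:.2f}")

In a closed-loop system, the fitted line's lateral offset and slope at the bottom of the image could serve as cross-track and heading errors for a steering controller, though the specific control scheme here is an assumption rather than something stated in the abstract.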

Keywords