IEEE Access (Jan 2022)

Identification and Classification of Mechanical Damage During Continuous Harvesting of Root Crops Using Computer Vision Methods

  • Aleksey Osipov,
  • Vyacheslav Shumaev,
  • Adam Ekielski,
  • Timur Gataullin,
  • Stanislav Suvorov,
  • Sergey Mishurov,
  • Sergey Gataullin

DOI
https://doi.org/10.1109/ACCESS.2022.3157619
Journal volume & issue
Vol. 10
pp. 28885–28894

Abstract

Detecting mechanically damaged sugar beet roots with machine learning methods is necessary for fine-tuning beet harvester units. An Agrifac HEXX TRAXX harvester fitted with a computer vision system was investigated. A video camera (24 fps), connected to a single-board computer (SBC), was installed above the turbine that receives the dug-out beets after the digger. At the preprocessing stage, static and insignificant image details were identified with the Canny edge detector and the excess green minus excess red (ExGR) index and were excluded from the image. The remaining areas were stitched together with similar areas of another image, which halved the number of images entering the second preprocessing stage, where Otsu's binarization was applied. The main image-processing stage is divided into two sub-stages: detection and classification. An improved YOLOv4-tiny method was chosen for root crop detection on the SBC; it processes up to 14 images of 416 × 416 pixels per second with 86% precision and 91% recall. To classify root crop damage, two candidate algorithms were considered: (1) bag of visual words (BoVW) with a support vector machine (SVM) classifier using histogram of oriented gradients (HOG) and scale-invariant feature transform (SIFT) descriptors, and (2) a convolutional neural network (CNN). Under normal lighting conditions, the CNN showed the best accuracy, 99%. The implemented methods were then applied to blurred images of sugar beet roots that had previously been rejected: the improved YOLOv4-tiny achieved 74% precision and 70% recall, and the CNN classification accuracy was 92.6%.
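
To give a concrete picture of the preprocessing the abstract describes (ExGR masking, Canny edges, Otsu binarization), here is a minimal OpenCV sketch. The thresholds, the dilation kernel, and the way the flagged regions are combined are illustrative assumptions, not the authors' implementation.

```python
# Illustrative preprocessing sketch: ExGR vegetation masking, Canny edge
# detection, and Otsu binarization. Parameter values are assumptions.
import cv2
import numpy as np

def exgr_mask(bgr: np.ndarray) -> np.ndarray:
    """Binary vegetation mask from the ExGR index (ExG - ExR > 0)."""
    b, g, r = cv2.split(bgr.astype(np.float32) / 255.0)
    exg = 2.0 * g - r - b   # excess green
    exr = 1.4 * r - g       # excess red
    return ((exg - exr) > 0).astype(np.uint8) * 255

def preprocess(bgr: np.ndarray) -> np.ndarray:
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)  # edge map of static details
    mask = exgr_mask(bgr)
    # Suppress regions flagged as insignificant (plant matter from ExGR
    # plus dilated edge clutter) before binarization.
    clutter = cv2.dilate(edges, np.ones((3, 3), np.uint8)) | mask
    cleaned = cv2.bitwise_and(gray, gray, mask=cv2.bitwise_not(clutter))
    # Otsu's method selects the binarization threshold automatically.
    _, binary = cv2.threshold(cleaned, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary
```

For the detection sub-stage, the following sketch shows how a YOLOv4-tiny detector can be run at 416 × 416 with OpenCV's DNN module. The cfg/weights file names and the confidence/NMS thresholds are placeholders; the paper's "improved" variant and its trained weights are not reproduced here.

```python
# Sketch of YOLOv4-tiny inference via OpenCV DNN. File names are
# hypothetical placeholders, not artifacts released with the paper.
import cv2

net = cv2.dnn.readNetFromDarknet("yolov4-tiny-beets.cfg",
                                 "yolov4-tiny-beets.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

frame = cv2.imread("frame.jpg")
class_ids, confidences, boxes = model.detect(frame,
                                             confThreshold=0.5,
                                             nmsThreshold=0.4)
for cid, conf, box in zip(class_ids, confidences, boxes):
    x, y, w, h = box  # draw each detected root crop
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```

Of the two classification candidates, the BoVW + SVM branch is sketched below with SIFT descriptors and scikit-learn for clustering and classification. The vocabulary size, SVM kernel, and data handling are assumptions; the HOG variant and the CNN candidate are omitted for brevity.

```python
# Minimal bag-of-visual-words sketch: SIFT descriptors, k-means
# vocabulary, histogram encoding, SVM classifier. Hyperparameters
# are illustrative assumptions.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

sift = cv2.SIFT_create()

def descriptors(img_gray):
    _, desc = sift.detectAndCompute(img_gray, None)
    return desc if desc is not None else np.empty((0, 128), np.float32)

def encode(desc, kmeans, k):
    """Normalized histogram of visual-word assignments for one image."""
    hist = np.zeros(k, np.float32)
    if len(desc):
        for w in kmeans.predict(desc):
            hist[w] += 1
        hist /= hist.sum()
    return hist

def fit_bovw(train_imgs, train_labels, k=200):
    # train_imgs: list of grayscale images; train_labels: damage classes
    all_desc = np.vstack([descriptors(im) for im in train_imgs])
    kmeans = KMeans(n_clusters=k, n_init=10).fit(all_desc)
    X = np.array([encode(descriptors(im), kmeans, k) for im in train_imgs])
    svm = SVC(kernel="rbf").fit(X, train_labels)
    return kmeans, svm
```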
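A test image can then be classified by encoding it against the fitted vocabulary and calling `svm.predict`, mirroring how the abstract compares this branch against the CNN under normal and degraded lighting.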
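Note that all three sketches above use generic OpenCV and scikit-learn APIs; any pipeline details beyond what the abstract states (thresholds, vocabulary size, file names) are stated assumptions rather than the authors' published configuration.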

Keywords