Artificial Intelligence in Agriculture (Jan 2022)
A study on deep learning algorithm performance on weed and crop species identification under different image backgrounds
Abstract
Weed identification is fundamental to developing a deep learning-based weed control system. Deep learning algorithms help build a weed detection model from weed and crop images. Dynamic environmental conditions such as ambient lighting, moving cameras, or varying image backgrounds can affect the performance of deep learning algorithms, yet there are limited studies on how different image backgrounds impact deep learning algorithms for weed identification. The objective of this research was to test deep learning weed identification model performance on images with potting mix (non-uniform) and black pebbled (uniform) backgrounds interchangeably. The weed and crop images were acquired by four Canon digital cameras in the greenhouse under both uniform and non-uniform background conditions. Three deep learning architectures, a Convolutional Neural Network (CNN), Visual Geometry Group (VGG16), and Residual Network (ResNet50), were used to build weed classification models. The model built from uniform background images was tested on images with a non-uniform background, and the model built from non-uniform background images was tested on images with a uniform background. Results showed that the VGG16 and ResNet50 models built from non-uniform background images achieved average f1-scores of 82.75% and 75%, respectively, when evaluated on uniform background images. Conversely, the VGG16 and ResNet50 models built from uniform background images achieved average f1-scores of 77.5% and 68.4%, respectively, when evaluated on non-uniform background images. Both the VGG16 and ResNet50 models' performances improved, with average f1-score values between 92% and 99%, when both uniform and non-uniform background images were used to build the model.
It appears that model performance is reduced when models are tested on images whose object background differs from the one used for model building.
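The evaluation protocol described above scores each model with a macro-averaged f1-score (the mean of per-class f1-scores). A minimal sketch of that metric is shown below; the class names and label sequences are purely illustrative, not data from the study, and in practice the predictions would come from a trained VGG16 or ResNet50 classifier applied to held-out images from the other background condition.

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged f1-score: the unweighted mean of per-class f1-scores."""
    classes = sorted(set(y_true) | set(y_pred))
    scores = []
    for c in classes:
        # Per-class counts of true positives, false positives, false negatives.
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * precision * recall / (precision + recall)
                      if precision + recall else 0.0)
    return sum(scores) / len(scores)

# Hypothetical labels: a model built on uniform-background images,
# evaluated on non-uniform-background images (illustrative only).
y_true = ["weed", "weed", "crop", "crop", "weed", "crop"]
y_pred = ["weed", "crop", "crop", "crop", "weed", "weed"]
print(round(macro_f1(y_true, y_pred), 3))
```

Averaging f1 per class rather than pooling all predictions keeps the metric sensitive to minority species, which matters when some weed classes have far fewer images than others.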