Plant Phenome Journal (Dec 2023)

Self‐supervised learning improves classification of agriculturally important insect pests in plants

  • Soumyashree Kar,
  • Koushik Nagasubramanian,
  • Dinakaran Elango,
  • Matthew E. Carroll,
  • Craig A. Abel,
  • Ajay Nair,
  • Daren S. Mueller,
  • Matthew E. O'Neal,
  • Asheesh K. Singh,
  • Soumik Sarkar,
  • Baskar Ganapathysubramanian,
  • Arti Singh

DOI
https://doi.org/10.1002/ppj2.20079
Journal volume & issue
Vol. 6, no. 1

Abstract

Insect pests cause significant damage to food production, so early detection and efficient mitigation strategies are crucial. There is a continual shift toward machine learning (ML)-based approaches for automating agricultural pest detection. Although supervised learning has achieved remarkable progress in this regard, it is impeded by the need for significant expert involvement in labeling the training data, which makes real-world applications tedious and often infeasible. Recently, self-supervised learning (SSL) approaches have provided a viable alternative for training ML models with minimal annotations. Here, we present an SSL approach to classify 22 insect pests. The framework was assessed on raw and foreground-segmented field-captured images using three SSL methods: Nearest Neighbor Contrastive Learning of Visual Representations (NNCLR), Bootstrap Your Own Latent (BYOL), and Barlow Twins. SSL pre-training was performed on ResNet-18 and ResNet-50 backbones with each of the three methods, and the resulting representations were evaluated using both linear probing and end-to-end fine-tuning. The SSL-pre-trained convolutional neural network models performed annotation-efficient classification, with NNCLR the best-performing SSL method under both linear probing and full-model fine-tuning. With just 5% of images annotated, transfer learning with ImageNet initialization obtained 74% accuracy, whereas NNCLR achieved an improved 79% accuracy with end-to-end fine-tuning. Models created using SSL pre-training consistently performed better, especially under very low annotation budgets, and were robust to class imbalance. These approaches help overcome annotation bottlenecks and are resource efficient.
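To make the NNCLR idea above concrete: NNCLR extends contrastive learning by swapping each embedding for its nearest neighbor in a support set (a queue of past embeddings) before computing the InfoNCE loss. Below is a minimal illustrative sketch in PyTorch, not the authors' implementation; the function name nnclr_loss, the tensor shapes, and the temperature default are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def nnclr_loss(z1, p2, support_set, temperature=0.1):
    """NNCLR-style contrastive loss (illustrative sketch, not the paper's code).

    z1:          (B, D) projected embeddings of augmented view 1
    p2:          (B, D) predicted embeddings of augmented view 2
    support_set: (Q, D) queue of past embeddings (the nearest-neighbor memory)
    """
    z1 = F.normalize(z1, dim=1)
    p2 = F.normalize(p2, dim=1)
    support = F.normalize(support_set, dim=1)

    # Swap each view-1 embedding for its nearest neighbor in the support set
    nn_idx = (z1 @ support.t()).argmax(dim=1)  # (B,) nearest-neighbor indices
    nn1 = support[nn_idx]                      # (B, D) neighbor embeddings

    # InfoNCE: each neighbor should match the other view's prediction
    logits = nn1 @ p2.t() / temperature        # (B, B) similarity matrix
    labels = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, labels)
```

Likewise, the two evaluation protocols mentioned (linear probing versus end-to-end fine-tuning) differ only in whether the pre-trained backbone is frozen. A hedged sketch, assuming torchvision's ResNet-18 and a hypothetical SSL checkpoint path:

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 22  # the insect pest classes studied in the paper

def build_classifier(ssl_weights_path, linear_probe=True):
    """Attach a 22-way head to an SSL-pre-trained ResNet-18 (sketch).

    linear_probe=True  -> freeze the backbone, train only the new head
    linear_probe=False -> end-to-end fine-tuning of all layers
    """
    model = models.resnet18(weights=None)
    state = torch.load(ssl_weights_path, map_location="cpu")  # hypothetical checkpoint
    model.load_state_dict(state, strict=False)  # SSL checkpoints usually lack a head

    if linear_probe:
        for param in model.parameters():
            param.requires_grad = False  # linear probing: backbone stays fixed

    # Fresh 22-way linear classifier; its parameters are trainable in both modes
    model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
    return model
```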