Plant Phenome Journal (Dec 2024)

Manifold and spatiotemporal learning on multispectral unoccupied aerial system imagery for phenotype prediction

  • Fared Farag,
  • Trevis D. Huggins,
  • Jeremy D. Edwards,
  • Anna M. McClung,
  • Ahmed A. Hashem,
  • Jason L. Causey,
  • Emily S. Bellis

DOI
https://doi.org/10.1002/ppj2.70006
Journal volume & issue
Vol. 7, no. 1

Abstract

Timeseries data captured by unoccupied aircraft systems (UASs) are increasingly used for agricultural applications requiring accurate prediction of plant phenotypes from remotely sensed imagery. However, prediction models often fail to generalize well from one year to the next or to new environments. Here, we investigate the ability of various machine learning (ML) approaches to improve yield prediction accuracy in new environments from multispectral timeseries imagery acquired on a set of rice (Oryza sativa L.) experiments with different management treatments and varieties. We also trained deep learning models that perform automated feature extraction and compared these against a suite of other approaches. On a held-out growing season, a spatiotemporal model (a three-dimensional convolutional neural network) trained on raw images performed similarly to simpler workflows that applied dimension reduction to manually extracted features from temporal imagery (i.e., vegetation indices and image texture properties). Manifold learning on raw imagery was better suited to predicting phenological traits because image embeddings preserved local structure at some time points. Together, these results highlight the competitiveness of classical ML approaches for UAS image analysis alongside computationally expensive deep learning models. Along with a new benchmark dataset for rice, our results help extend the toolkit for UAS image analysis, contributing to improved phenotype prediction in plant breeding and precision agriculture applications.
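
To make the contrast between the two workflow families concrete, below is a minimal sketch of the classical pipeline the abstract describes: a per-timepoint vegetation index (NDVI) extracted from a multispectral image stack, followed by dimension reduction and a linear yield model. All array shapes, band indices, variable names, and data here are illustrative assumptions, not taken from the paper's dataset or code.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Hypothetical input: per-plot multispectral timeseries of shape
# (n_plots, n_timepoints, n_bands, height, width). Band ordering is
# assumed, with red at index 2 and near-infrared at index 4.
def ndvi_features(stack, red=2, nir=4, eps=1e-8):
    """Reduce each plot's timeseries to one mean NDVI value per timepoint."""
    ndvi = (stack[:, :, nir] - stack[:, :, red]) / (
        stack[:, :, nir] + stack[:, :, red] + eps
    )
    return ndvi.mean(axis=(2, 3))  # -> (n_plots, n_timepoints)

# Random placeholder data standing in for the rice UAS imagery and yields.
rng = np.random.default_rng(0)
stack = rng.random((120, 10, 5, 32, 32))  # 120 plots, 10 flights, 5 bands
yields = rng.random(120)

X = ndvi_features(stack)                  # manual feature extraction
model = make_pipeline(PCA(n_components=5), Ridge(alpha=1.0))
model.fit(X, yields)                      # dimension reduction + linear model
print(model.predict(X[:3]))
```

The spatiotemporal alternative reported in the abstract would instead feed the raw band-by-timepoint image stack directly to a three-dimensional convolutional neural network, trading this hand-crafted feature step for learned features at a higher computational cost.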