Agriculture (Sep 2024)

Hybrid-AI and Model Ensembling to Exploit UAV-Based RGB Imagery: An Evaluation of Sorghum Crop’s Nitrogen Content

  • Hajar Hammouch,
  • Suchitra Patil,
  • Sunita Choudhary,
  • Mounim A. El-Yacoubi,
  • Jan Masner,
  • Jana Kholová,
  • Krithika Anbazhagan,
  • Jiří Vaněk,
  • Huafeng Qin,
  • Michal Stočes,
  • Hassan Berbia,
  • Adinarayana Jagarlapudi,
  • Magesh Chandramouli,
  • Srinivas Mamidi,
  • KVSV Prasad,
  • Rekha Baddam

DOI
https://doi.org/10.3390/agriculture14101682
Journal volume & issue
Vol. 14, no. 10
p. 1682

Abstract

Non-invasive crop analysis through image-based methods holds great promise for applications in plant research, yet accurate and robust trait inference from images remains a critical challenge. Our study investigates the potential of AI model ensembling and hybridization approaches to infer sorghum crop traits from RGB images acquired via unmanned aerial vehicle (UAV). We cultivated 21 sorghum cultivars in two independent seasons (2021 and 2022) under a gradient of fertilizer and water inputs. We collected 470 ground-truth nitrogen (N) measurements and captured corresponding RGB images with a drone-mounted camera. We computed five RGB vegetation indices, employed several machine learning models, including multiple linear regression (MLR), a multi-layer perceptron (MLP), and various convolutional neural network (CNN) architectures (season 2021), and compared their prediction accuracy for N inference on the independent test set (season 2022). We assessed strategies that leveraged both deep and handcrafted features, namely hybridized and ensembled AI architectures. Our approach considered two datasets collected in the two seasons (2021 and 2022), with the training set drawn from the first season only. This allowed us to test the models' robustness, particularly their sensitivity to concept drift, in the independent season (2022), which is fundamental for practical agricultural applications. Our findings underscore the superiority of hybrid and ensembled AI algorithms in these experiments: the MLP + CNN-VGG16 combination achieved the best accuracy (R2 = 0.733, MAE = 0.264 N% on an independent dataset). This study demonstrates that carefully crafted AI-based models applied to RGB images can achieve robust trait prediction, with accuracies comparable to those reported in the current literature for similar phenotyping tasks using more complex (multi- and hyperspectral) sensors.

Keywords