IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (Jan 2024)
A Novel Fusion Method for Soybean Yield Prediction Using Sentinel-2 and PlanetScope Imagery
Abstract
This study aimed to develop a new method for combining Sentinel-2 (S2) and PlanetScope (PS) imagery. Normalized difference vegetation index (NDVI) data were retrieved from S2 Level-2A and PS Level-3 surface reflectance products during the soybean growing season. The proposed method uses the Python implementation of the data mining sharpener algorithm, a decision-tree-based technique that sharpens low-resolution imagery using information from high-resolution imagery. The robustness and flexibility of a yield estimation model based on multidimensional data fusion, a deep neural network, and machine learning were analyzed against the within-field variability in soybean yield. A comparative analysis revealed that predictions from the fused data, for yields of 1.5–2.5 t/ha, significantly outperformed predictions from the individual sensors, showing higher accuracy and smaller errors. Yields predicted from the fused data had relatively small errors of 0.2–0.5 t/ha, whereas the PS and S2 datasets alone produced higher prediction errors. The study employed vegetation indices, and during validation, crop forecasts were compared against an NDVI map. The results highlight the effectiveness of artificial neural networks in predicting crop yields, with superior performance across diverse datasets compared with other algorithms. This fusion technique supports monitoring crop health and growth, informs agricultural practices such as fertilization and water management, and improves yield forecast accuracy. The study provides valuable insights into phenology monitoring, image fusion accuracy, and the effectiveness of machine-learning algorithms in predicting crop yields, emphasizing the benefits of fused imagery.
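As a minimal, hedged illustration of the two processing steps named in the abstract, the sketch below computes NDVI from red and near-infrared reflectance and then applies a simplified decision-tree regression in the spirit of the data mining sharpener. The band arrays, grid sizes, variable names, and tree parameters are assumptions made for this sketch; it is not the authors' pyDMS workflow.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

def ndvi(nir, red, eps=1e-9):
    """NDVI = (NIR - Red) / (NIR + Red), computed per pixel."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

# Assumed inputs: coarse-resolution NDVI (e.g., derived from S2 reflectance and
# resampled to the fine grid) and fine-resolution predictor bands (e.g., PS
# surface reflectance), already co-registered on the same fine grid.
rng = np.random.default_rng(0)
fine_bands = rng.random((4, 300, 300))             # 4 predictor bands on the fine grid
coarse_ndvi_on_fine_grid = rng.random((300, 300))  # coarse NDVI upsampled to the fine grid

# Simplified decision-tree sharpening: fit a regression tree relating the
# fine-resolution predictors to the coarse NDVI, then predict NDVI at fine
# resolution from the same predictors.
X = fine_bands.reshape(4, -1).T                    # shape (n_pixels, n_bands)
y = coarse_ndvi_on_fine_grid.ravel()               # shape (n_pixels,)
tree = DecisionTreeRegressor(max_depth=10, min_samples_leaf=50).fit(X, y)
sharpened_ndvi = tree.predict(X).reshape(300, 300)

In practice the fine- and coarse-resolution inputs would come from co-registered PS and S2 scenes, and the sharpened NDVI would feed the yield estimation models described in the study.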
Keywords