Systems Science & Control Engineering (Dec 2024)
Advanced segmentation models for automated capsicum peduncle detection in night-time greenhouse environments
Abstract
This research addresses the challenges of capsicum peduncle detection in night-time greenhouse environments, including low light, uneven illumination, and shadows, using advanced computer vision models. A dataset of 200 images was curated, capturing diverse distances, heights, occlusion levels, and lighting conditions, and was rigorously pre-processed and augmented. Two YOLOv9 instance segmentation variants, YOLOv9c-seg and YOLOv9e-seg, were custom-trained and fine-tuned using Google Colaboratory. YOLOv9c-seg (56.3 MB) achieved superior mean Average Precision (mAP) scores of 0.751 (box) and 0.725 (mask), outperforming the larger YOLOv9e-seg (121.9 MB), which scored 0.674 (box) and 0.658 (mask). Grounded SAM, a zero-shot segmentation model, achieved maximum peduncle detection confidences of 59% and 49% when guided by positional prompts. Comparative testing on 50 images containing 70 capsicums showed YOLOv9c-seg achieving mean precision, recall, and F1-scores of 0.93, 0.86, and 0.89, respectively, outperforming Grounded SAM (0.86, 0.70, and 0.77). This study demonstrates the relative efficacy of single-shot versus zero-shot segmentation models for automated capsicum peduncle detection in controlled agricultural environments, offering insights into model performance and future research directions for model optimization and dataset expansion.
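As a point of reference (not part of the original study), the reported F1-scores follow directly from the stated precision and recall values as their harmonic mean; the short Python sketch below reproduces the 0.89 (YOLOv9c-seg) and 0.77 (Grounded SAM) figures from the precision and recall numbers given in the abstract.

def f1_score(precision: float, recall: float) -> float:
    # F1 is the harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)

# Mean precision and recall reported in the abstract for the 50-image test set.
print(f"YOLOv9c-seg  F1: {f1_score(0.93, 0.86):.2f}")  # prints 0.89
print(f"Grounded SAM F1: {f1_score(0.86, 0.70):.2f}")  # prints 0.77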
Keywords