Smart Agriculture (智慧农业) (Mar 2024)

Zero-Shot Pest Identification Based on Generative Adversarial Networks and Visual-Semantic Alignment

  • LI Tianjun,
  • YANG Xinting,
  • CHEN Xiao,
  • HU Huan,
  • ZHOU Zijie,
  • LI Wenyong

DOI
https://doi.org/10.12133/j.smartag.SA202312014
Journal volume & issue
Vol. 6, no. 2
pp. 72–84

Abstract

Objective

Accurate identification of insect pests is crucial for the effective prevention and control of crop infestations. However, existing pest identification methods rely primarily on traditional machine learning or deep learning techniques trained on seen classes. These methods falter when they encounter unseen pest species absent from the training set, because no image samples are available for them. An innovative method was proposed to address this zero-shot recognition challenge for pests.

Methods

The zero-shot learning (ZSL) method proposed in this study was capable of identifying unseen pest species. First, a comprehensive pest image dataset was assembled, sourced from field photography conducted around Beijing over several years and from web crawling. The final dataset consisted of 2,000 images across 20 classes of adult Lepidoptera insects, with 100 images per class. During data preprocessing, a semantic dataset was manually curated by defining attributes related to color, pattern, size, and shape for six parts: antennae, back, tail, legs, wings, and overall appearance. Each image was annotated to form a 65-dimensional attribute vector for each class, yielding a 20×65 semantic attribute matrix in which rows represented classes and columns represented attribute values. Sixteen classes were then designated as seen classes and four as unseen classes (a minimal sketch of this layout is given after this section).

Next, a novel zero-shot pest recognition method was proposed, focused on using a generator to synthesize high-quality pseudo-visual features aligned with semantic information. The Wasserstein generative adversarial network (WGAN) architecture was employed as the backbone network. Conventional generative adversarial networks (GANs) are known to suffer from training instability, mode collapse, and convergence issues, which can severely hinder their performance and applicability; the WGAN architecture addresses these limitations through a principled reformulation of the objective function.

In the proposed method, a contrastive module was designed to capture highly discriminative visual features that could effectively distinguish between insect classes. It operated by creating positive and negative pairs of instances within a batch: positive pairs consisted of different views of the same class, while negative pairs were formed from instances belonging to different classes. The contrastive loss function encouraged the learned representations of positive pairs to be similar while pushing the representations of negative pairs apart. Tightly integrated with the WGAN structure, this module substantially improved the generation quality of the generator.

Furthermore, a visual-semantic alignment module enforced consistency constraints from both visual and semantic perspectives. This module constructed a cross-modal embedding space, mapping visual and semantic features via two projection layers: one for mapping visual features into the cross-modal space, and another for mapping semantic features. The visual projection layer took the synthesized pseudo-visual features from the generator as input, while the semantic projection layer ingested the class-level semantic vectors. Within this cross-modal embedding space, the module enforced two key constraints: maximizing the similarity between same-class visual-semantic pairs and minimizing the similarity between different-class pairs. This was achieved through a loss function that encouraged the projected visual and semantic representations of the same class to be closely aligned while pushing apart the representations of different classes. The visual-semantic alignment module thus acted as a regularizer, preventing the generator from producing features that deviated from the desired semantic information. This regularization complemented the discriminative power gained from the contrastive module, yielding a generator that produced high-quality, diverse, and semantically aligned pseudo-visual features. Minimal sketches of the WGAN backbone, the contrastive module, and the alignment module follow.
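The dataset layout described above can be pictured as follows. This is a minimal NumPy sketch with placeholder attribute values and a random class split, not the paper's actual annotations or partition.

```python
# Sketch of the 20-class x 65-attribute semantic matrix and the
# 16-seen / 4-unseen class split (values and split are placeholders).
import numpy as np

n_classes, n_attrs = 20, 65
rng = np.random.default_rng(0)

# Rows: classes; columns: attributes covering antennae, back, tail,
# legs, wings, and overall appearance (color / pattern / size / shape).
attr_matrix = rng.random((n_classes, n_attrs)).astype(np.float32)

# 16 classes for training (seen), 4 held out for zero-shot testing (unseen).
class_ids = rng.permutation(n_classes)
seen_classes, unseen_classes = class_ids[:16], class_ids[16:]
print(attr_matrix.shape, seen_classes.size, unseen_classes.size)
```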
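The feature-generating backbone can be sketched as a conditional WGAN. The abstract specifies only that a WGAN objective is used; the gradient-penalty variant, layer sizes, and the 2048-dimensional feature size shown here are illustrative assumptions.

```python
# Minimal conditional WGAN sketch: synthesize pseudo-visual features
# from a class attribute vector plus noise (hyperparameters assumed).
import torch
import torch.nn as nn

ATTR_DIM, NOISE_DIM, FEAT_DIM = 65, 64, 2048  # 2048 ~ a CNN feature size (assumed)

class Generator(nn.Module):
    """Maps a class attribute vector plus noise to a pseudo-visual feature."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(ATTR_DIM + NOISE_DIM, 1024), nn.LeakyReLU(0.2),
            nn.Linear(1024, FEAT_DIM), nn.ReLU())

    def forward(self, attrs, noise):
        return self.net(torch.cat([attrs, noise], dim=1))

class Critic(nn.Module):
    """Scores (feature, attribute) pairs under the Wasserstein objective."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEAT_DIM + ATTR_DIM, 1024), nn.LeakyReLU(0.2),
            nn.Linear(1024, 1))

    def forward(self, feats, attrs):
        return self.net(torch.cat([feats, attrs], dim=1))

def gradient_penalty(critic, real, fake, attrs):
    # WGAN-GP: keep the critic's gradient norm near 1 along interpolations
    # between real and generated features.
    alpha = torch.rand(real.size(0), 1)
    interp = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    grad, = torch.autograd.grad(critic(interp, attrs).sum(), interp, create_graph=True)
    return ((grad.norm(2, dim=1) - 1) ** 2).mean()

G, D = Generator(), Critic()
real_feats = torch.randn(8, FEAT_DIM)             # placeholder CNN features
attrs = torch.rand(8, ATTR_DIM)                   # per-image class attribute vectors
fake_feats = G(attrs, torch.randn(8, NOISE_DIM))

# Critic loss (estimate of the negative Wasserstein distance plus penalty)
# and generator loss for one training step.
d_loss = (D(fake_feats.detach(), attrs).mean() - D(real_feats, attrs).mean()
          + 10.0 * gradient_penalty(D, real_feats, fake_feats.detach(), attrs))
g_loss = -D(fake_feats, attrs).mean()
print(d_loss.item(), g_loss.item())
```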
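The contrastive module's behavior, pulling same-class representations together and pushing different-class ones apart within a batch, can be illustrated with a supervised contrastive loss. The temperature and normalization choices here are assumptions.

```python
# Supervised-contrastive loss sketch in the spirit of the contrastive module.
import torch
import torch.nn.functional as F

def contrastive_loss(features, labels, temperature=0.1):
    """Pull same-class features together, push different-class features apart."""
    z = F.normalize(features, dim=1)                 # cosine-similarity space
    sim = z @ z.t() / temperature                    # pairwise similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    sim = sim.masked_fill(self_mask, float('-inf'))  # exclude self-pairs
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    # Average log-probability of the positives for each anchor that has one.
    pos_counts = pos_mask.sum(1).clamp(min=1)
    loss = -(log_prob.masked_fill(~pos_mask, 0.0).sum(1) / pos_counts)
    return loss[pos_mask.any(1)].mean()

feats = torch.randn(8, 128)                 # e.g. projected (pseudo-)visual features
labels = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
print(contrastive_loss(feats, labels).item())
```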
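The visual-semantic alignment module can be sketched as two projection layers feeding a cross-modal objective that favors same-class visual-semantic pairs over different-class pairs. The embedding dimension and the InfoNCE-style formulation are assumptions, not the paper's exact loss.

```python
# Cross-modal alignment sketch: project both modalities into a shared
# embedding space and align each feature with its own class semantics.
import torch
import torch.nn as nn
import torch.nn.functional as F

FEAT_DIM, ATTR_DIM, EMBED_DIM = 2048, 65, 256

visual_proj = nn.Linear(FEAT_DIM, EMBED_DIM)    # maps (pseudo-)visual features
semantic_proj = nn.Linear(ATTR_DIM, EMBED_DIM)  # maps class attribute vectors

def alignment_loss(visual_feats, class_attrs, temperature=0.1):
    """Each visual feature should match its own class's attribute embedding
    more closely than any other class's embedding."""
    v = F.normalize(visual_proj(visual_feats), dim=1)
    s = F.normalize(semantic_proj(class_attrs), dim=1)
    logits = v @ s.t() / temperature        # visual-vs-class similarities
    targets = torch.arange(v.size(0))       # i-th feature pairs with i-th class
    return F.cross_entropy(logits, targets)

pseudo_feats = torch.randn(4, FEAT_DIM)     # one synthesized feature per class
attrs = torch.rand(4, ATTR_DIM)             # the matching class semantic vectors
print(alignment_loss(pseudo_feats, attrs).item())
```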
Results and Discussions

The proposed method was evaluated on several popular ZSL benchmarks, including CUB, AWA, FLO, and SUN. It achieved state-of-the-art performance across these datasets, with a maximum improvement of 2.8% over the previous best method, CE-GZSL, demonstrating its effectiveness across different benchmarks and its strong generalization ability. On the self-constructed 20-class insect dataset, the method also exhibited high recognition accuracy. Under the standard ZSL setting, it achieved a recognition accuracy of 77.4%, outperforming CE-GZSL by 2.1%. Under the generalized ZSL setting, it achieved a harmonic mean accuracy of 78.3%, a 1.2% improvement. This metric, the harmonic mean H = 2SU/(S+U) of seen-class accuracy S and unseen-class accuracy U (a short helper after the Conclusions illustrates the computation), provided a balanced assessment of the model's performance across seen and unseen classes, ensuring that high accuracy on unseen classes did not come at the cost of forgetting seen classes. These results on the pest dataset, together with the performance on public benchmarks, validated the effectiveness of the proposed method.

Conclusions

The proposed zero-shot pest recognition method represents a step forward in addressing the challenges of pest management. It effectively generalized pest visual features to unseen classes, enabling zero-shot pest recognition. It can facilitate pest identification tasks that lack training samples, thereby assisting in the discovery and prevention of novel crop pests. Future research will focus on expanding the range of pest species to further enhance the model's practical applicability.
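For reference, the generalized ZSL harmonic mean cited above combines seen- and unseen-class accuracies as follows; the input numbers here are illustrative, not the paper's per-class results.

```python
# Harmonic mean of seen- and unseen-class accuracy for generalized ZSL.
def harmonic_mean(seen_acc, unseen_acc):
    return 2 * seen_acc * unseen_acc / (seen_acc + unseen_acc)

print(harmonic_mean(0.80, 0.767))  # ~0.783, i.e. an H of about 78.3%
```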

Keywords