IEEE Access (Jan 2024)
Meta-GCANet: Meta-Graph Coupled Aware Network for Cross-Modality Person Re-Identification
Abstract
Cross-modality person re-identification aims to match person identities between visible and infrared images. However, the significant discrepancy between the two modalities makes accurate matching challenging. Existing cross-modality re-identification methods typically focus on modality-shared features but often neglect modality-specific representations, which slows convergence and reduces re-identification accuracy. To address these issues, this paper proposes a novel Meta-Graph Coupled Aware Network (Meta-GCANet) within a meta-learning framework to improve the network's cross-modality generalization ability. The network consists of two modules: the Multi-Scale Awareness (MSA) module and the Meta-Graph Coupling (M-GC) module. The MSA module enhances the discriminative power of cross-modality features by incorporating multi-scale heterogeneous features from both modalities as global context; this context interacts with local unimodal features to reinforce mutual learning, and a relational knowledge graph is constructed for structural classification. To reduce the discrepancy in modality-shared discriminative features, the M-GC module leverages the similarity between cross-modality features to extract meta-positive and meta-negative samples, which are stacked and interact with modality-shared discriminative feature nodes to construct a coupled node graph. To further differentiate modality-shared representations, we introduce two loss functions: the Meta-Sample Coupling loss (MSCloss) and the Meta-Sample Testing loss (MTloss). These losses jointly constrain and adjust the coupling weights of the feature maps, regulate the variation of modality features, and improve model generalization. Extensive experiments on three cross-modality datasets, SYSU-MM01, RegDB, and LLCM, verify the effectiveness and superiority of the proposed method.
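To make the M-GC module's sample-mining step concrete, the sketch below shows one plausible instantiation of extracting meta-positive and meta-negative samples from cross-modality feature similarity. The abstract does not specify the selection rule, so this assumes a standard hard-mining scheme over cosine similarity; the function name mine_meta_samples, the batched tensor shapes, and the hardest-positive/hardest-negative criterion are illustrative assumptions, not the authors' published procedure.

```python
import torch
import torch.nn.functional as F

def mine_meta_samples(vis_feats, ir_feats, vis_ids, ir_ids):
    """Hypothetical meta-sample mining: for each visible-modality anchor,
    select the least-similar infrared feature with the same identity
    (meta-positive) and the most-similar infrared feature with a
    different identity (meta-negative).

    Assumes each visible anchor has at least one same-identity and one
    different-identity infrared feature in the batch."""
    # Cosine similarity between L2-normalized cross-modality features.
    sim = F.normalize(vis_feats, dim=1) @ F.normalize(ir_feats, dim=1).t()
    same_id = vis_ids.unsqueeze(1) == ir_ids.unsqueeze(0)  # (B_v, B_i) bool

    # Hardest positive: lowest similarity among same-identity pairs.
    pos_idx = sim.masked_fill(~same_id, float('inf')).argmin(dim=1)
    # Hardest negative: highest similarity among different-identity pairs.
    neg_idx = sim.masked_fill(same_id, float('-inf')).argmax(dim=1)

    # Stack the mined samples so they can later interact with the
    # modality-shared feature nodes of a coupled node graph.
    return torch.stack([ir_feats[pos_idx], ir_feats[neg_idx]], dim=1)

# Toy usage: 4 visible and 4 infrared features of dimension 8.
vis = torch.randn(4, 8)
ir = torch.randn(4, 8)
vis_ids = torch.tensor([0, 0, 1, 1])
ir_ids = torch.tensor([0, 1, 1, 0])
meta = mine_meta_samples(vis, ir, vis_ids, ir_ids)
print(meta.shape)  # torch.Size([4, 2, 8])
```

The stacked (B, 2, D) output mirrors the abstract's description of stacking meta-samples before they interact with the modality-shared discriminative feature nodes; how that interaction and the MSCloss/MTloss weighting are realized is left to the body of the paper.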
Keywords