Leida xuebao (Oct 2024)

Multidomain Characteristic-guided Multimodal Contrastive Recognition Method for Active Radar Jamming

  • Wenjie GUO,
  • Zhenhua WU,
  • Yice CAO,
  • Qiang ZHANG,
  • Lei ZHANG,
  • Lixia YANG

DOI: https://doi.org/10.12000/JR24129
Journal volume & issue: Vol. 13, No. 5, pp. 1004–1018

Abstract


Achieving robust joint utilization of multidomain characteristics and deep-network features while maintaining high jamming-recognition accuracy with limited samples is challenging. To address this issue, this paper proposes a multidomain characteristic-guided multimodal contrastive recognition method for active radar jamming. The method first thoroughly extracts the multidomain characteristics of active jamming and then applies an optimization unit that automatically selects effective characteristics and generates a text modality imbued with implicit expert knowledge. The text modality and the corresponding time-frequency transform image are fed separately into text and image encoders to construct multimodal feature pairs, which are mapped to a high-dimensional space for modal alignment. The text features serve as anchors that guide the time-frequency image features to aggregate around them through contrastive learning, optimizing the image encoder's representation capability and producing compact intraclass and well-separated interclass distributions of active jamming. Experiments show that, compared with existing methods that directly combine multidomain characteristics and deep-network features, the proposed guided joint method achieves differentiated feature processing, thereby enhancing the discriminative power and generalization capability of the recognition features. Moreover, under extremely small-sample conditions (2–3 training samples per jamming type), the accuracy of our method is 9.84% higher than that of the comparative methods, demonstrating its effectiveness and robustness.
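The core alignment step described above, using text features as fixed anchors and pulling time-frequency image features toward them with contrastive learning, can be sketched as an InfoNCE-style loss. The sketch below is not the authors' code; the names `text_feats`, `img_feats`, and the temperature value are assumptions, and the encoders themselves are omitted.

```python
# Minimal sketch (not the paper's implementation) of text-anchored contrastive
# alignment: text features are treated as anchors and the image encoder is
# optimized so each time-frequency image feature aggregates around its paired
# text anchor. Names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn.functional as F


def text_anchored_contrastive_loss(text_feats: torch.Tensor,
                                    img_feats: torch.Tensor,
                                    temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE-style loss over a batch of N paired (text, image) features."""
    # L2-normalize so dot products become cosine similarities.
    text_feats = F.normalize(text_feats, dim=-1).detach()  # anchors: no gradient
    img_feats = F.normalize(img_feats, dim=-1)             # image encoder is updated

    # Similarity matrix: rows index images, columns index text anchors.
    logits = img_feats @ text_feats.t() / temperature      # shape (N, N)

    # The matching text anchor for image i lies on the diagonal.
    targets = torch.arange(img_feats.size(0), device=img_feats.device)
    return F.cross_entropy(logits, targets)


if __name__ == "__main__":
    # Random tensors stand in for text- and image-encoder outputs.
    text = torch.randn(8, 512)
    image = torch.randn(8, 512, requires_grad=True)
    loss = text_anchored_contrastive_loss(text, image)
    loss.backward()
    print(f"contrastive loss: {loss.item():.4f}")
```

Detaching the text anchors reflects the guidance described in the abstract: the gradient flows only into the image branch, so the time-frequency features are drawn toward the expert-knowledge text representations rather than both modalities drifting toward each other.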

Keywords