IEEE Access (Jan 2024)

Deep Self-Supervised Hashing With Fine-Grained Similarity Mining for Cross-Modal Retrieval

  • Lijun Han,
  • Renlin Wang,
  • Chunlei Chen,
  • Huihui Zhang,
  • Yujie Zhang,
  • Wenfeng Zhang

DOI
https://doi.org/10.1109/ACCESS.2024.3371173
Journal volume & issue
Vol. 12
pp. 31756–31770

Abstract

Owing to their storage efficiency and retrieval speed, hashing methods have attracted considerable attention in cross-modal retrieval applications. In contrast to traditional cross-modal hashing based on handcrafted features, deep cross-modal hashing combines the advantages of deep learning and hashing to encode raw multimodal data into compact binary codes while preserving semantic information. However, most existing deep cross-modal hashing methods define the semantic similarity between heterogeneous modalities simply by counting shared semantic labels (e.g., two samples are similar if they share at least one label and dissimilar otherwise), which fails to capture the accurate multi-label semantic relations between heterogeneous data. In this paper, we propose a new Deep Self-supervised Hashing with Fine-grained Similarity Mining (DSH-FSM) framework that efficiently preserves fine-grained multi-label semantic similarity and learns a highly separable embedding space. Specifically, by employing an asymmetric guidance strategy, a novel Semantic-Network is introduced into cross-modal hashing to learn two semantic dictionaries, a semantic feature dictionary and a semantic code dictionary, which guide the Image-Network and the Text-Network to capture multi-label semantic relevance across modalities. Based on the obtained semantic dictionaries, an asymmetric margin-scalable loss is proposed to exploit fine-grained pair-wise similarity information, which contributes to producing similarity-preserving and discriminative binary codes. In addition, two feature extractors with transformer encoders are designed to implement the Image-Network and the Text-Network, extracting representative semantic characteristics from raw heterogeneous samples. Extensive experimental results on various benchmark datasets show that the proposed DSH-FSM framework achieves state-of-the-art cross-modal similarity search performance. Compared to the state-of-the-art methods, mAP is improved by 1.9%, 9.1%, and 9.8%, respectively, on the three widely used datasets.
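As a rough illustration of the fine-grained similarity idea described in the abstract (not the authors' implementation), the sketch below derives a soft pairwise similarity from multi-hot label vectors, rather than the coarse "similar iff at least one shared label" rule, and uses it to scale the margin of a pairwise loss over hash codes. All function names, the exact loss form, and the constants are assumptions for illustration only.

```python
# Minimal sketch, assuming a cosine-based label similarity and a margin that
# scales with that similarity; the loss in DSH-FSM may differ in detail.
import torch
import torch.nn.functional as F


def fine_grained_similarity(labels_a: torch.Tensor, labels_b: torch.Tensor) -> torch.Tensor:
    """Soft similarity in [0, 1] from multi-hot label vectors (cosine of label
    vectors), instead of a binary shared-label indicator."""
    a = F.normalize(labels_a.float(), dim=1)
    b = F.normalize(labels_b.float(), dim=1)
    return a @ b.t()  # (n_a, n_b) pairwise similarity matrix


def margin_scalable_loss(codes_a: torch.Tensor, codes_b: torch.Tensor,
                         sim: torch.Tensor, base_margin: float = 0.5) -> torch.Tensor:
    """Pairwise loss whose margin grows with the fine-grained similarity:
    highly similar pairs are pulled closer, dissimilar pairs pushed apart."""
    cos = F.normalize(codes_a, dim=1) @ F.normalize(codes_b, dim=1).t()
    margin = base_margin * sim                      # margin scales with similarity
    pos = sim * F.relu(margin - cos)                # similar pairs: enforce cos >= margin
    neg = (1.0 - sim) * F.relu(cos)                 # dissimilar pairs: push cosine toward 0
    return (pos + neg).mean()


# Toy usage with random image/text hash-code embeddings and multi-hot labels.
img_codes, txt_codes = torch.randn(8, 64), torch.randn(8, 64)
labels = torch.randint(0, 2, (8, 24))
S = fine_grained_similarity(labels, labels)
loss = margin_scalable_loss(img_codes, txt_codes, S)
print(loss.item())
```

In this sketch the similarity matrix plays the role of the fine-grained supervision that the paper attributes to its semantic dictionaries; in the actual framework that signal is learned by the Semantic-Network and transferred asymmetrically to the Image-Network and Text-Network.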

Keywords