IEEE Access (Jan 2022)

Deep Label Feature Fusion Hashing for Cross-Modal Retrieval

  • Dongxiao Ren,
  • Weihua Xu,
  • Zhonghua Wang,
  • Qinxiu Sun

DOI
https://doi.org/10.1109/ACCESS.2022.3208147
Journal volume & issue
Vol. 10
pp. 100276 – 100285

Abstract

The rapid growth of multi-modal data in recent years has created a strong demand for retrieving semantically related data across different modalities. Cross-modal hashing has therefore attracted extensive interest and study owing to its fast retrieval speed and good accuracy. Most existing cross-modal hashing models simply apply neural networks to extract features from the original data, ignoring the unique semantic information that labels attach to each data item. To better capture the semantic correlation between data of different modalities, a novel cross-modal hashing model called deep label feature fusion hashing (DLFFH) is proposed in this article. By building label networks within the networks of each modality for feature fusion, semantic label information can be effectively embedded into the data features. The fused features capture the semantic correlation between data more accurately and bridge the semantic gap, thus improving cross-modal retrieval performance. In addition, feature label branches and a corresponding feature label loss are constructed to ensure that the generated hash codes are discriminative. Extensive experiments on three widely used datasets demonstrate the superiority of the proposed DLFFH, which outperforms most existing cross-modal hashing models.
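The central idea of the abstract, fusing a modality feature with a label embedding before binarizing into a hash code, can be sketched as below. This is a minimal illustration only: the function names, the concatenation-plus-linear-projection fusion scheme, and the toy dimensions are assumptions for exposition, not the authors' actual DLFFH network architecture.

```python
# Hypothetical sketch of label-feature fusion for hashing: a modality
# feature is fused with a label embedding, projected into hash space,
# and binarized with sign() to produce a +/-1 hash code.

def fuse_and_hash(feature, label_embedding, projection):
    """Concatenate a data feature with its label embedding, apply a linear
    projection (one row per hash bit), and take signs to get +/-1 bits."""
    fused = feature + label_embedding          # list concatenation = fusion
    scores = [sum(w * x for w, x in zip(row, fused)) for row in projection]
    return [1 if s >= 0 else -1 for s in scores]

def hamming(code_a, code_b):
    """Hamming distance between two hash codes: the usual retrieval metric."""
    return sum(a != b for a, b in zip(code_a, code_b))

# Toy usage: a 2-d image feature fused with a 2-d label embedding,
# hashed into 3 bits by a fixed projection matrix.
image_code = fuse_and_hash([0.5, -1.0], [1.0, 0.0],
                           [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]])
print(image_code)  # -> [1, -1, 1]
```

At retrieval time, codes from different modalities (e.g. an image code and a text code) would be compared with `hamming`; because the label embedding is fused in before projection, items sharing labels are pushed toward nearby codes.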

Keywords