IEEE Access (Jan 2019)

Deep Attention-Guided Hashing

  • Zhan Yang
  • Osolo Ian Raymond
  • Wuqing Sun
  • Jun Long

DOI
https://doi.org/10.1109/ACCESS.2019.2891894
Journal volume & issue
Vol. 7
pp. 11209 – 11221

Abstract

With the rapid growth of multimedia data (e.g., images, audio, and video) on the Web, learning-based hashing techniques such as deep supervised hashing have proven very efficient for large-scale multimedia search. Much of this recent progress is owed to deep learning-based hashing methods. However, previous learning-based hashing methods have limitations; for example, the learned hash codes often contain repetitive and highly correlated information. In this paper, we propose a novel learning-based hashing method, named deep attention-guided hashing (DAgH). DAgH is implemented as a two-stream framework. The core idea is to use the guided hash codes generated by the hashing network of the first stream (the first hashing network) to guide the training of the hashing network of the second stream (the second hashing network). Specifically, the first network leverages an attention network and a hashing network to generate attention-guided hash codes from the original images. The proposed loss function contains two components: a semantic loss and an attention loss. The attention loss penalizes the attention network so that it focuses on the salient regions of image pairs. In the second network, these attention-guided hash codes guide the training of the second hashing network; that is, they are treated as supervised labels for training it. In this way, DAgH can make full use of the most critical information in images to guide the second hashing network toward learning efficient hash codes in a true end-to-end fashion. Experimental results demonstrate that DAgH generates high-quality hash codes and outperforms current state-of-the-art methods on three benchmark datasets: CIFAR-10, NUS-WIDE, and ImageNet.
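
To make the two-stream procedure described above concrete, the following is a minimal PyTorch-style sketch: a first (attention-guided) hashing network is trained with a semantic loss plus an attention loss, and its binarized codes then serve as supervised labels for a second hashing network. All architectures, module names, loss forms, and hyperparameters below are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch of the two-stream idea from the abstract (assumptions throughout).
import torch
import torch.nn as nn

class AttentionHashNet(nn.Module):
    """First stream: attention network + hashing network (toy CNN backbone)."""
    def __init__(self, code_len=48):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8),
        )
        # Spatial attention map highlighting salient regions (assumed form).
        self.attention = nn.Sequential(nn.Conv2d(16, 1, 1), nn.Sigmoid())
        self.hash_layer = nn.Sequential(
            nn.Flatten(), nn.Linear(16 * 8 * 8, code_len), nn.Tanh())

    def forward(self, x):
        feat = self.backbone(x)
        attn = self.attention(feat)          # salient-region mask
        code = self.hash_layer(feat * attn)  # attention-guided continuous code
        return code, attn

class HashNet(nn.Module):
    """Second stream: plain hashing network trained on the guided codes."""
    def __init__(self, code_len=48):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8), nn.Flatten(),
            nn.Linear(16 * 8 * 8, code_len), nn.Tanh(),
        )

    def forward(self, x):
        return self.backbone(x)

def pairwise_semantic_loss(codes, similarity):
    """Pairwise loss: similar pairs get close codes, dissimilar pairs far apart."""
    inner = codes @ codes.t() / codes.size(1)
    return ((inner - similarity) ** 2).mean()

# --- Stage 1: train the first (attention-guided) hashing network -------------
images = torch.randn(8, 3, 32, 32)                      # toy image batch
similarity = (torch.rand(8, 8) > 0.5).float() * 2 - 1   # +1 similar / -1 dissimilar
net1 = AttentionHashNet()
opt1 = torch.optim.Adam(net1.parameters(), lr=1e-3)
for _ in range(5):
    codes, attn = net1(images)
    sem_loss = pairwise_semantic_loss(codes, similarity)
    attn_loss = attn.mean()                              # placeholder attention penalty
    (sem_loss + 0.1 * attn_loss).backward()
    opt1.step()
    opt1.zero_grad()

# --- Stage 2: guided codes act as supervision for the second network ---------
with torch.no_grad():
    guided_codes = torch.sign(net1(images)[0])           # binarized attention-guided codes
net2 = HashNet()
opt2 = torch.optim.Adam(net2.parameters(), lr=1e-3)
for _ in range(5):
    loss = ((net2(images) - guided_codes) ** 2).mean()   # codes as supervised labels
    loss.backward()
    opt2.step()
    opt2.zero_grad()
```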

Keywords