IET Computer Vision (Aug 2023)

Denseformer: A dense transformer framework for person re‐identification

  • Haoyan Ma,
  • Xiang Li,
  • Xia Yuan,
  • Chunxia Zhao

DOI: https://doi.org/10.1049/cvi2.12118
Journal volume & issue: Vol. 17, No. 5, pp. 527–536

Abstract

The transformer has shown its effectiveness and advantages in many computer vision tasks, for example image classification and object re‐identification (ReID). However, existing vision transformers are stacked layer by layer, lacking direct information exchange between layers. Inspired by DenseNet, we propose a dense transformer framework (termed Denseformer) that connects each layer to every other layer through class tokens. We demonstrate that Denseformer consistently achieves better performance on person ReID tasks across datasets (Market‐1501, DukeMTMC, MSMT17, and Occluded‐Duke), at only a negligible increase in computation. Denseformer has several compelling advantages: it pays more attention to the main parts of human bodies and obtains discriminative global features.
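The abstract's core idea, DenseNet-style connectivity applied to class tokens, can be sketched minimally. The snippet below is an illustration under stated assumptions, not the paper's implementation: the transformer layers and fusion projections are replaced by random linear maps, and only the class-token pathway is modeled. What it shows is the connectivity pattern itself, where each layer consumes the class tokens of all preceding layers rather than only the previous one.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 8  # embedding dimension (illustrative)
L = 4  # number of transformer layers (illustrative)

# Stand-ins for transformer layers: each maps a class token (D,) -> (D,).
# In the real model these would be full self-attention blocks over all
# patch tokens; random matrices suffice to show the connectivity pattern.
layers = [rng.normal(size=(D, D)) / np.sqrt(D) for _ in range(L)]

# Fusion projections (hypothetical): layer l consumes the concatenation of
# the class tokens from all l+1 earlier stages (dimension (l+1)*D) and
# projects it back to D before the layer is applied.
fusions = [rng.normal(size=(D, k * D)) / np.sqrt(k * D) for k in range(1, L + 1)]

cls0 = rng.normal(size=D)  # initial class token
tokens = [cls0]            # class tokens produced so far, one per stage

for l, layer in enumerate(layers):
    # Dense connection: aggregate the class tokens of ALL preceding
    # stages, not just the immediately previous layer's output.
    fused = fusions[l] @ np.concatenate(tokens)
    tokens.append(layer @ fused)

global_feature = tokens[-1]  # global feature used for ReID matching
print(global_feature.shape)  # (8,)
```

A plain layer-by-layer transformer would compute `tokens.append(layer @ tokens[-1])` instead; the concatenation over all earlier class tokens is exactly the dense shortcut the abstract attributes to Denseformer.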