IET Computer Vision (Jun 2024)

Deep network with double reuses and convolutional shortcuts

  • Qian Liu,
  • Cunbao Wang

DOI
https://doi.org/10.1049/cvi2.12260
Journal volume & issue
Vol. 18, no. 4
pp. 512 – 525

Abstract

The authors design a novel convolutional network architecture, the deep network with double reuses and convolutional shortcuts, built from new compressed reuse units. Each compressed reuse unit combines the reused features from its first 3 × 3 convolutional layer with the features from its last 3 × 3 convolutional layer to produce new feature maps; it simultaneously reuses the feature maps from all previous compressed reuse units to generate a shortcut via a 1 × 1 convolution, and then concatenates these new maps with this shortcut as the input to the next compressed reuse unit. The network uses the concatenation of reused features from all compressed reuse units as the final features for classification. In this architecture, the inner‐ and outer‐unit feature reuses, together with the convolutional shortcut compressed from the previous outer‐unit feature reuses, alleviate the vanishing‐gradient problem by strengthening forward feature propagation both inside and outside the units, improve the effectiveness of the features, and reduce computational cost. Experimental results on the CIFAR‐10, CIFAR‐100, ImageNet ILSVRC 2012, Pascal VOC2007 and MS COCO benchmark databases demonstrate the effectiveness of the authors’ architecture for object recognition and detection, as compared with the state of the art.
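The key difference from purely dense reuse is that the outer‐unit concatenation is compressed to a fixed width by the 1 × 1 convolution, so a unit's input does not keep growing with depth. A minimal pure‐Python sketch of the resulting channel bookkeeping, with illustrative parameters (`growth` and `shortcut_ch` are assumptions for the sketch, not the paper's actual configuration):

```python
def cru_channel_trace(num_units, in_ch, growth, shortcut_ch):
    """Trace per-unit input/output channel counts for a stack of
    compressed reuse units (CRUs), as described in the abstract.

    Each CRU:
      * emits `growth` new feature-map channels (inner-unit reuse of
        its first and last 3x3 conv outputs),
      * compresses the concatenation of ALL previous units' new maps
        to `shortcut_ch` channels with a 1x1 conv (outer-unit reuse),
      * passes [new maps, shortcut] to the next unit.
    """
    trace = []
    prev_outputs = []      # channel counts of each earlier unit's new maps
    unit_in = in_ch
    for i in range(num_units):
        new_maps = growth
        # the 1x1 conv caps the outer-unit shortcut at a fixed width,
        # regardless of how many earlier units are being reused
        shortcut = shortcut_ch if prev_outputs else 0
        unit_out = new_maps + shortcut
        trace.append({"unit": i, "in": unit_in, "out": unit_out})
        prev_outputs.append(new_maps)
        unit_in = unit_out
    # classifier input: concatenation of every unit's new feature maps
    final_ch = sum(prev_outputs)
    return trace, final_ch


trace, final_ch = cru_channel_trace(num_units=4, in_ch=16,
                                    growth=32, shortcut_ch=64)
```

With these illustrative numbers, every unit after the first sees a constant 96-channel input (32 new + 64 compressed shortcut) instead of an input that grows linearly with depth, which is how the convolutional shortcut reduces calculation cost while keeping the dense outer‐unit reuse.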

Keywords