IEEE Access (Jan 2019)

CNN Compression-Recovery Framework via Rank Allocation Decomposition With Knowledge Transfer

  • Zhonghong Ou,
  • Yunfeng Liu,
  • Huihui Kong,
  • Meina Song,
  • Yingxia Shao

DOI: https://doi.org/10.1109/ACCESS.2019.2932773
Journal volume & issue: Vol. 7, pp. 105470–105478

Abstract


Low-rank decomposition is an effective way to decrease the model size of convolutional neural networks (CNNs). Nevertheless, selecting a layer-specific rank is difficult, because the layers are not equally redundant. Previous methods are mostly manual, require expertise, or do not account for the different sensitivity of each layer. This paper proposes a rank allocation decomposition (RAD) method that decomposes a network by allocating a rank to each layer automatically. The idea is to transform the combinatorial optimization problem of rank selection into a constrained optimal search problem, which can be solved by a greedy algorithm. To recover the accuracy of the decomposed network, a novel knowledge-transfer-based approach, named SchoolNet, is introduced. It aligns the outputs and intermediate responses of the original (teacher) network with those of its compressed (student) network, while transferring dark knowledge from a strong, highly accurate (headmaster) network to the student network. Experimental results on several advanced models, including AlexNet, VGG-16, and ResNet-50, demonstrate that our scheme reduces parameters significantly while maintaining a high accuracy level. Specifically, for VGG-16 on the Birds-200 dataset, we achieve a 48× compression rate with a 0.13% top-1 accuracy improvement, which outperforms the state of the art remarkably.
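To make the rank-allocation idea concrete, the sketch below shows one plausible greedy scheme: starting from full-rank truncated-SVD factorizations, it repeatedly lowers the rank of the layer whose discarded singular value carries the least relative energy until a global parameter budget is met. This is only a minimal illustration of a greedy constrained search over per-layer ranks; the function names, the energy-based proxy for accuracy loss, and the budget formulation are assumptions, not the exact RAD procedure from the paper.

    # A minimal sketch of greedy rank allocation under a parameter budget,
    # assuming each layer's weights have already been reshaped to a 2-D matrix.
    # Names and the singular-value-energy heuristic are illustrative only.
    import numpy as np

    def greedy_rank_allocation(weights, budget_ratio=0.25):
        """Assign a rank to each layer so that the total parameter count of the
        rank-truncated factorizations stays within `budget_ratio` of the original."""
        svals = [np.linalg.svd(W, compute_uv=False) for W in weights]
        ranks = [len(s) for s in svals]                  # start at full rank
        budget = budget_ratio * sum(W.size for W in weights)

        def params(i, r):                                # cost of a rank-r factorization
            m, n = weights[i].shape
            return r * (m + n)

        while sum(params(i, r) for i, r in enumerate(ranks)) > budget:
            # Greedy step: drop one rank from the layer whose smallest kept
            # singular value contributes the least relative energy.
            best, best_cost = None, np.inf
            for i, r in enumerate(ranks):
                if r <= 1:
                    continue
                cost = svals[i][r - 1] / svals[i].sum()  # proxy for accuracy loss
                if cost < best_cost:
                    best, best_cost = i, cost
            if best is None:
                break
            ranks[best] -= 1
        return ranks

A search of this kind replaces the combinatorial enumeration of all per-layer rank combinations with a sequence of locally cheapest reductions, which is what makes automatic, layer-sensitive rank selection tractable.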

Keywords