IEEE Access (Jan 2019)
Learning Sparse Convolutional Neural Network via Quantization With Low Rank Regularization
Abstract
As tasks in artificial intelligence grow more demanding, computation and storage costs increase exponentially. The resource requirements of complicated neural networks therefore severely hinder their deployment on power-limited devices, so there is a pressing need to compress and accelerate deep networks. Considering the complementary characteristics of weight quantization and sparse regularization, in this paper we propose a low rank sparse quantization (LRSQ) method that quantizes network weights and regularizes the corresponding structures at the same time. Our LRSQ can: 1) produce low-bit quantized networks that reduce memory and computation cost and 2) learn a compact structure from complex convolutional networks for subsequent channel pruning, which significantly reduces FLOPs. In the experimental sections, we evaluate the proposed method on several popular models, including VGG-7/16/19 and ResNet-18/34/50, and the results show that it dramatically reduces the parameters and channels of the networks with only a slight loss in inference accuracy. Furthermore, we visualize and analyze the four-dimensional weight tensors, which exhibit low-rank and group-sparsity structure. Finally, we prune the unimportant channels, i.e., the zero channels in our quantized model, and obtain accuracy even slightly better than that of the standard full-precision network.
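To make the idea concrete, the following is a minimal PyTorch-style sketch of the two ingredients named above: a channel-wise group-sparsity penalty combined with a nuclear-norm (low-rank) penalty on a reshaped 4-D convolution weight, plus detection of zero channels that become candidates for pruning. The function names, the regularization weights, and the tolerance are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def low_rank_group_sparse_penalty(weight, lam_lr=1e-4, lam_gs=1e-4):
    # Illustrative regularizer (not the paper's exact loss) on a 4-D conv
    # weight of shape (out_channels, in_channels, kH, kW).
    w2d = weight.reshape(weight.size(0), -1)          # one row per output channel
    low_rank = torch.linalg.svdvals(w2d).sum()        # nuclear norm -> low rank
    group_sparse = w2d.norm(dim=1).sum()              # L2,1 norm -> whole channels shrink to zero
    return lam_lr * low_rank + lam_gs * group_sparse

def zero_channel_indices(weight, tol=1e-8):
    # Channels whose weights are numerically all zero can be pruned afterwards.
    per_channel_norm = weight.reshape(weight.size(0), -1).norm(dim=1)
    return (per_channel_norm <= tol).nonzero(as_tuple=True)[0]
```

In practice such a penalty would be added to the task loss during quantized training, and the zero channels reported by the second function would be removed in a subsequent pruning pass.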
Keywords