Mathematics (Nov 2023)

Model Compression Algorithm via Reinforcement Learning and Knowledge Distillation

  • Botao Liu,
  • Bing-Bing Hu,
  • Ming Zhao,
  • Sheng-Lung Peng,
  • Jou-Ming Chang

DOI
https://doi.org/10.3390/math11224589
Journal volume & issue
Vol. 11, no. 22
p. 4589

Abstract

Traditional model compression techniques depend on handcrafted features and domain expertise, and they must trade off model size, speed, and accuracy. This study proposes a new approach to the model compression problem. Our approach combines reinforcement-learning-based automated pruning with knowledge distillation to improve the pruning of unimportant network layers and the efficiency of the compression process. We introduce a new state quantity that controls the magnitude of the reward, together with an attention mechanism that reinforces useful features and attenuates useless ones. Experimental results show that the proposed model outperforms other advanced pruning methods in both computation time and accuracy on the CIFAR-100 and ImageNet datasets, achieving approximately 3% higher accuracy than comparable methods while requiring less computation time.
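The abstract does not spell out the distillation objective. As a minimal sketch of the knowledge-distillation component it describes, assuming the standard softened-logits formulation (Hinton et al.) rather than the paper's exact loss, the student could be trained against a blend of soft teacher targets and hard labels; the function name and hyperparameter values below are illustrative.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Soft-target knowledge-distillation loss (hypothetical sketch,
    not the paper's exact formulation).

    student_logits, teacher_logits: (batch, num_classes) tensors
    labels: (batch,) ground-truth class indices
    temperature: softening factor for both logit distributions
    alpha: weight between the soft (teacher) and hard (label) terms
    """
    # Soft targets: KL divergence between temperature-softened
    # distributions, scaled by T^2 to keep gradient magnitudes stable.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2
    # Hard targets: ordinary cross-entropy with the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```

In a pruning-plus-distillation pipeline of the kind the abstract outlines, this loss would be applied while fine-tuning the pruned (student) network against the original (teacher) network's outputs.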

Keywords