Applied Sciences (Nov 2022)

A Novel Filter-Level Deep Convolutional Neural Network Pruning Method Based on Deep Reinforcement Learning

  • Yihao Feng,
  • Chao Huang,
  • Long Wang,
  • Xiong Luo,
  • Qingwen Li

DOI
https://doi.org/10.3390/app122211414
Journal volume & issue
Vol. 12, no. 22
p. 11414

Abstract


Deep neural networks (DNNs) have achieved great success in the field of computer vision. However, their high memory and storage requirements make it difficult to deploy them on mobile or embedded devices. Therefore, compression and structure optimization of deep neural networks have become a hot research topic. To eliminate redundant structures in deep convolutional neural networks (DCNNs), we propose an efficient filter pruning framework based on deep reinforcement learning (DRL). The framework uses the deep deterministic policy gradient (DDPG) algorithm to optimize the filter pruning rate. Its main features are as follows: (1) a tailored reward function that accounts for both the accuracy and the complexity of the DCNN is proposed for training the DDPG agent, and (2) a novel filter sorting criterion based on Taylor expansion is developed for selecting the filters to prune. To demonstrate the effectiveness of the proposed framework, extensive comparative studies are conducted on large public datasets and well-recognized DCNNs. The experimental results show that the Taylor-expansion-based filter sorting criterion clearly outperforms the widely used minimum-weight-based criterion. More importantly, the proposed framework achieves over 10× parameter compression and 3× reduction in floating point operations (FLOPs) while maintaining accuracy similar to that of the original network. Its performance is also promising compared with state-of-the-art DRL-based filter pruning methods.
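To make the two ideas named in the abstract more concrete, the following is a minimal sketch, not the authors' code: it shows a generic first-order Taylor-expansion filter-importance score and a toy reward that trades accuracy against FLOPs, assuming a PyTorch Conv2d layer. The function names (`taylor_filter_importance`, `pruning_reward`) and the trade-off weights `alpha`/`beta` are illustrative assumptions; the paper's exact reward and sorting criterion may differ.

```python
import torch
import torch.nn as nn


def taylor_filter_importance(feature_map: torch.Tensor) -> torch.Tensor:
    """Score each output filter by |activation * gradient| (first-order Taylor term).

    `feature_map` is a conv layer's output for a mini-batch, shape
    (batch, out_channels, H, W), with gradients already populated by a
    backward pass on the task loss (requires feature_map.retain_grad()).
    """
    contribution = (feature_map * feature_map.grad).abs()
    # Average over batch and spatial dimensions -> one score per filter.
    return contribution.mean(dim=(0, 2, 3))


def pruning_reward(accuracy: float, flops: float, flops_orig: float,
                   alpha: float = 1.0, beta: float = 0.5) -> float:
    """Toy reward rewarding high accuracy and penalizing remaining FLOPs.

    alpha and beta are assumed trade-off weights; the reward function in the
    paper is more elaborate.
    """
    return alpha * accuracy - beta * (flops / flops_orig)


if __name__ == "__main__":
    conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
    x = torch.randn(4, 3, 32, 32)
    out = conv(x)
    out.retain_grad()            # keep gradients on the activations
    out.sum().backward()         # stand-in for a real task loss
    scores = taylor_filter_importance(out)
    print("filter importance:", scores)   # lowest-scoring filters are pruned first
    print("reward:", pruning_reward(accuracy=0.92, flops=1.1e9, flops_orig=3.3e9))
```

In this sketch, filters with the smallest importance scores would be removed first, and the scalar reward would be fed back to a DDPG agent that proposes pruning rates; the DDPG training loop itself is omitted.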

Keywords