IEEE Access (Jan 2024)

Channel Pruning of Transfer Learning Models Using Novel Techniques

  • Pragnesh Thaker
  • Biju R. Mohan

DOI
https://doi.org/10.1109/ACCESS.2024.3416997
Journal volume & issue
Vol. 12
pp. 94914–94925

Abstract

This research paper delves into the challenges associated with deep learning models, specifically focusing on transfer learning. Despite the effectiveness of widely used models such as VGGNet, ResNet, and GoogLeNet, their deployment on resource-constrained devices is impeded by high memory bandwidth and computational costs; to overcome these limitations, the study proposes pruning as a viable solution. Although fully connected layers hold numerous parameters, they contribute minimally to computational cost, so the pruning effort focuses on the convolutional layers. The research explores and evaluates three novel pruning methods: the Max3 Saliency pruning method, the K-Means clustering algorithm, and the Singular Value Decomposition (SVD) approach. The Max3 Saliency method introduces a slight variation on standard saliency pruning by using only the three maximum values of each 3×3 kernel, instead of all nine, to compute the saliency score. This method is the most effective, substantially reducing parameters and floating point operations (FLOPs) for both the VGG16 and ResNet56 models: VGG16 demonstrates a remarkable 46.19% reduction in parameters and a 61.91% reduction in FLOPs, while ResNet56 shows a 35.15% reduction in both parameters and FLOPs. The K-Means pruning algorithm is also effective, yielding a 40.00% reduction in parameters and a 49.20% reduction in FLOPs for VGG16, and a 31.01% reduction in both parameters and FLOPs for ResNet56. While the SVD approach produces a new set of values for the condensed channels, its overall pruning ratio is smaller than that of the Max3 Saliency and K-Means methods: it achieves a 20.07% parameter reduction and a 24.64% FLOPs reduction for VGG16, along with a 16.94% reduction in both parameters and FLOPs for ResNet56. Compared with state-of-the-art methods, the Max3 Saliency and K-Means pruning methods perform better on FLOPs reduction metrics.
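The abstract gives no implementation details beyond this description, but the Max3 saliency score can be illustrated with a minimal PyTorch sketch. The use of absolute weight values, the summation across input channels, the pruning ratio, and all function names below are assumptions for illustration, not the authors' published code.

```python
import torch
import torch.nn as nn

def max3_saliency(conv: nn.Conv2d) -> torch.Tensor:
    """Sketch of a Max3-style saliency score: for each output channel,
    keep only the three largest (absolute) weights of every kernel,
    rather than all nine values of a 3x3 kernel, and sum them."""
    w = conv.weight.detach().abs()       # (out_ch, in_ch, kH, kW)
    flat = w.flatten(2)                  # (out_ch, in_ch, kH*kW)
    top3 = flat.topk(3, dim=-1).values   # three largest values per kernel
    return top3.sum(dim=(1, 2))          # one saliency score per output channel

def channels_to_prune(conv: nn.Conv2d, ratio: float = 0.4) -> torch.Tensor:
    """Indices of the lowest-saliency output channels, given a pruning ratio
    (the 0.4 default is an arbitrary illustration, not a value from the paper)."""
    scores = max3_saliency(conv)
    k = int(ratio * scores.numel())
    return scores.argsort()[:k]

# Usage on a single VGG16-style convolution layer:
conv = nn.Conv2d(64, 128, kernel_size=3, padding=1)
print(channels_to_prune(conv, ratio=0.4))
```

Scoring channels this way lets the lowest-ranked ones be removed layer by layer, which is what drives the parameter and FLOPs reductions reported above.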

Keywords