IEEE Access (Jan 2025)
Tailored Channel Pruning: Achieve Targeted Model Complexity Through Adaptive Sparsity Regularization
Abstract
In deep learning, the size and complexity of neural networks have rapidly increased to achieve higher performance. However, this poses a challenge when such networks are deployed in resource-limited environments, such as mobile devices, particularly when trying to preserve the network’s performance. To address this problem, structured pruning has been widely studied, as it effectively reduces network size with little impact on performance. To maximize a model’s performance under limited resources, it is crucial to 1) utilize all available resources and 2) maximize accuracy within those limits. However, existing pruning methods often require repeated cycles of training and pruning, extensive experimentation to find hyperparameters that satisfy a given budget, or forcible truncation of parameters to meet the budget, resulting in performance loss. To solve this problem, we propose a novel channel pruning method called Tailored Channel Pruning. Given a target budget (e.g., FLOPs or parameter count), our method automatically accounts for the budget during training and outputs a tailored network that satisfies it. During the integrated training and pruning process, our method adaptively controls sparsity regularization and selects important weights that maximize accuracy within the target budget. Through experiments on the CIFAR-10 and ImageNet datasets, we demonstrate the effectiveness of the proposed method and achieve state-of-the-art accuracy after pruning.
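The budget-aware adaptive control described above can be sketched at a high level as follows. This is a minimal illustrative sketch, not the paper's actual algorithm: the multiplicative update rule, the function names, and the magnitude-based channel selection criterion are all assumptions introduced here for exposition.

```python
import numpy as np

def adaptive_lambda(current_flops, target_flops, lam, rate=0.1):
    """Hypothetical sparsity-coefficient update: strengthen the
    regularization pressure while the model is over budget, and
    relax it once the target budget is met."""
    if current_flops > target_flops:
        return lam * (1.0 + rate)
    return lam * (1.0 - rate)

def select_channels(scales, keep_ratio):
    """Illustrative importance-based selection: keep the channels
    with the largest (assumed) importance scores, e.g. learned
    per-channel scale factors."""
    k = max(1, int(len(scales) * keep_ratio))
    keep = np.argsort(scales)[::-1][:k]   # indices of top-k channels
    return np.sort(keep)

# Example: over budget, so the sparsity coefficient grows.
lam = adaptive_lambda(current_flops=2e9, target_flops=1e9, lam=1e-4)

# Example: keep the 2 most important of 4 channels.
kept = select_channels(np.array([0.10, 0.90, 0.05, 0.70]), keep_ratio=0.5)
```

In this sketch, increasing the coefficient while over budget drives more channel scales toward zero, so that pruning the lowest-scoring channels lands the network on the target budget without a separate hyperparameter search.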
Keywords