Big Data and Cognitive Computing (Jun 2023)

Is My Pruned Model Trustworthy? PE-Score: A New CAM-Based Evaluation Metric

Cesar G. Pachon, Diego Renza, Dora Ballesteros

DOI: https://doi.org/10.3390/bdcc7020111
Journal volume & issue: Vol. 7, No. 2, p. 111

Abstract

One of the strategies adopted to compress CNN models for image classification tasks is pruning, in which some elements, channels, or filters of the network are discarded. Typically, pruning methods report results in terms of model performance before and after pruning (assessed by accuracy or a related measure such as the F1-score), assuming that if the difference is below a certain value (e.g., 2%), the pruned model is trustworthy. However, state-of-the-art methods do not measure the actual impact of pruning on the network by evaluating the pixels the model uses to make its decision, or the confidence of the predicted class itself. Consequently, this paper presents a new metric, called the Pruning Efficiency score (PE-score), which identifies whether a pruned model preserves the behavior (i.e., the extracted patterns) of the unpruned model through visualization and interpretation with CAM-based methods. With the proposed metric, it is possible to better compare pruning methods for CNN-based image classification models, as well as to verify whether the pruned model remains efficient by focusing on the same patterns (pixels) as the original model, even though it has fewer parameters and FLOPs.
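To make the idea concrete, the sketch below illustrates the general workflow the abstract describes: compute a CAM-style heatmap for the unpruned and the pruned model on the same input and quantify how much the highlighted regions agree. This is only an illustrative assumption-laden sketch, not the paper's PE-score definition; the use of plain Grad-CAM, the ResNet-18 models, and the cosine-similarity agreement measure are all placeholders.

```python
# Illustrative sketch (not the paper's PE-score formula): compare the Grad-CAM
# heatmaps of an unpruned and a pruned CNN on the same image and report how
# much the highlighted pixels agree. Model choices and the similarity measure
# are assumptions for demonstration only.
import numpy as np
import torch
import torch.nn.functional as F
from torchvision import models


def grad_cam(model, image, target_layer, class_idx=None):
    """Plain Grad-CAM: weight the target layer's activations by the gradient
    of the chosen class score, average over channels, and apply ReLU."""
    activations, gradients = {}, {}

    def fwd_hook(_, __, out):
        activations["a"] = out.detach()

    def bwd_hook(_, grad_in, grad_out):
        gradients["g"] = grad_out[0].detach()

    h1 = target_layer.register_forward_hook(fwd_hook)
    h2 = target_layer.register_full_backward_hook(bwd_hook)

    model.zero_grad()
    logits = model(image)                      # shape (1, num_classes)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    logits[0, class_idx].backward()

    h1.remove()
    h2.remove()

    weights = gradients["g"].mean(dim=(2, 3), keepdim=True)   # (1, C, 1, 1)
    cam = F.relu((weights * activations["a"]).sum(dim=1))     # (1, H, W)
    cam = cam / (cam.max() + 1e-8)                            # normalize to [0, 1]
    return cam.squeeze(0).cpu().numpy(), class_idx


def heatmap_agreement(cam_a, cam_b):
    """Cosine similarity between flattened heatmaps: one simple proxy for
    'the pruned model looks at the same pixels as the unpruned model'."""
    a, b = cam_a.flatten(), cam_b.flatten()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))


if __name__ == "__main__":
    # Hypothetical setup: a pretrained ResNet-18 stands in for the unpruned
    # model, and a second copy stands in for the pruned one; in practice the
    # pruned checkpoint would be loaded here instead.
    unpruned = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
    pruned = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

    x = torch.randn(1, 3, 224, 224)            # placeholder input image

    cam_u, cls_u = grad_cam(unpruned, x, unpruned.layer4[-1])
    cam_p, cls_p = grad_cam(pruned, x, pruned.layer4[-1])

    print(f"predicted class (unpruned / pruned): {cls_u} / {cls_p}")
    print(f"heatmap agreement: {heatmap_agreement(cam_u, cam_p):.3f}")
```

A high agreement on many validation images would suggest the pruned model still relies on the same regions as the original; a low agreement would flag a behavioral change that accuracy alone might miss, which is the gap the PE-score is designed to address.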

Keywords