Frontiers in Big Data (Aug 2021)

Structural Compression of Convolutional Neural Networks with Applications in Interpretability

  • Reza Abbasi-Asl,
  • Bin Yu

DOI
https://doi.org/10.3389/fdata.2021.704182
Journal volume & issue
Vol. 4

Abstract


Deep convolutional neural networks (CNNs) have been successful in many machine vision tasks; however, the millions of weights, organized into thousands of convolutional filters, make these networks difficult for humans to interpret or understand in science. In this article, we introduce a greedy structural compression scheme to obtain smaller and more interpretable CNNs while achieving accuracy close to that of the original network. The compression is based on pruning the filters with the least contribution to classification accuracy, that is, the lowest Classification Accuracy Reduction (CAR) importance index. We demonstrate the interpretability of CAR-compressed CNNs by showing that our algorithm prunes filters with visually redundant functionalities, such as color filters. These compressed networks are easier to interpret because they retain the filter diversity of uncompressed networks with an order of magnitude fewer filters. Finally, a variant of CAR is introduced to quantify the importance of each image category to each CNN filter. Specifically, the most and the least important class labels are shown to be meaningful interpretations of each filter.
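The abstract describes a greedy, ablation-based pruning rule: a filter's CAR importance is the reduction in classification accuracy when that filter is removed, and filters with the lowest CAR index are pruned first. The sketch below illustrates this idea in PyTorch under several simplifying assumptions: the function names (`car_importance`, `greedy_car_prune`) are hypothetical, filters are zeroed out rather than structurally removed, and the fine-tuning step that typically follows pruning is omitted. It is not the authors' implementation.

```python
import torch
import torch.nn as nn


def classification_accuracy(model, data_loader, device="cpu"):
    """Top-1 accuracy of `model` on `data_loader`."""
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for images, labels in data_loader:
            images, labels = images.to(device), labels.to(device)
            preds = model(images).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.size(0)
    return correct / total


def car_importance(model, conv_layer: nn.Conv2d, data_loader, device="cpu"):
    """CAR index of each filter in `conv_layer`: the drop in accuracy
    observed when that single filter is zeroed out (ablated)."""
    base_acc = classification_accuracy(model, data_loader, device)
    scores = []
    for f in range(conv_layer.out_channels):
        saved_w = conv_layer.weight.data[f].clone()
        saved_b = conv_layer.bias.data[f].clone() if conv_layer.bias is not None else None
        conv_layer.weight.data[f].zero_()              # ablate filter f
        if saved_b is not None:
            conv_layer.bias.data[f] = 0.0
        scores.append(base_acc - classification_accuracy(model, data_loader, device))
        conv_layer.weight.data[f] = saved_w            # restore filter f
        if saved_b is not None:
            conv_layer.bias.data[f] = saved_b
    return scores


def greedy_car_prune(model, conv_layer: nn.Conv2d, data_loader, n_prune, device="cpu"):
    """Greedily zero out the `n_prune` filters with the smallest CAR index,
    recomputing the importance scores after each removal (this is expensive,
    since every step re-evaluates the model once per remaining filter)."""
    pruned = []
    for _ in range(n_prune):
        scores = car_importance(model, conv_layer, data_loader, device)
        for f in pruned:                               # never re-select a pruned filter
            scores[f] = float("inf")
        victim = min(range(len(scores)), key=scores.__getitem__)
        conv_layer.weight.data[victim].zero_()
        if conv_layer.bias is not None:
            conv_layer.bias.data[victim] = 0.0
        pruned.append(victim)
    return pruned
```

In practice, the accuracy would be measured on a held-out validation loader, and a structurally compressed model would delete the pruned output channels (and the corresponding input channels of the following layer) rather than merely zeroing them, which is what yields the smaller, more interpretable networks described in the abstract.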

Keywords