Applied Sciences (Oct 2021)

Exploring the Knowledge Embedded in Class Visualizations and Their Application in Dataset and Extreme Model Compression

  • José Ricardo Abreu-Pederzini,
  • Guillermo Arturo Martínez-Mascorro,
  • José Carlos Ortíz-Bayliss,
  • Hugo Terashima-Marín

DOI
https://doi.org/10.3390/app11209374
Journal volume & issue
Vol. 11, no. 20
p. 9374

Abstract


Artificial neural networks are efficient learning algorithms that are considered to be universal approximators for solving numerous real-world problems in areas such as computer vision, language processing, or reinforcement learning. To approximate any given function, neural networks train a large number of parameters, up to millions or even billions in some cases. The large number of parameters and hidden layers in neural networks makes them hard to interpret, which is why they are often referred to as black boxes. In the quest to make artificial neural networks interpretable in the field of computer vision, feature visualization stands out as one of the most developed and promising research directions. While feature visualizations are a valuable tool for gaining insights into the underlying function learned by the network, they are still considered simple visual aids requiring human interpretation. In this paper, we propose that feature visualizations, and class visualizations in particular, are analogous to mental imagery in humans, resembling the experience of seeing or perceiving the actual training data. Therefore, we propose that class visualizations contain embedded knowledge that can be exploited in a more automated manner. We present a series of experiments that shed light on the nature of class visualizations and demonstrate that class visualizations can be considered a conceptual compression of the data used to train the underlying model. Finally, we show that class visualizations can be regarded as convolutional filters and experimentally demonstrate their potential for extreme model compression.
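The abstract's closing idea, that a class visualization can be regarded as a convolutional filter, can be illustrated with a minimal sketch. This is not the authors' implementation: it simply treats each class visualization as a template, slides it over an input image as a mean-centered correlation filter, and predicts the class whose template yields the strongest response. The function names and the toy stripe patterns are illustrative assumptions.

```python
import numpy as np

def max_response(image, template):
    """Slide a mean-centered template over the image (valid positions only)
    and return the maximum correlation response."""
    H, W = image.shape
    h, w = template.shape
    t = template - template.mean()
    best = -np.inf
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            patch = image[i:i + h, j:j + w]
            # Correlation of the mean-centered patch with the template.
            best = max(best, float(np.sum((patch - patch.mean()) * t)))
    return best

def classify(image, class_visualizations):
    """Predict the class whose visualization (used as a filter) responds
    most strongly to the image."""
    scores = {c: max_response(image, v) for c, v in class_visualizations.items()}
    return max(scores, key=scores.get)

# Toy "class visualizations": vertical vs. horizontal stripe patterns.
vert = np.tile([1.0, 0.0], (4, 2))   # 4x4 vertical stripes
vis = {"vertical": vert, "horizontal": vert.T}

image = np.tile([1.0, 0.0], (8, 4))  # 8x8 image of vertical stripes
print(classify(image, vis))          # the vertical template matches best
```

In this toy setting, the image of vertical stripes produces a strong response under the vertical template and a near-zero response under the horizontal one, so the classifier built only from the two templates suffices, which is the sense in which a class visualization compresses the model's knowledge of a class.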

Keywords