Sensors (Jun 2022)

Towards Convolutional Neural Network Acceleration and Compression Based on Simon<i>k</i>-Means

  • Mingjie Wei,
  • Yunping Zhao,
  • Xiaowen Chen,
  • Chen Li,
  • Jianzhuang Lu

DOI
https://doi.org/10.3390/s22114298
Journal volume & issue
Vol. 22, no. 11
p. 4298

Abstract


Convolutional Neural Networks (CNNs) are popular models that are widely used in image classification, target recognition, and other fields. Model compression is a common step when transplanting neural networks onto embedded devices, and it is usually applied during a retraining stage. However, retraining the weights to recover the lost precision is time-consuming. Unlike prior designs, we propose a novel model compression approach based on Simonk-means that is specifically designed to support a hardware acceleration scheme. First, we propose an extension algorithm named Simonk-means, based on simple k-means, and use it to cluster the trained weights of the convolutional and fully connected layers. Second, we reduce the hardware resources consumed by data movement and storage through a dedicated data storage and indexing scheme. Finally, we provide a hardware implementation of the compressed CNN accelerator. Our evaluations on several classification tasks show that our design achieves 5.27× compression and eliminates 74.3% of the multiply–accumulate (MAC) operations for AlexNet on the FASHION-MNIST dataset.
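The weight-sharing idea the abstract describes can be sketched in a few lines: cluster a layer's trained weights, then store only a small codebook of shared values plus a low-bit index per weight. The abstract does not specify the Simonk-means extension, so this sketch uses plain k-means as a stand-in, and the `k=16` setting and helper name are illustrative assumptions.

```python
import numpy as np

def kmeans_quantize(weights, k=16, iters=20, seed=0):
    """Cluster a weight tensor into k shared values (plain k-means;
    a stand-in for the paper's Simonk-means, which is not detailed here).
    Returns (codebook, index tensor): the storage-and-index format
    the abstract describes."""
    rng = np.random.default_rng(seed)
    flat = weights.ravel()
    # initialize centroids with k distinct weight values
    centroids = rng.choice(flat, size=k, replace=False)
    for _ in range(iters):
        # assign each weight to its nearest centroid
        idx = np.abs(flat[:, None] - centroids[None, :]).argmin(axis=1)
        # move each centroid to the mean of its assigned weights
        for j in range(k):
            members = flat[idx == j]
            if members.size:
                centroids[j] = members.mean()
    return centroids, idx.reshape(weights.shape)

# toy convolutional layer: 64 filters of shape 3x3x3
w = np.random.default_rng(1).standard_normal((64, 3, 3, 3)).astype(np.float32)
codebook, index = kmeans_quantize(w, k=16)

# storage cost: 32-bit floats -> 4-bit indices plus a 16-entry codebook
orig_bits = w.size * 32
comp_bits = w.size * 4 + codebook.size * 32
print(f"compression ratio: {orig_bits / comp_bits:.2f}x")
```

With 16 clusters each weight needs only a 4-bit index, so the ratio approaches 8× for large layers; the paper's reported 5.27× for full AlexNet additionally reflects per-layer codebooks and indexing overheads not modeled in this toy example.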

Keywords