Leibniz Transactions on Embedded Systems (Nov 2022)

HW-Flow: A Multi-Abstraction Level HW-CNN Codesign Pruning Methodology

  • Vemparala, Manoj-Rohit,
  • Fasfous, Nael,
  • Frickenstein, Alexander,
  • Valpreda, Emanuele,
  • Camalleri, Manfredi,
  • Zhao, Qi,
  • Unger, Christian,
  • Nagaraja, Naveen-Shankar,
  • Martina, Maurizio,
  • Stechele, Walter

DOI
https://doi.org/10.4230/LITES.8.1.3
Journal volume & issue
Vol. 8, no. 1
pp. 03:1 – 03:30

Abstract

Convolutional neural networks (CNNs) have produced unprecedented accuracy for many computer vision problems in recent years. On power- and compute-constrained embedded platforms, deploying modern CNNs can present many challenges. Most CNN architectures do not run in real time due to the high number of computational operations involved during the inference phase. This emphasizes the role of CNN optimization techniques in early design space exploration. To estimate their efficacy in satisfying the target constraints, existing techniques are either hardware (HW) agnostic, pseudo-HW-aware by considering parameter and operation counts, or HW-aware through inflexible hardware-in-the-loop (HIL) setups. In this work, we introduce HW-Flow, a framework for optimizing and exploring CNN models based on three levels of hardware abstraction: Coarse, Mid and Fine. Through these levels, CNN design and optimization can be iteratively refined towards efficient execution on the target hardware platform. We present HW-Flow in the context of CNN pruning by augmenting a reinforcement learning agent with key metrics to understand the influence of its pruning actions on the inference hardware. With a 2× reduction in energy and latency, we prune ResNet56, ResNet50, and DeepLabv3 with minimal accuracy degradation on the CIFAR-10, ImageNet, and CityScapes datasets, respectively.
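To give a flavor of the kind of hardware-aware pruning the abstract describes, the following is a minimal, hypothetical sketch: output channels of a convolutional layer are ranked by L1 norm and the weakest are dropped until a MAC-count budget (a crude latency/energy proxy) is met. The function names and the MAC proxy are illustrative assumptions, not HW-Flow's actual API or its Coarse/Mid/Fine hardware models.

```python
import numpy as np

def macs(out_ch, in_ch, k, spatial):
    """MAC count of a k x k conv layer at a given square output spatial size."""
    return out_ch * in_ch * k * k * spatial * spatial

def prune_channels(weights, in_ch, k, spatial, mac_budget):
    """weights: (out_ch, in_ch, k, k) array. Returns sorted indices of kept channels."""
    out_ch = weights.shape[0]
    # Importance score per output channel: L1 norm of its filter weights.
    scores = np.abs(weights).reshape(out_ch, -1).sum(axis=1)
    order = np.argsort(scores)[::-1]              # strongest channels first
    keep = out_ch
    while keep > 1 and macs(keep, in_ch, k, spatial) > mac_budget:
        keep -= 1                                 # drop the weakest channel
    return np.sort(order[:keep])

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 32, 3, 3))               # toy conv layer: 64 output channels
full = macs(64, 32, 3, 16)
kept = prune_channels(w, 32, 3, 16, mac_budget=full // 2)
print(len(kept))  # 32 channels survive a 2x MAC-budget cut
```

A HW-aware framework would replace the raw MAC count with a platform-specific energy or latency estimate, which is precisely the role of the abstraction levels described above.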

Keywords