Frontiers in Artificial Intelligence (Oct 2024)

LRMP: Layer Replication with Mixed Precision for spatial in-memory DNN accelerators

  • Abinand Nallathambi,
  • Christin David Bose,
  • Wilfried Haensch,
  • Anand Raghunathan

DOI: https://doi.org/10.3389/frai.2024.1268317
Journal volume & issue: Vol. 7

Abstract

In-memory computing (IMC) with non-volatile memories (NVMs) has emerged as a promising approach to address the rapidly growing computational demands of Deep Neural Networks (DNNs). Mapping DNN layers spatially onto NVM-based IMC accelerators achieves high degrees of parallelism. However, two challenges that arise in this approach are the highly non-uniform distribution of layer processing times and high area requirements. We propose LRMP, a method to jointly apply layer replication and mixed precision quantization to improve the performance of DNNs when mapped to area-constrained IMC accelerators. LRMP uses a combination of reinforcement learning and mixed integer linear programming to search the replication-quantization design space using a model that is closely informed by the target hardware architecture. Across five DNN benchmarks, LRMP achieves 2.6–9.3× latency and 8–18× throughput improvement at minimal (<1%) degradation in accuracy.
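The core trade-off the abstract describes — spending a limited chip area on layer replicas (to speed up bottleneck layers) versus lower-precision weights (to shrink each replica) — can be sketched with a toy search. The paper uses reinforcement learning plus mixed integer linear programming; the brute-force enumeration below is only a simplified stand-in for that search, and all layer names, areas, and latencies are made-up illustrative numbers, not values from the paper.

```python
from itertools import product

# Hypothetical per-layer options: (precision_bits, area_per_replica, base_latency).
# All numbers are illustrative placeholders, not measurements from the paper.
LAYERS = [
    {"name": "conv1", "opts": [(8, 4.0, 12.0), (4, 2.0, 12.5)]},
    {"name": "conv2", "opts": [(8, 6.0, 30.0), (4, 3.0, 31.0)]},
    {"name": "fc",    "opts": [(8, 2.0, 6.0),  (4, 1.0, 6.2)]},
]
AREA_BUDGET = 20.0   # total area available for all replicas (arbitrary units)
MAX_REPLICAS = 4     # cap on per-layer replication factor

def pipeline_latency(cfg):
    # In a spatially pipelined mapping, throughput is set by the slowest stage;
    # replicating a layer r times divides its effective stage latency by r.
    return max(lat / r for (_, _, lat), r in cfg)

def total_area(cfg):
    # Each replica of a layer pays that layer's full area at its chosen precision.
    return sum(area * r for (_, area, _), r in cfg)

def search():
    # Enumerate every (precision, replication) choice per layer and keep the
    # feasible configuration with the smallest bottleneck latency.
    best = None
    choices = [
        [(opt, r) for opt in layer["opts"] for r in range(1, MAX_REPLICAS + 1)]
        for layer in LAYERS
    ]
    for cfg in product(*choices):
        if total_area(cfg) <= AREA_BUDGET:
            lat = pipeline_latency(cfg)
            if best is None or lat < best[0]:
                best = (lat, cfg)
    return best

lat, cfg = search()
print(f"bottleneck latency: {lat:.2f}")
for (bits, area, base), r in cfg:
    print(f"  {bits}-bit x{r} (area {area * r:.1f}, stage latency {base / r:.2f})")
```

With these toy numbers the search quantizes and replicates the dominant `conv2` stage to relieve the pipeline bottleneck, which mirrors the paper's observation that non-uniform layer processing times are what replication and mixed precision jointly address. The real formulation scales this with MILP rather than enumeration.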

Keywords