Frontiers in Neuroscience (Jul 2019)

RAPA-ConvNets: Modified Convolutional Networks for Accelerated Training on Architectures With Analog Arrays

  • Malte J. Rasch,
  • Tayfun Gokmen,
  • Mattia Rigotti,
  • Wilfried Haensch

DOI
https://doi.org/10.3389/fnins.2019.00753
Journal volume & issue
Vol. 13

Abstract


Analog arrays are a promising emerging hardware technology with the potential to drastically speed up deep learning. Their main advantage is that they employ analog circuitry to compute matrix-vector products in constant time, irrespective of the size of the matrix. However, ConvNets map very unfavorably onto analog arrays when implemented in a straightforward manner, because kernel matrices are typically small and the constant-time operation must be iterated sequentially a large number of times. Here, we propose to parallelize training by replicating the kernel matrix of a convolution layer on distinct analog arrays and randomly dividing parts of the compute among them. With this modification, analog arrays execute ConvNets with a large acceleration factor that is proportional to the number of kernel matrices used per layer (tested here for 16-1024 replicas). Despite having more free parameters, we show analytically and in numerical experiments that this new convolution architecture is self-regularizing and implicitly learns similar filters across arrays. We also report superior performance on a number of datasets and increased robustness to adversarial attacks. Our investigation suggests revising the notion that emerging hardware architectures featuring analog arrays for fast matrix-vector multiplication are unsuitable for ConvNets.
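To illustrate the core idea described in the abstract (replicated kernel matrices with patches of the convolution randomly divided among them), here is a minimal NumPy sketch. It is not the authors' implementation; the function name rapa_conv2d, the argument shapes, and the per-patch random assignment are illustrative assumptions, and the sequential loop only stands in for what the replicated analog arrays would compute in parallel.

    import numpy as np

    def rapa_conv2d(image, kernels, rng):
        """Convolution where each output position's patch is multiplied by a
        randomly chosen replica of the kernel matrix (illustrative sketch).

        image   : (C, H, W) input
        kernels : (n_replicas, C_out, C, kH, kW) replicated kernel tensors
        rng     : numpy random Generator used to assign patches to replicas
        """
        n_replicas, c_out, c_in, kh, kw = kernels.shape
        _, h, w = image.shape
        out_h, out_w = h - kh + 1, w - kw + 1

        # Flatten each replica's kernel into a matrix, as it would be stored
        # on one analog array (rows = output channels, cols = patch elements).
        kernel_mats = kernels.reshape(n_replicas, c_out, c_in * kh * kw)

        out = np.empty((c_out, out_h, out_w))
        for i in range(out_h):
            for j in range(out_w):
                patch = image[:, i:i + kh, j:j + kw].reshape(-1)
                # Randomly assign this patch to one of the replicas; on the
                # hardware, the replicas would process their patches in parallel.
                r = rng.integers(n_replicas)
                out[:, i, j] = kernel_mats[r] @ patch
        return out

    # Example usage (hypothetical sizes): 4 replicas, 16 output channels.
    rng = np.random.default_rng(0)
    img = rng.standard_normal((3, 8, 8))
    ks = rng.standard_normal((4, 16, 3, 3, 3))
    y = rapa_conv2d(img, ks, rng)
    print(y.shape)  # (16, 6, 6)

In this sketch the acceleration claimed in the abstract comes from the fact that each replica handles only roughly 1/n_replicas of the patches, so the number of sequential constant-time array operations per layer drops proportionally to the number of replicas.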

Keywords