IEEE Access (Jan 2019)
Stochastic Computing for Hardware Implementation of Binarized Neural Networks
Abstract
Binarized neural networks, a recently proposed class of neural networks with minimal memory requirements and no reliance on multiplication, offer a remarkable opportunity for the realization of compact and energy-efficient inference hardware. However, such neural networks are usually not fully binarized: their first layer still takes fixed-point inputs. In this paper, we propose a stochastic computing version of binarized neural networks in which the input is also binarized. Simulations on the Fashion-MNIST and CIFAR-10 datasets show that such networks can approach the performance of conventional binarized neural networks. We show that the training procedure should be adapted for use with stochastic computing. Finally, we investigate an ASIC implementation of our scheme in a system that closely associates logic and memory, implemented with spin-torque magnetoresistive random access memory. This analysis shows that the stochastic computing approach can yield considerable area savings over conventional binarized neural networks (a 62% area reduction on the Fashion-MNIST task). It can also provide substantial energy savings if a modest reduction in accuracy is acceptable: for example, energy consumption can be reduced by a factor of 2.1 at the cost of 1.4% of Fashion-MNIST test accuracy. These results highlight the strong potential of binarized neural networks for hardware implementation and show that adapting them to hardware constraints can provide important benefits.
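As an illustration of the input binarization discussed above, the following sketch (not taken from the paper; the function name and the stream length T are illustrative assumptions) shows the basic stochastic computing idea: a fixed-point value in [0, 1] is encoded as a random bitstream whose average approximates the original value, so a fully binarized first layer can process one binary slice at a time.

```python
import numpy as np

def stochastic_encode(x, stream_length, rng=None):
    """Encode values in [0, 1] as random bitstreams (stochastic computing).

    Each of the `stream_length` bit slices is 1 with probability equal to
    the corresponding input value, so the stream's mean approximates x.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Shape: (stream_length, *x.shape); each slice is a binary image.
    return (rng.random((stream_length,) + x.shape) < x).astype(np.int8)

# Example: a grayscale image normalized to [0, 1] becomes T binary slices.
image = np.random.rand(28, 28)   # stand-in for a Fashion-MNIST image (assumption)
T = 8                            # example stream length (assumption)
bitstream = stochastic_encode(image, T)
print(bitstream.shape)                                 # (8, 28, 28)
print(np.abs(bitstream.mean(axis=0) - image).mean())   # encoding error shrinks as T grows
```

In such a scheme, longer bitstreams reduce the encoding error at the cost of more inference passes, which is the accuracy/energy trade-off the abstract refers to.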
Keywords