Advanced Intelligent Systems (Aug 2021)
Direct Gradient Calculation: Simple and Variation‐Tolerant On‐Chip Training Method for Neural Networks
Abstract
On‐chip training of neural networks (NNs) is regarded as a promising training method for neuromorphic systems with analog synaptic devices. Herein, a novel on‐chip training method called direct gradient calculation (DGC) is proposed to substitute conventional backpropagation (BP). In this method, the gradients of a cost function with respect to the weights are calculated directly by sequentially applying a small temporary change to each weight and measuring the resulting change in the cost value. DGC achieves an accuracy similar to that of BP on a handwritten digit classification task, validating its training feasibility. In particular, DGC can be applied to analog hardware‐based convolutional NNs (CNNs), for which on‐chip training is considered challenging, enabling appropriate training. A hybrid method is also proposed that efficiently combines DGC and BP for training CNNs; it achieves an accuracy similar to that of BP and DGC alone while improving the training speed. Furthermore, networks trained with DGC maintain a higher level of accuracy than those trained with BP in the presence of hardware variations (such as synaptic device conductance and neuron circuit component variations) while requiring fewer circuit components.
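To illustrate the idea summarized above, the following is a minimal software sketch of perturbation-based gradient estimation: each weight is temporarily nudged by a small amount and the change in the cost is measured. All names (e.g., `dgc_gradient`, `delta`) and the single-layer example are illustrative assumptions, not the authors' analog hardware implementation.

```python
import numpy as np

def dgc_gradient(weights, cost_fn, delta=1e-3):
    """Estimate dC/dw for each weight by applying a small temporary change
    to that weight and measuring the resulting change in cost.
    Illustrative finite-difference sketch, not the paper's hardware method."""
    base_cost = cost_fn(weights)
    grads = np.zeros_like(weights)
    flat_w = weights.ravel()   # view into the weight array
    flat_g = grads.ravel()
    for i in range(flat_w.size):
        original = flat_w[i]
        flat_w[i] = original + delta                          # apply small change
        flat_g[i] = (cost_fn(weights) - base_cost) / delta    # measure cost change
        flat_w[i] = original                                  # restore the weight
    return grads

# Example: one DGC-style update for a toy single-layer classifier (hypothetical).
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 3))   # 4 inputs -> 3 classes
x = rng.normal(size=(8, 4))              # small batch of inputs
y = rng.integers(0, 3, size=8)           # integer class labels

def cost(W):
    logits = x @ W
    logits -= logits.max(axis=1, keepdims=True)
    p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return -np.log(p[np.arange(len(y)), y]).mean()   # cross-entropy

grad = dgc_gradient(W, cost)
W -= 0.1 * grad   # gradient-descent weight update
```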
Keywords