Nature Communications (Nov 2016)
Random synaptic feedback weights support error backpropagation for deep learning
Abstract
Multi-layered neural architectures that implement learning require biologically implausible mechanisms for the symmetric backpropagation of errors. Here the authors propose a simple resolution to this problem of blame assignment that works even when error feedback is carried by random synaptic weights.
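The mechanism the abstract refers to can be illustrated with a minimal sketch: instead of propagating the output error back through the transpose of the forward weights (as exact backpropagation requires), the hidden-layer error signal is computed through a fixed random feedback matrix. The network sizes, learning rate, and toy linear target task below are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hid, n_out = 4, 16, 2
W1 = rng.normal(0, 0.5, (n_hid, n_in))   # forward weights, input -> hidden
W2 = rng.normal(0, 0.5, (n_out, n_hid))  # forward weights, hidden -> output
B = rng.normal(0, 0.5, (n_hid, n_out))   # fixed random feedback weights

# Hypothetical toy task: learn a random linear mapping T
T = rng.normal(0, 1.0, (n_out, n_in))

lr = 0.02
losses = []
for step in range(2000):
    x = rng.normal(0, 1.0, n_in)
    target = T @ x
    h = np.tanh(W1 @ x)                  # hidden activations
    y = W2 @ h                           # network output
    e = y - target                       # output error
    losses.append(float(e @ e))
    # Output layer: ordinary delta rule
    W2 -= lr * np.outer(e, h)
    # Hidden layer: project the error through the fixed random B
    # rather than through W2.T as exact backpropagation would
    delta_h = (B @ e) * (1 - h**2)       # 1 - h**2 is the tanh derivative
    W1 -= lr * np.outer(delta_h, x)

early = np.mean(losses[:100])            # mean loss over first 100 steps
late = np.mean(losses[-100:])            # mean loss over last 100 steps
print(late < early)
```

Despite the feedback pathway never matching the forward weights, the loss still falls, which is the surprising behavior the paper examines.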