Nature Communications (Nov 2016)

Random synaptic feedback weights support error backpropagation for deep learning

  • Timothy P. Lillicrap,
  • Daniel Cownden,
  • Douglas B. Tweed,
  • Colin J. Akerman

DOI
https://doi.org/10.1038/ncomms13276
Journal volume & issue
Vol. 7, no. 1
pp. 1 – 10

Abstract


Multi-layered neural architectures that implement learning require elaborate mechanisms for the symmetric backpropagation of errors, mechanisms that are biologically implausible. Here the authors propose a simple resolution to this problem of blame assignment that works even when feedback uses random synaptic weights.
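The core idea can be sketched in a few lines of NumPy. This is an illustration of the feedback-alignment rule described in the abstract, not the authors' code: the network sizes, toy regression task, and learning rate below are all assumptions chosen for the demo. The one essential feature is that the error is sent backward through a fixed random matrix `B` rather than through the transpose of the forward weights `W2`.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small two-layer network (sizes are arbitrary for this demo).
n_in, n_hid, n_out = 10, 20, 5
W1 = rng.normal(0, 0.1, (n_hid, n_in))
W2 = rng.normal(0, 0.1, (n_out, n_hid))
B = rng.normal(0, 0.1, (n_hid, n_out))  # fixed random feedback weights

# Toy task (an assumption): match a random linear teacher T.
T = rng.normal(0, 1, (n_out, n_in))
X = rng.normal(0, 1, (n_in, 64))
Y = T @ X

lr = 0.02

def loss():
    return float(np.mean((W2 @ np.tanh(W1 @ X) - Y) ** 2))

initial = loss()
for _ in range(500):
    H = np.tanh(W1 @ X)            # hidden activity
    E = W2 @ H - Y                 # output error
    dW2 = E @ H.T / X.shape[1]
    # Backpropagation would route the error with W2.T @ E;
    # feedback alignment uses the fixed random B instead.
    dH = (B @ E) * (1 - H ** 2)    # tanh derivative
    dW1 = dH @ X.T / X.shape[1]
    W1 -= lr * dW1
    W2 -= lr * dW2

final = loss()
print(final < initial)
```

Despite the backward weights never being trained, the loss falls: the forward weights adapt so that the random feedback comes to deliver useful teaching signals, which is the paper's central observation.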