IEEE Access (Jan 2024)
Direct Feedback Learning With Local Alignment Support
Abstract
While the backpropagation (BP) algorithm has been pivotal to the success of modern deep learning, it suffers from computational inefficiency and biological implausibility. In particular, the sequential propagation of error signals through the forward weights in BP is not biologically plausible and prevents efficient parallel updates of the learning parameters. To address these problems, the direct feedback alignment (DFA) method was proposed, which propagates the error signal directly from the output layer to each hidden layer through random feedback weights; however, the performance of DFA remains inferior to that of BP, especially on complicated tasks with a large number of outputs and in convolutional neural network models. In this paper, we propose a method that adjusts the feedback weights of DFA using additional local modules connected to the hidden layers. The local module attached to each hidden layer has a single-layer structure and learns to mimic the final output of the network. The weights of a local module then behave like a direct path connecting each hidden layer to the network output, which has an inverse relationship to the direct feedback weights of DFA. We exploit this relationship to update the feedback weights of DFA. Our experimental investigation confirms that the proposed adaptive feedback weights improve the alignment of the DFA error signal with that of BP. Furthermore, comparative experiments show that the proposed method significantly outperforms the original DFA on well-known benchmark datasets. The code used for the experiments is available at https://github.com/leibniz21c/direct-feedback-learning-with-local-alignment-support.
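As a rough illustration of the mechanism summarized above, and not the paper's actual implementation, the following NumPy sketch trains a one-hidden-layer network with DFA on a single toy sample while a single-layer local module `M` learns to mimic the network output from the hidden activation; the feedback weights `B` are then nudged toward the transpose of `M`, reflecting the inverse relationship described in the abstract. All dimensions, learning rates, and the specific update rules here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes; all names and hyperparameters are illustrative, not the paper's.
n_in, n_hid, n_out = 8, 16, 4
W1 = rng.normal(0, 0.1, (n_hid, n_in))   # forward weights: input -> hidden
W2 = rng.normal(0, 0.1, (n_out, n_hid))  # forward weights: hidden -> output
B = rng.normal(0, 0.1, (n_hid, n_out))   # DFA feedback weights (random at init)
M = rng.normal(0, 0.1, (n_out, n_hid))   # local module: predicts the output from h

lr = 0.05
x = rng.normal(size=n_in)
y = np.eye(n_out)[1]                     # one-hot target for one toy sample

losses = []
for _ in range(200):
    h = np.maximum(W1 @ x, 0.0)          # ReLU hidden activation
    y_hat = W2 @ h                       # linear readout
    e = y_hat - y                        # output error
    losses.append(float(e @ e))

    # DFA: the output error reaches the hidden layer directly through B,
    # bypassing the forward weights W2.
    dh = (B @ e) * (h > 0)
    W1 -= lr * np.outer(dh, x)
    W2 -= lr * np.outer(e, h)

    # The local module learns to mimic the network output from h ...
    M -= lr * np.outer(M @ h - y_hat, h)
    # ... and the feedback weights are pulled toward its transpose
    # (an assumed form of the adaptive-feedback update).
    B += 0.1 * (M.T - B)

print(losses[0], "->", losses[-1])
```

With the adaptive `B`, the hidden-layer update direction tracks a learned path back from the output rather than a fixed random projection, which is the alignment effect the abstract claims.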
Keywords