Backpropagation is one of the central components of modern deep learning. However, it is not biologically plausible, which limits the usefulness of deep learning as a model of how the human brain learns. Direct Feedback Alignment (DFA) is a biologically plausible alternative, and this paper shows that, contrary to previous findings, it can successfully train modern deep architectures on challenging tasks.
OUTLINE:
0:00 – Intro & Overview
1:40 – The Problem with Backpropagation
10:25 – Direct Feedback Alignment
21:00 – My intuition for why DFA works
31:20 – Experiments
Paper:
Code:
Referenced Paper by Arild Nøkland:
Abstract:
Despite being the workhorse of deep learning, the backpropagation algorithm is no panacea. It enforces sequential layer updates, thus preventing efficient parallelization of the training process. Furthermore, its biological plausibility is being challenged. Alternative schemes have been devised; yet, under the constraint of synaptic asymmetry, none have scaled to modern deep learning tasks and architectures. Here, we challenge this perspective, and study the applicability of Direct Feedback Alignment to neural view synthesis, recommender systems, geometric learning, and natural language processing. In contrast with previous studies limited to computer vision tasks, our findings show that it successfully trains a large range of state-of-the-art deep learning architectures, with performance close to fine-tuned backpropagation. At variance with common beliefs, our work supports that challenging tasks can be tackled in the absence of weight transport.
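The key idea behind DFA, as described in the abstract, is to avoid weight transport: instead of propagating the output error backward through the transposes of the forward weights, each hidden layer receives the error through a fixed random feedback matrix. A minimal sketch of this for a two-layer regression network (all sizes, learning rate, and the toy data are illustrative assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-layer network trained with Direct Feedback Alignment.
# Instead of using W2.T to backpropagate the error (as in backprop),
# the hidden layer receives the output error through a FIXED random
# feedback matrix B1, which is never trained.
n_in, n_hid, n_out = 4, 8, 2
W1 = rng.normal(0, 0.5, (n_in, n_hid))
W2 = rng.normal(0, 0.5, (n_hid, n_out))
B1 = rng.normal(0, 0.5, (n_out, n_hid))  # fixed random feedback matrix

x = rng.normal(size=(16, n_in))   # toy inputs
y = rng.normal(size=(16, n_out))  # toy regression targets

def mse(a, b):
    return float(np.mean((a - b) ** 2))

loss_before = mse(np.tanh(x @ W1) @ W2, y)

lr = 0.05
for _ in range(200):
    # forward pass
    h1 = np.tanh(x @ W1)
    y_hat = h1 @ W2
    e = y_hat - y                      # output error (MSE gradient)
    # DFA step: project the error straight to the hidden layer with B1
    # instead of W2.T; tanh'(a) = 1 - tanh(a)^2.
    dh1 = (e @ B1) * (1 - h1 ** 2)
    W2 -= lr * h1.T @ e / len(x)       # output layer uses the true gradient
    W1 -= lr * x.T @ dh1 / len(x)

loss_after = mse(np.tanh(x @ W1) @ W2, y)
```

Note that the output layer's update is unchanged from backprop; only the hidden layer's error signal is replaced by the random projection, which is what removes the need for synaptic symmetry.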
Authors: Julien Launay, Iacopo Poli, François Boniface, Florent Krzakala
Links:
YouTube:
Twitter:
Discord:
BitChute:
Minds: