As special cases, they include models of great current interest in both neuroscience and machine learning, such as deep neural networks, equilibrium recurrent neural networks, deep equilibrium models, and meta-learning.
Building upon deep feedback control (DFC), a recently proposed credit assignment method, we combine strong feedback influences on neural activity with gradient-based learning and show that this naturally leads to a novel view of neural network optimization.
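To make the combination of strong feedback and local learning concrete, here is a minimal sketch in the spirit of DFC: a leaky-integrator network whose output is nudged toward its target by an integral controller, followed by local weight updates at the controlled state. All names, dimensions, gains, and update rules here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def dfc_like_step(params, x, y_target, steps=300, dt=0.1, k_i=2.0, lr=1e-2):
    """Euler-integrate controlled network dynamics, then apply local updates
    that move each layer's feedforward prediction toward its controlled
    activity. Illustrative sketch only."""
    W1, W2, Q1, Q2 = params["W1"], params["W2"], params["Q1"], params["Q2"]
    phi = np.tanh
    v1, v2 = np.zeros(W1.shape[0]), np.zeros(W2.shape[0])
    u = np.zeros(W2.shape[0])              # integral controller state
    for _ in range(steps):
        e = y_target - v2                  # output error drives the controller
        u += dt * k_i * e
        v1 += dt * (-v1 + W1 @ x + Q1 @ u)        # feedback enters every layer
        v2 += dt * (-v2 + W2 @ phi(v1) + Q2 @ u)
    # Local, activity-dependent updates at the controlled (near-)steady state.
    params["W1"] = W1 + lr * np.outer(v1 - W1 @ x, x)
    params["W2"] = W2 + lr * np.outer(v2 - W2 @ phi(v1), phi(v1))
    return v2

rng = np.random.default_rng(0)
params = {"W1": rng.normal(scale=0.3, size=(8, 4)),
          "W2": rng.normal(scale=0.3, size=(2, 8)),
          "Q1": rng.normal(scale=0.3, size=(8, 2)),  # feedback weights, fixed here
          "Q2": np.eye(2)}
dfc_like_step(params, rng.normal(size=4), np.array([1.0, -1.0]))
```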
We find that sparsity emerges from this process in a structured way, with the sparsity pattern varying from problem to problem.
The success of deep learning has sparked interest in whether the brain learns using similar techniques, assigning credit to each synaptic weight according to its contribution to the network output.
The biophysics of the membrane combines these opinions by taking their reliabilities into account; the soma thus acts as a decision maker.
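One standard way to read "combining opinions by their reliabilities" is Gaussian cue combination, where each input is weighted by its inverse variance. The following is a minimal illustration under that assumption; the function name and values are hypothetical.

```python
import numpy as np

def combine_opinions(means, variances):
    """Fuse noisy 'opinions' by weighting each with its reliability
    (inverse variance); this is the Bayes-optimal Gaussian combination."""
    means, variances = np.asarray(means, float), np.asarray(variances, float)
    reliabilities = 1.0 / variances
    combined_mean = np.sum(reliabilities * means) / np.sum(reliabilities)
    combined_var = 1.0 / np.sum(reliabilities)
    return combined_mean, combined_var

# A reliable input (low variance) dominates the combined estimate.
print(combine_opinions([1.0, 3.0], [0.1, 1.0]))  # -> (~1.18, ~0.09)
```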
Humans and other animals are capable of improving their learning performance as they solve related tasks from a given problem domain, to the point of being able to learn from extremely limited data.
We offer a practical deep learning implementation of our framework based on probabilistic task-conditioned hypernetworks, an approach we term posterior meta-replay.
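For intuition, here is a minimal PyTorch sketch of a task-conditioned hypernetwork: a learned per-task embedding is mapped to the full weight set of a small target network. The class name, layer sizes, and target architecture are illustrative assumptions, and the probabilistic (posterior) machinery of posterior meta-replay is omitted.

```python
import torch
import torch.nn as nn
from math import prod

class TaskConditionedHypernetwork(nn.Module):
    """Maps a learned per-task embedding to the weights of a target MLP,
    so one hypernetwork stores many task-specific solutions."""
    def __init__(self, n_tasks, emb_dim, target_shapes):
        super().__init__()
        self.task_embeddings = nn.Embedding(n_tasks, emb_dim)
        self.target_shapes = target_shapes
        self.hnet = nn.Sequential(
            nn.Linear(emb_dim, 128), nn.ReLU(),
            nn.Linear(128, sum(prod(s) for s in target_shapes)),
        )

    def forward(self, task_id):
        flat = self.hnet(self.task_embeddings(task_id))
        weights, i = [], 0
        for shape in self.target_shapes:       # unflatten into weight tensors
            n = prod(shape)
            weights.append(flat[i:i + n].view(shape))
            i += n
        return weights

def target_forward(x, weights):
    """Run the target MLP with hypernetwork-generated weights."""
    W1, b1, W2, b2 = weights
    return torch.relu(x @ W1.T + b1) @ W2.T + b2

hnet = TaskConditionedHypernetwork(
    n_tasks=5, emb_dim=8, target_shapes=[(32, 10), (32,), (2, 32), (2,)])
logits = target_forward(torch.randn(4, 10), hnet(torch.tensor(0)))
```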
The predominant, and largely successful, method for training neural networks is to learn their weights using some variant of stochastic gradient descent (SGD).
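As a reference point, a minimal sketch of vanilla SGD on mini-batches, assuming a generic gradient function; written in plain numpy for clarity.

```python
import numpy as np

def sgd(weights, grad_fn, data, lr=0.1, epochs=10, batch_size=32, seed=0):
    """Vanilla SGD: step the weights against the loss gradient evaluated on
    randomly sampled mini-batches of the training data."""
    rng = np.random.default_rng(seed)
    x, y = data
    for _ in range(epochs):
        order = rng.permutation(len(x))
        for start in range(0, len(x), batch_size):
            batch = order[start:start + batch_size]
            weights = weights - lr * grad_fn(weights, x[batch], y[batch])
    return weights

# Example: least-squares regression, where grad = 2 X^T (X w - y) / n.
rng = np.random.default_rng(1)
X = rng.normal(size=(256, 3))
w_true = np.array([1.0, -2.0, 0.5])
grad = lambda w, Xb, yb: 2 * Xb.T @ (Xb @ w - yb) / len(Xb)
w_hat = sgd(np.zeros(3), grad, (X, X @ w_true))   # converges toward w_true
```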
Here, we analyze target propagation (TP), a popular but not yet fully understood alternative to BP, from the standpoint of mathematical optimization.
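For readers unfamiliar with TP, the sketch below shows the core idea in its difference-corrected form (difference target propagation): layerwise targets are propagated backward through learned approximate inverses, and each layer is then trained on its own local target. This is a schematic under that assumption, not the paper's analysis.

```python
import numpy as np

def dtp_targets(activations, inverses, y_target, beta=0.1):
    """Propagate layerwise targets backward through approximate inverses,
    with the difference correction t_{i-1} = g_i(t_i) + h_{i-1} - g_i(h_i).
    activations = [h_0 (input), h_1, ..., h_L]; inverses[i-1] approximately
    inverts the layer that maps h_{i-1} to h_i."""
    targets = [None] * len(activations)
    # Output target: nudge the last activation toward the desired output.
    targets[-1] = activations[-1] - beta * (activations[-1] - y_target)
    for i in range(len(activations) - 1, 0, -1):
        g = inverses[i - 1]
        targets[i - 1] = g(targets[i]) + activations[i - 1] - g(activations[i])
    return targets

# Toy usage with a linear layer and its pseudo-inverse; each layer i would
# then minimize its local loss ||h_i - t_i||^2.
W = np.random.default_rng(0).normal(size=(3, 3))
h0 = np.ones(3); h1 = W @ h0
t = dtp_targets([h0, h1], [lambda z: np.linalg.pinv(W) @ z], np.zeros(3))
```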
Artificial neural networks suffer from catastrophic forgetting when they are sequentially trained on multiple tasks.