no code implementations • 24 Oct 2024 • Alexander Meulemans, Seijin Kobayashi, Johannes von Oswald, Nino Scherrer, Eric Elmoznino, Blake Richards, Guillaume Lajoie, Blaise Agüera y Arcas, João Sacramento
Self-interested individuals often fail to cooperate, posing a fundamental challenge for multi-agent learning.
no code implementations • 17 Jul 2024 • Seijin Kobayashi, Simon Schug, Yassir Akram, Florian Redhardt, Johannes von Oswald, Razvan Pascanu, Guillaume Lajoie, João Sacramento
Under what circumstances can transformers compositionally generalize from a subset of tasks to all possible combinations of tasks that share similar components?
no code implementations • 12 Jun 2024 • Maciej Pióro, Maciej Wołczyk, Razvan Pascanu, Johannes von Oswald, João Sacramento
A new breed of gated-linear recurrent neural networks has reached state-of-the-art performance on a range of sequence modeling problems.
1 code implementation • 9 Jun 2024 • Simon Schug, Seijin Kobayashi, Yassir Akram, João Sacramento, Razvan Pascanu
To further examine the hypothesis that the intrinsic hypernetwork of multi-head attention supports compositional generalization, we ablate whether making the hypernetwork-generated linear value network nonlinear strengthens compositionality.
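A rough illustration of the "attention as a hypernetwork" view referenced here (a minimal sketch with assumed shapes and random weights, not the authors' code): for a fixed query, the per-head attention scores over each key act as a latent code that linearly combines the head-specific maps W_O^h W_V^h into a single data-dependent linear value network.

```python
# Minimal sketch (assumed dimensions, not the authors' code): rewriting
# multi-head attention as a hypernetwork. For a given query, the per-head
# attention scores a[:, i] configure a token-specific linear value network
# built from the head-specific maps W_O^h @ W_V^h.
import numpy as np

rng = np.random.default_rng(1)
H, d, d_h = 4, 8, 2                  # heads, model dim, per-head dim
N = 5                                # number of key/value tokens
X = rng.normal(size=(N, d))          # key/value inputs
x_q = rng.normal(size=(d,))          # query input

W_Q = rng.normal(size=(H, d_h, d))
W_K = rng.normal(size=(H, d_h, d))
W_V = rng.normal(size=(H, d_h, d))
W_O = rng.normal(size=(H, d, d_h))

# Per-head softmax attention scores a[h, i] for the query against each key.
logits = np.einsum('hkd,d,hke,ie->hi', W_Q, x_q, W_K, X) / np.sqrt(d_h)
a = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# (1) Standard multi-head attention output at the query position.
out_mha = np.einsum('hdk,hi,hke,ie->d', W_O, a, W_V, X)

# (2) Hypernetwork view: the scores a[:, i] generate a token-specific linear
# value network W(i) = sum_h a[h, i] * W_O^h @ W_V^h, applied to x_i and summed.
W_heads = np.einsum('hdk,hke->hde', W_O, W_V)
out_hyper = sum(np.einsum('h,hde,e->d', a[:, i], W_heads, X[i]) for i in range(N))

assert np.allclose(out_mha, out_hyper)   # the two views coincide algebraically
```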
1 code implementation • 22 Dec 2023 • Simon Schug, Seijin Kobayashi, Yassir Akram, Maciej Wołczyk, Alexandra Proca, Johannes von Oswald, Razvan Pascanu, João Sacramento, Angelika Steger
This allows us to relate the problem of compositional generalization to that of identification of the underlying modules.
no code implementations • 11 Sep 2023 • Johannes von Oswald, Maximilian Schlegel, Alexander Meulemans, Seijin Kobayashi, Eyvind Niklasson, Nicolas Zucchet, Nino Scherrer, Nolan Miller, Mark Sandler, Blaise Agüera y Arcas, Max Vladymyrov, Razvan Pascanu, João Sacramento
Some autoregressive models exhibit in-context learning capabilities: they are able to learn as an input sequence is processed, without undergoing any parameter changes, and without being explicitly trained to do so.
no code implementations • 4 Sep 2023 • Nicolas Zucchet, Seijin Kobayashi, Yassir Akram, Johannes von Oswald, Maxime Larcher, Angelika Steger, João Sacramento
In particular, we examine RNNs trained to solve simple in-context learning tasks on which Transformers are known to excel and find that gradient descent instills in our RNNs the same attention-based in-context learning algorithm used by Transformers.
1 code implementation • NeurIPS 2023 • Nicolas Zucchet, Robert Meier, Simon Schug, Asier Mujika, João Sacramento
Online learning holds the promise of enabling efficient long-term credit assignment in recurrent neural networks.
1 code implementation • 15 Dec 2022 • Johannes von Oswald, Eyvind Niklasson, Ettore Randazzo, João Sacramento, Alexander Mordvintsev, Andrey Zhmoginov, Max Vladymyrov
We start by providing a simple weight construction that shows the equivalence of data transformations induced by 1) a single linear self-attention layer and by 2) gradient-descent (GD) on a regression loss.
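A minimal numerical sketch of this equivalence (toy dimensions and learning rate assumed, not the authors' code): a single softmax-free linear self-attention layer with hand-constructed projection matrices produces the same query prediction as one step of gradient descent on a linear regression loss over the in-context examples.

```python
# Minimal sketch (not the authors' code): one step of gradient descent on a
# linear regression loss matches a single linear (softmax-free) self-attention
# layer under a hand-crafted weight construction.
import numpy as np

rng = np.random.default_rng(0)
N, d = 8, 4                      # number of in-context examples, input dimension
X = rng.normal(size=(N, d))      # in-context inputs x_i
y = rng.normal(size=(N, 1))      # in-context targets y_i
x_q = rng.normal(size=(d, 1))    # query input
eta = 0.5                        # learning rate (assumed hyperparameter)

# --- One GD step on L(W) = 1/(2N) * sum_i ||W x_i - y_i||^2, starting from W_0 = 0.
delta_W = (eta / N) * (y.T @ X)            # updated weights, shape (1, d)
pred_gd = delta_W @ x_q                    # prediction on the query after one step

# --- Linear self-attention over tokens e_i = (x_i, y_i); the query token is (x_q, 0).
# Keys/queries read out the x-part, values read out the (scaled) y-part.
E = np.concatenate([X, y], axis=1)                  # context tokens, shape (N, d+1)
e_q = np.concatenate([x_q, [[0.0]]], axis=0)        # query token, shape (d+1, 1)
W_K = W_Q = np.concatenate([np.eye(d), np.zeros((d, 1))], axis=1)
W_V = (eta / N) * np.concatenate([np.zeros((1, d)), np.eye(1)], axis=1)

scores = (W_K @ E.T).T @ (W_Q @ e_q)       # unnormalized dot-product scores, (N, 1)
pred_attn = (W_V @ E.T) @ scores           # value-weighted readout at the query

assert np.allclose(pred_gd, pred_attn)     # the two predictions coincide
```

The construction simply routes the inputs through the key/query projections and the scaled targets through the value projection, so the attention readout reproduces the gradient-descent update applied to the query.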
1 code implementation • 4 Jul 2022 • Alexander Meulemans, Nicolas Zucchet, Seijin Kobayashi, Johannes von Oswald, João Sacramento
As special cases, they include models of great current interest in both neuroscience and machine learning, such as deep neural networks, equilibrium recurrent neural networks, deep equilibrium models, or meta-learning.
no code implementations • 6 May 2022 • Nicolas Zucchet, João Sacramento
This paper reviews gradient-based techniques to solve bilevel optimization problems.
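For context, a toy example of one such gradient-based technique (a hypothetical ridge-regression bilevel problem, not taken from the paper): the hypergradient of a validation loss with respect to the inner regularization strength, obtained by implicit differentiation and checked against finite differences.

```python
# Minimal sketch (hypothetical toy problem): hypergradient via implicit
# differentiation. Inner problem: ridge regression in w; outer problem:
# validation loss as a function of the regularization strength lam.
import numpy as np

rng = np.random.default_rng(2)
X_tr, y_tr = rng.normal(size=(20, 5)), rng.normal(size=20)
X_va, y_va = rng.normal(size=(10, 5)), rng.normal(size=10)
lam = 0.3

def inner_solution(lam):
    # w*(lam) = argmin_w 1/2 ||X_tr w - y_tr||^2 + lam/2 ||w||^2
    return np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(5), X_tr.T @ y_tr)

def outer_loss(lam):
    w = inner_solution(lam)
    return 0.5 * np.sum((X_va @ w - y_va) ** 2)

# Implicit function theorem on the inner stationarity condition gives
# dw*/dlam = -(X_tr^T X_tr + lam I)^{-1} w*, hence
# dF/dlam = (X_va w* - y_va)^T X_va dw*/dlam.
w_star = inner_solution(lam)
dw_dlam = -np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(5), w_star)
hypergrad = (X_va @ w_star - y_va) @ X_va @ dw_dlam

# Sanity check against a central finite-difference estimate of dF/dlam.
eps = 1e-5
fd = (outer_loss(lam + eps) - outer_loss(lam - eps)) / (2 * eps)
assert np.isclose(hypergrad, fd, rtol=1e-3)
```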
2 code implementations • 14 Apr 2022 • Alexander Meulemans, Matilde Tristany Farinha, Maria R. Cervera, João Sacramento, Benjamin F. Grewe
Building upon deep feedback control (DFC), a recently proposed credit assignment method, we combine strong feedback influences on neural activity with gradient-based learning and show that this naturally leads to a novel view on neural network optimization.
1 code implementation • NeurIPS 2021 • Johannes von Oswald, Dominic Zhao, Seijin Kobayashi, Simon Schug, Massimo Caccia, Nicolas Zucchet, João Sacramento
We find that patterned sparsity emerges from this process, with the pattern of sparsity varying on a problem-by-problem basis.
3 code implementations • NeurIPS 2021 • Alexander Meulemans, Matilde Tristany Farinha, Javier García Ordóñez, Pau Vilimelis Aceituno, João Sacramento, Benjamin F. Grewe
The success of deep learning sparked interest in whether the brain learns by using similar techniques for assigning credit to each synaptic weight for its contribution to the network output.
no code implementations • 27 Apr 2021 • Jakob Jordan, João Sacramento, Willem A. M. Wybo, Mihai A. Petrovici, Walter Senn
We propose a novel, Bayesian view on the dynamics of conductance-based neurons and synapses which suggests that they are naturally equipped to optimally perform information integration.
1 code implementation • 4 Apr 2021 • Nicolas Zucchet, Simon Schug, Johannes von Oswald, Dominic Zhao, João Sacramento
Humans and other animals are capable of improving their learning performance as they solve related tasks from a given problem domain, to the point of being able to learn from extremely limited data.
3 code implementations • NeurIPS 2021 • Christian Henning, Maria R. Cervera, Francesco D'Angelo, Johannes von Oswald, Regina Traber, Benjamin Ehret, Seijin Kobayashi, Benjamin F. Grewe, João Sacramento
We offer a practical deep learning implementation of our framework based on probabilistic task-conditioned hypernetworks, an approach we term posterior meta-replay.
2 code implementations • ICLR 2021 • Johannes von Oswald, Seijin Kobayashi, Alexander Meulemans, Christian Henning, Benjamin F. Grewe, João Sacramento
The largely successful method of training neural networks is to learn their weights using some variant of stochastic gradient descent (SGD).
Ranked #70 on Image Classification on CIFAR-100 (using extra training data)
2 code implementations • NeurIPS 2020 • Alexander Meulemans, Francesco S. Carzaniga, Johan A. K. Suykens, João Sacramento, Benjamin F. Grewe
Here, we analyze target propagation (TP), a popular but not yet fully understood alternative to BP, from the standpoint of mathematical optimization.
8 code implementations • ICLR 2020 • Johannes von Oswald, Christian Henning, Benjamin F. Grewe, João Sacramento
Artificial neural networks suffer from catastrophic forgetting when they are sequentially trained on multiple tasks.
Ranked #4 on Continual Learning on F-CelebA (10 tasks)
no code implementations • NeurIPS 2018 • João Sacramento, Rui Ponte Costa, Yoshua Bengio, Walter Senn
Deep learning has seen remarkable developments over the last years, many of them inspired by neuroscience.
1 code implementation • 30 Dec 2017 • João Sacramento, Rui Ponte Costa, Yoshua Bengio, Walter Senn
Animal behaviour depends on learning to associate sensory stimuli with the desired motor command.