Search Results for author: Nicolas Zucchet

Found 11 papers, 6 papers with code

How do language models learn facts? Dynamics, curricula and hallucinations

no code implementations • 27 Mar 2025 • Nicolas Zucchet, Jörg Bornschein, Stephanie Chan, Andrew Lampinen, Razvan Pascanu, Soham De

Large language models accumulate vast knowledge during pre-training, yet the dynamics governing this acquisition remain poorly understood.

Scheduling

Recurrent neural networks: vanishing and exploding gradients are not the end of the story

1 code implementation • 31 May 2024 • Nicolas Zucchet, Antonio Orvieto

Recurrent neural networks (RNNs) notoriously struggle to learn long-term memories, primarily due to vanishing and exploding gradients.
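The vanishing/exploding-gradient problem mentioned in this snippet can be illustrated with a minimal sketch (not from the paper itself): in a one-dimensional linear recurrence h_t = a · h_{t-1}, the gradient of h_T with respect to h_0 is the product of T identical Jacobians, i.e. a**T, which shrinks to zero for |a| < 1 and blows up for |a| > 1.

```python
# Minimal illustration (assumed toy model, not the paper's setup):
# linear recurrence h_t = a * h_{t-1}, so dh_T/dh_0 = a ** T.
for a in (0.9, 1.1):
    grad = a ** 100  # product of 100 identical Jacobians
    print(f"a = {a}: dh_T/dh_0 over 100 steps = {grad:.3g}")
```

With a = 0.9 the gradient is on the order of 1e-5 (vanishing); with a = 1.1 it exceeds 1e4 (exploding), which is why long-term credit assignment in RNNs is hard.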

State Space Models

Uncovering mesa-optimization algorithms in Transformers

no code implementations • 11 Sep 2023 • Johannes von Oswald, Maximilian Schlegel, Alexander Meulemans, Seijin Kobayashi, Eyvind Niklasson, Nicolas Zucchet, Nino Scherrer, Nolan Miller, Mark Sandler, Blaise Agüera y Arcas, Max Vladymyrov, Razvan Pascanu, João Sacramento

Some autoregressive models exhibit in-context learning capabilities: they are able to learn as an input sequence is processed, without undergoing any parameter changes and without being explicitly trained to do so.

In-Context Learning • Language Modelling

Gated recurrent neural networks discover attention

no code implementations • 4 Sep 2023 • Nicolas Zucchet, Seijin Kobayashi, Yassir Akram, Johannes von Oswald, Maxime Larcher, Angelika Steger, João Sacramento

In particular, we examine RNNs trained to solve simple in-context learning tasks on which Transformers are known to excel, and find that gradient descent instills in our RNNs the same attention-based in-context learning algorithm used by Transformers.
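The connection between recurrence and attention hinted at here can be sketched with a standard identity (a generic illustration, not the paper's construction): unnormalised linear self-attention over past tokens can be computed exactly by a recurrence that accumulates outer products of values and keys into a state matrix, which is the kind of computation a gated RNN can represent.

```python
import numpy as np

# Generic sketch: linear self-attention computed two ways.
# All names (q, k, v, S) are illustrative, not from the paper.
rng = np.random.default_rng(0)
T, d = 5, 3
q, k, v = (rng.standard_normal((T, d)) for _ in range(3))

# Attention form: out_t = sum_{i<=t} (k_i . q_t) v_i
attn = np.stack(
    [sum((k[i] @ q[t]) * v[i] for i in range(t + 1)) for t in range(T)]
)

# Recurrent form: S_t = S_{t-1} + v_t k_t^T, out_t = S_t q_t
S = np.zeros((d, d))
rec = []
for t in range(T):
    S = S + np.outer(v[t], k[t])  # accumulate value-key outer products
    rec.append(S @ q[t])
rec = np.stack(rec)

print(np.allclose(attn, rec))  # the two forms agree exactly
```

The recurrent form processes the sequence token by token with a fixed-size state, which is why an RNN trained on such tasks can, in principle, implement the same algorithm as attention.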

In-Context Learning

Online learning of long-range dependencies

1 code implementation • NeurIPS 2023 • Nicolas Zucchet, Robert Meier, Simon Schug, Asier Mujika, João Sacramento

Online learning holds the promise of enabling efficient long-term credit assignment in recurrent neural networks.

Random initialisations performing above chance and how to find them

1 code implementation • 15 Sep 2022 • Frederik Benzing, Simon Schug, Robert Meier, Johannes von Oswald, Yassir Akram, Nicolas Zucchet, Laurence Aitchison, Angelika Steger

Neural networks trained with stochastic gradient descent (SGD) starting from different random initialisations typically find functionally very similar solutions, raising the question of whether there are meaningful differences between different SGD solutions.

The least-control principle for local learning at equilibrium

1 code implementation • 4 Jul 2022 • Alexander Meulemans, Nicolas Zucchet, Seijin Kobayashi, Johannes von Oswald, João Sacramento

As special cases, they include models of great current interest in both neuroscience and machine learning, such as deep neural networks, equilibrium recurrent neural networks, deep equilibrium models, and meta-learning.

BIG-bench Machine Learning • Meta-Learning

A contrastive rule for meta-learning

1 code implementation • 4 Apr 2021 • Nicolas Zucchet, Simon Schug, Johannes von Oswald, Dominic Zhao, João Sacramento

Humans and other animals are capable of improving their learning performance as they solve related tasks from a given problem domain, to the point of being able to learn from extremely limited data.

Meta-Learning
