Search Results for author: Jay McClelland

Found 2 papers, 1 paper with code

Data Distributional Properties Drive Emergent In-Context Learning in Transformers

4 code implementations • 22 Apr 2022 • Stephanie C. Y. Chan, Adam Santoro, Andrew K. Lampinen, Jane X. Wang, Aaditya Singh, Pierre H. Richemond, Jay McClelland, Felix Hill

In further experiments, we found that naturalistic data distributions were only able to elicit in-context learning in transformers, and not in recurrent models.

Few-Shot Learning • In-Context Learning

Continual Learning using the SHDL Framework with Skewed Replay Distributions

no code implementations • 25 Sep 2019 • Amarjot Singh, Jay McClelland

Continual learning has been a long-standing challenge for neural networks, as the repeated acquisition of information from non-uniform data distributions generally leads to catastrophic forgetting or interference.

Continual Learning
