1 code implementation • 29 Nov 2023 • Drew A. Hudson, Daniel Zoran, Mateusz Malinowski, Andrew K. Lampinen, Andrew Jaegle, James L. McClelland, Loic Matthey, Felix Hill, Alexander Lerchner
We introduce SODA, a self-supervised diffusion model designed for representation learning.
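For intuition, here is a minimal sketch of the general pattern the abstract suggests: an encoder maps an input view to a compact latent, and a denoising diffusion decoder conditioned on that latent reconstructs a target view. All modules, shapes, and the noise schedule below are toy stand-ins for illustration, not SODA's actual architecture.

```python
import torch
import torch.nn as nn

# Toy stand-ins: a real model would use a conv encoder and a UNet denoiser.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))
denoiser = nn.Linear(3 * 32 * 32 + 128 + 1, 3 * 32 * 32)

x_src = torch.randn(4, 3, 32, 32)            # source view
x_tgt = torch.randn(4, 3, 32, 32)            # target view to reconstruct
t = torch.rand(4, 1)                         # diffusion "time"
noise = torch.randn_like(x_tgt)
a = (1 - t).sqrt().view(4, 1, 1, 1)
b = t.sqrt().view(4, 1, 1, 1)
noisy = a * x_tgt + b * noise                # corrupt the target view

z = encoder(x_src)                           # the learned representation
inp = torch.cat([noisy.flatten(1), z, t], dim=1)
pred_noise = denoiser(inp).view_as(x_tgt)
loss = ((pred_noise - noise) ** 2).mean()    # standard denoising objective
loss.backward()                              # trains the encoder end to end
```

The key design point is that the only path from source view to reconstruction runs through the latent `z`, so the denoising objective forces `z` to carry useful information about the image.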
1 code implementation • 23 Oct 2023 • Yutaro Yamada, Yihan Bao, Andrew K. Lampinen, Jungo Kasai, Ilker Yildirim
Large language models (LLMs) show remarkable capabilities across a variety of tasks.
no code implementations • 18 Oct 2023 • Ilia Sucholutsky, Lukas Muttenthaler, Adrian Weller, Andi Peng, Andreea Bobu, Been Kim, Bradley C. Love, Erin Grant, Iris Groen, Jascha Achterberg, Joshua B. Tenenbaum, Katherine M. Collins, Katherine L. Hermann, Kerem Oktar, Klaus Greff, Martin N. Hebart, Nori Jacoby, Qiuyi Zhang, Raja Marjieh, Robert Geirhos, Sherol Chen, Simon Kornblith, Sunayana Rane, Talia Konkle, Thomas P. O'Connell, Thomas Unterthiner, Andrew K. Lampinen, Klaus-Robert Müller, Mariya Toneva, Thomas L. Griffiths
Finally, we lay out open problems in representational alignment where progress can benefit all three of these fields.
no code implementations • 11 Oct 2022 • Stephanie C. Y. Chan, Ishita Dasgupta, Junkyung Kim, Dharshan Kumaran, Andrew K. Lampinen, Felix Hill
In transformers trained on controlled stimuli, we find that generalization from weights is more rule-based whereas generalization from context is largely exemplar-based.
no code implementations • 14 Jul 2022 • Ishita Dasgupta, Andrew K. Lampinen, Stephanie C. Y. Chan, Hannah R. Sheahan, Antonia Creswell, Dharshan Kumaran, James L. McClelland, Felix Hill
We evaluate state-of-the-art large language models, as well as humans, and find that the language models reflect many of the same patterns observed in humans across these tasks: like humans, models answer more accurately when the semantic content of a task supports the logical inferences.
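A minimal sketch of the kind of manipulation involved: hold the logical form of a syllogism fixed while varying whether its content is believable. The template and items below are illustrative placeholders, not the paper's stimuli.

```python
# Hypothetical stimuli: identical (valid) logical form, different content.
TEMPLATE = (
    "Premise 1: All {a} are {b}.\n"
    "Premise 2: All {b} are {c}.\n"
    "Conclusion: All {a} are {c}.\n"
    "Is the conclusion valid? Answer yes or no."
)

consistent = TEMPLATE.format(a="dogs", b="mammals", c="animals")
violating = TEMPLATE.format(a="stones", b="mammals", c="animals")

for prompt in (consistent, violating):
    print(prompt, end="\n\n")
    # Query a model (or person) here; the reported pattern is higher
    # accuracy when the content supports the logical inference.
```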
no code implementations • 16 Jun 2022 • Aaditya K. Singh, David Ding, Andrew Saxe, Felix Hill, Andrew K. Lampinen
Through controlled experiments, we show that training a speaker with our method on two listeners that perceive differently allows the speaker to adapt to the listeners' idiosyncrasies.
2 code implementations • 22 Apr 2022 • Stephanie C. Y. Chan, Adam Santoro, Andrew K. Lampinen, Jane X. Wang, Aaditya Singh, Pierre H. Richemond, Jay McClelland, Felix Hill
In further experiments, we found that naturalistic data distributions elicited in-context learning only in transformers, not in recurrent models.
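As a rough illustration of what "naturalistic" means here, the sketch below samples training sequences whose classes are skewed (Zipfian) across the corpus and bursty within a context window; the parameters are illustrative assumptions, not the paper's configuration.

```python
import random

def zipf_weights(n_classes, alpha=1.0):
    # Skewed marginal over classes: rank r gets weight 1 / r**alpha.
    return [1.0 / rank ** alpha for rank in range(1, n_classes + 1)]

def sample_bursty_sequence(n_classes=1000, seq_len=8, n_bursty=3, rng=random):
    # Draw a few classes (Zipf-weighted), then fill the window from that
    # small set so each class recurs ("bursts") within the context.
    chosen = rng.choices(range(n_classes),
                         weights=zipf_weights(n_classes), k=n_bursty)
    return [rng.choice(chosen) for _ in range(seq_len)]

print(sample_bursty_sequence())  # e.g. [2, 0, 2, 41, 0, 2, 41, 0]
```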
no code implementations • 8 Apr 2022 • Allison C. Tam, Neil C. Rabinowitz, Andrew K. Lampinen, Nicholas A. Roy, Stephanie C. Y. Chan, DJ Strouse, Jane X. Wang, Andrea Banino, Felix Hill
We show that these pretrained representations drive meaningful, task-relevant exploration and improve performance on 3D simulated environments.
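One simple way to operationalise "representations driving exploration" is a novelty bonus computed in a frozen pretrained embedding space; the sketch below is a generic version of that idea under that assumption, not the paper's agent.

```python
import numpy as np

def novelty_bonus(embedding, memory):
    # Intrinsic reward: distance to the nearest previously seen embedding,
    # so observations that look new in representation space get explored.
    if not memory:
        return 1.0
    return min(float(np.linalg.norm(embedding - m)) for m in memory)

memory = []
for obs_embedding in np.random.randn(5, 16):  # stand-in pretrained embeddings
    print(round(novelty_bonus(obs_embedding, memory), 3))
    memory.append(obs_embedding)
```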
no code implementations • 5 Apr 2022 • Andrew K. Lampinen, Ishita Dasgupta, Stephanie C. Y. Chan, Kory Matthewson, Michael Henry Tessler, Antonia Creswell, James L. McClelland, Jane X. Wang, Felix Hill
In summary, explanations can support the in-context learning of large LMs on challenging tasks.
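The mechanism is easy to picture: the same few-shot prompt, with or without an explanation appended to each answer. The task and wording below are illustrative placeholders rather than the paper's benchmarks.

```python
# Hypothetical few-shot examples with explanations attached to the answers.
examples = [
    {"q": "2, 4, 8, 16, ?", "a": "32",
     "why": "each number is double the previous one"},
    {"q": "1, 1, 2, 3, 5, ?", "a": "8",
     "why": "each number is the sum of the two before it"},
]

def build_prompt(examples, query, with_explanations=True):
    parts = []
    for ex in examples:
        line = f"Q: {ex['q']}\nA: {ex['a']}"
        if with_explanations:
            line += f", because {ex['why']}"
        parts.append(line)
    parts.append(f"Q: {query}\nA:")
    return "\n\n".join(parts)

print(build_prompt(examples, "3, 6, 12, 24, ?"))
```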
1 code implementation • 15 Mar 2022 • Stephanie C. Y. Chan, Andrew K. Lampinen, Pierre H. Richemond, Felix Hill
As humans and animals learn in the natural world, they encounter distributions of entities, situations and events that are far from uniform.
1 code implementation • 7 Dec 2021 • Andrew K. Lampinen, Nicholas A. Roy, Ishita Dasgupta, Stephanie C. Y. Chan, Allison C. Tam, James L. McClelland, Chen Yan, Adam Santoro, Neil C. Rabinowitz, Jane X. Wang, Felix Hill
Inferring the abstract relational and causal structure of the world is a major challenge for reinforcement-learning (RL) agents.
no code implementations • NeurIPS 2020 • Katherine L. Hermann, Andrew K. Lampinen
Answers to these questions are important for understanding the basis of models' decisions, as well as for building models that learn versatile, adaptable representations useful beyond the original training task.
3 code implementations • 8 May 2020 • Andrew K. Lampinen, James L. McClelland
We demonstrate the effectiveness of this framework across a wide variety of tasks and computational paradigms, ranging from regression to image classification to reinforcement learning.
no code implementations • 27 Sep 2019 • Sebastien Racaniere, Andrew K. Lampinen, Adam Santoro, David P. Reichert, Vlad Firoiu, Timothy P. Lillicrap
We demonstrate the success of our approach in rich but sparsely rewarding 2D and 3D environments, where an agent must achieve a single goal selected from a set of possible goals that varies between episodes, and we identify challenges for future work.
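For concreteness, the episode structure described can be sketched as follows; the goal set, policy, and reward are toy stand-ins, not the paper's environments.

```python
import random

GOALS = ["red_block", "green_block", "blue_block"]  # illustrative goal set

def run_episode(policy, max_steps=50, rng=random):
    goal = rng.choice(GOALS)            # a single goal, varying per episode
    for step in range(max_steps):
        if policy(goal, step) == goal:  # sparse reward: success or nothing
            return 1.0
    return 0.0

random_policy = lambda goal, step: random.choice(GOALS + [None] * 7)
print(run_episode(random_policy))
```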
2 code implementations • 23 May 2019 • Andrew K. Lampinen, James L. McClelland
How can deep learning systems flexibly reuse their knowledge?
no code implementations • ICLR 2019 • Andrew K. Lampinen, Surya Ganguli
However, we lack analytic theories that can quantitatively predict how the degree of knowledge transfer depends on the relationship between the tasks.
no code implementations • 27 Oct 2017 • Andrew K. Lampinen, James L. McClelland
Standard deep learning systems require thousands or millions of examples to learn a concept, and cannot integrate new concepts easily.