Search Results for author: Erin Grant

Found 17 papers, 7 papers with code

Bayes in the age of intelligent machines

no code implementations 16 Nov 2023 Thomas L. Griffiths, Jian-Qiao Zhu, Erin Grant, R. Thomas McCoy

The success of methods based on artificial neural networks in creating intelligent machines seems like it might pose a challenge to explanations of human cognition in terms of Bayesian inference.

Bayesian Inference

The Transient Nature of Emergent In-Context Learning in Transformers

2 code implementations NeurIPS 2023 Aaditya K. Singh, Stephanie C. Y. Chan, Ted Moskovitz, Erin Grant, Andrew M. Saxe, Felix Hill

The transient nature of ICL is observed in transformers across a range of model sizes and datasets, raising the question of how much to "overtrain" transformers when seeking compact, cheaper-to-run models.

Bayesian Inference, In-Context Learning +1

Statistical physics, Bayesian inference and neural information processing

no code implementations 29 Sep 2023 Erin Grant, Sandra Nestler, Berfin Şimşek, Sara Solla

Lecture notes from the course given by Professor Sara A. Solla at the Les Houches summer school on "Statistical physics of Machine Learning".

Bayesian Inference, Dimensionality Reduction

Gaussian Process Surrogate Models for Neural Networks

no code implementations 11 Aug 2022 Michael Y. Li, Erin Grant, Thomas L. Griffiths

Not being able to understand and predict the behavior of deep learning systems makes it hard to decide what architecture and algorithm to use for a given problem.

Gaussian Processes
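The surrogate-model idea in the entry above can be illustrated with a minimal Gaussian process regressor: fit a GP to a few observed (hyperparameter, performance) pairs and query it at unseen settings. This is a toy sketch, not the paper's method; the RBF kernel, the single log-learning-rate input, and all data values below are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0):
    """Squared-exponential kernel between two sets of 1-D inputs."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_predict(x_train, y_train, x_test, length_scale=1.0, noise=1e-6):
    """Posterior mean and variance of a zero-mean GP regressor."""
    K = rbf_kernel(x_train, x_train, length_scale) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_test, x_train, length_scale)
    mean = K_s @ np.linalg.solve(K, y_train)
    # Prior variance of the RBF kernel is 1, reduced by what the data explains.
    var = 1.0 - np.sum(K_s * np.linalg.solve(K, K_s.T).T, axis=1)
    return mean, var

# Illustrative observations: log learning rate vs. validation accuracy.
log_lr = np.array([-5.0, -4.0, -3.0, -2.0, -1.0])
val_acc = np.array([0.62, 0.71, 0.83, 0.78, 0.55])

mean, var = gp_predict(log_lr, val_acc, np.array([-3.5]))
```

The surrogate's posterior variance indicates where the network's behavior is uncertain, so the same machinery supports deciding which configuration to evaluate next.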

Distinguishing rule- and exemplar-based generalization in learning systems

1 code implementation 8 Oct 2021 Ishita Dasgupta, Erin Grant, Thomas L. Griffiths

Machine learning systems often do not share the same inductive biases as humans and, as a result, extrapolate or generalize in ways that are inconsistent with our expectations.

BIG-bench Machine Learning, Data Augmentation +2

Passive Attention in Artificial Neural Networks Predicts Human Visual Selectivity

1 code implementation NeurIPS 2021 Thomas A. Langlois, H. Charles Zhao, Erin Grant, Ishita Dasgupta, Thomas L. Griffiths, Nori Jacoby

We find that recognition performance in the same ANN models was likewise influenced by masking input images using human visual selectivity maps.

Are Convolutional Neural Networks or Transformers more like human vision?

1 code implementation 15 May 2021 Shikhar Tuli, Ishita Dasgupta, Erin Grant, Thomas L. Griffiths

Our focus is on comparing a suite of standard Convolutional Neural Networks (CNNs) and a recently proposed attention-based network, the Vision Transformer (ViT), which relaxes the translation-invariance constraint of CNNs and therefore represents a model with a weaker set of inductive biases.

BIG-bench Machine Learning, Object Recognition
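The translation-invariance constraint that the entry above attributes to CNNs can be seen in a toy numpy demo: a 1-D circular convolution is translation-equivariant, so shifting the input simply shifts the output. This sketch is illustrative only and is not drawn from the paper; the signal and filter values are arbitrary assumptions.

```python
import numpy as np

def circular_conv(x, w):
    """1-D circular cross-correlation: shifting x circularly shifts the output."""
    n = len(x)
    return np.array([sum(w[k] * x[(i + k) % n] for k in range(len(w)))
                     for i in range(n)])

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 0.0, 0.0, 0.0])
w = np.array([0.25, 0.5, 0.25])  # a small smoothing filter

y = circular_conv(x, w)
y_from_shifted = circular_conv(np.roll(x, 2), w)
# Equivariance: convolving the shifted input equals shifting the output.
assert np.allclose(y_from_shifted, np.roll(y, 2))
```

A self-attention layer, by contrast, has no such constraint built in: without positional encodings it treats input positions as an unordered set, which is one sense in which its inductive biases are weaker.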

Connecting Context-specific Adaptation in Humans to Meta-learning

no code implementations 27 Nov 2020 Rachit Dubey, Erin Grant, Michael Luo, Karthik Narasimhan, Thomas Griffiths

This work connects the context-sensitive nature of cognitive control to a method for meta-learning with context-conditioned adaptation.


Universal linguistic inductive biases via meta-learning

1 code implementation29 Jun 2020 R. Thomas McCoy, Erin Grant, Paul Smolensky, Thomas L. Griffiths, Tal Linzen

To facilitate computational modeling aimed at addressing this question, we introduce a framework for giving particular linguistic inductive biases to a neural network model; such a model can then be used to empirically explore the effects of those inductive biases.

Language Acquisition, Meta-Learning

Modulating transfer between tasks in gradient-based meta-learning

no code implementations ICLR 2019 Erin Grant, Ghassen Jerfel, Katherine Heller, Thomas L. Griffiths

Learning-to-learn, or meta-learning, leverages data-driven inductive bias to increase the efficiency of learning on a novel task.

Inductive Bias, Meta-Learning

Exploiting Attention to Reveal Shortcomings in Memory Models

no code implementations WS 2018 Kaylee Burns, Aida Nematzadeh, Erin Grant, Alison Gopnik, Tom Griffiths

The decision-making processes of deep networks are difficult to understand, and while their accuracy often improves with increased architectural complexity, so too does their opacity.

BIG-bench Machine Learning, Decision Making +2

Evaluating Theory of Mind in Question Answering

2 code implementations EMNLP 2018 Aida Nematzadeh, Kaylee Burns, Erin Grant, Alison Gopnik, Thomas L. Griffiths

We propose a new dataset for evaluating question answering models with respect to their capacity to reason about beliefs.

Question Answering

Recasting Gradient-Based Meta-Learning as Hierarchical Bayes

no code implementations ICLR 2018 Erin Grant, Chelsea Finn, Sergey Levine, Trevor Darrell, Thomas Griffiths

Meta-learning allows an intelligent agent to leverage prior learning episodes as a basis for quickly improving performance on a novel task.
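The "leverage prior learning episodes" idea in the entry above can be sketched with a first-order, MAML-style loop on toy 1-D regression tasks: adapt a shared initialization to each sampled task with one gradient step, then nudge the initialization using the post-adaptation gradient. This is a simplified illustration, not the paper's hierarchical-Bayes construction; the linear task family, step sizes, and the reuse of one batch for both adaptation and meta-update are all assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def task_loss_grad(theta, a, x):
    """Squared error and gradient for targets y = a * x under model y_hat = theta * x."""
    err = (theta - a) * x
    return np.mean(err ** 2), np.mean(2 * err * x)

def meta_train(theta=-1.0, inner_lr=0.1, outer_lr=0.05, steps=200):
    """First-order MAML-style loop: adapt to each task, then update the shared init."""
    for _ in range(steps):
        a = rng.uniform(0.5, 1.5)              # sample a task (a slope to regress)
        x = rng.normal(size=10)                # data for this learning episode
        _, g = task_loss_grad(theta, a, x)
        adapted = theta - inner_lr * g         # one inner-loop adaptation step
        _, g_adapted = task_loss_grad(adapted, a, x)
        theta = theta - outer_lr * g_adapted   # first-order outer update
    return theta

theta_star = meta_train()  # drifts toward the centre of the task family
```

The learned initialization ends up close to the mean task, so a single gradient step from it solves any new task in the family faster than a step from an arbitrary starting point, which is the efficiency gain meta-learning is after.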


The Interaction of Memory and Attention in Novel Word Generalization: A Computational Investigation

1 code implementation 18 Feb 2016 Erin Grant, Aida Nematzadeh, Suzanne Stevenson

People exhibit a tendency to generalize a novel noun to the basic-level in a hierarchical taxonomy -- a cognitively salient category such as "dog" -- with the degree of generalization depending on the number and type of exemplars.
