Search Results for author: Thomas L. Griffiths

Found 123 papers, 31 papers with code

Improving machine classification using human uncertainty measurements

no code implementations ICLR 2019 Ruairidh M. Battleday, Joshua C. Peterson, Thomas L. Griffiths

As deep CNN classifier performance using ground-truth labels has begun to asymptote at near-perfect levels, a key aim for the field is to extend training paradigms to capture further useful structure in natural image data and improve model robustness and generalization.

Classification Data Augmentation +1

Embodied LLM Agents Learn to Cooperate in Organized Teams

no code implementations 19 Mar 2024 Xudong Guo, Kaixuan Huang, Jiale Liu, Wenhui Fan, Natalia Vélez, Qingyun Wu, Huazheng Wang, Thomas L. Griffiths, Mengdi Wang

Large Language Models (LLMs) have emerged as integral tools for reasoning, planning, and decision-making, drawing upon their extensive world knowledge and proficiency in language-related tasks.

Decision Making World Knowledge

Program-Based Strategy Induction for Reinforcement Learning

no code implementations 26 Feb 2024 Carlos G. Correa, Thomas L. Griffiths, Nathaniel D. Daw

Typical models of learning assume incremental estimation of continuously-varying decision variables like expected rewards.

Incremental Learning Program induction +1
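
For context on the incremental-estimation baseline mentioned in the excerpt above, the generic delta-rule update for an expected-reward estimate $\hat{V}$ with learning rate $\alpha$ is (notation ours, not the paper's):

$$\hat{V}_{t+1} = \hat{V}_t + \alpha\,(r_t - \hat{V}_t)$$

The paper contrasts this kind of continuous updating with the induction of discrete, program-like strategies indicated by its title.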

How do Large Language Models Navigate Conflicts between Honesty and Helpfulness?

no code implementations 11 Feb 2024 Ryan Liu, Theodore R. Sumers, Ishita Dasgupta, Thomas L. Griffiths

In day-to-day communication, people often approximate the truth - for example, rounding the time or omitting details - in order to be maximally helpful to the listener.

Navigate

A Rational Analysis of the Speech-to-Song Illusion

no code implementations 10 Feb 2024 Raja Marjieh, Pol van Rijn, Ilia Sucholutsky, Harin Lee, Thomas L. Griffiths, Nori Jacoby

Here we provide a formal account of this phenomenon, by recasting it as a statistical inference whereby a rational agent attempts to decide whether a sequence of utterances is more likely to have been produced in a song or speech.

Sentence
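
One way to make the "statistical inference" framing concrete (a generic sketch, not the paper's exact model): the agent compares the posterior odds of the two generative hypotheses for the utterance sequence $u_{1:n}$,

$$\frac{P(\mathrm{song} \mid u_{1:n})}{P(\mathrm{speech} \mid u_{1:n})} = \frac{P(u_{1:n} \mid \mathrm{song})\,P(\mathrm{song})}{P(u_{1:n} \mid \mathrm{speech})\,P(\mathrm{speech})}$$

and perceives song when the odds exceed one. On an account of this kind, repeating a phrase can shift the likelihood ratio toward the song hypothesis, consistent with how the illusion is typically elicited.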

Human-Like Geometric Abstraction in Large Pre-trained Neural Networks

no code implementations 6 Feb 2024 Declan Campbell, Sreejan Kumar, Tyler Giallanza, Thomas L. Griffiths, Jonathan D. Cohen

Humans possess a remarkable capacity to recognize and manipulate abstract structure, which is especially apparent in the domain of geometry.

Measuring Implicit Bias in Explicitly Unbiased Large Language Models

no code implementations 6 Feb 2024 Xuechunzi Bai, Angelina Wang, Ilia Sucholutsky, Thomas L. Griffiths

Large language models (LLMs) can pass explicit bias tests but still harbor implicit biases, similar to humans who endorse egalitarian beliefs yet exhibit subtle biases.

Decision Making

Comparing Abstraction in Humans and Large Language Models Using Multimodal Serial Reproduction

no code implementations 6 Feb 2024 Sreejan Kumar, Raja Marjieh, Byron Zhang, Declan Campbell, Michael Y. Hu, Umang Bhatt, Brenden Lake, Thomas L. Griffiths

To investigate the effect of language on the formation of abstractions, we implement a novel multimodal serial reproduction framework by asking people who receive a visual stimulus to reproduce it in a linguistic format, and vice versa.

Incoherent Probability Judgments in Large Language Models

no code implementations 30 Jan 2024 Jian-Qiao Zhu, Thomas L. Griffiths

Autoregressive Large Language Models (LLMs) trained for next-word prediction have demonstrated remarkable proficiency at producing coherent text.

Bayesian Inference

Recovering Mental Representations from Large Language Models with Markov Chain Monte Carlo

no code implementations 30 Jan 2024 Jian-Qiao Zhu, Haijiang Yan, Thomas L. Griffiths

Simulating sampling algorithms with people has proven a useful method for efficiently probing and understanding their mental representations.

Bayesian Inference
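
A rough sketch of the "sampling algorithms with agents" idea: an agent's binary choices between the current state and a proposal serve as the acceptance step of a Markov chain, so the chain's stationary distribution reflects the agent's subjective probabilities. The functions below (`propose`, `agent_chooses_proposal`) are hypothetical stand-ins; with an LLM they would be implemented as prompted two-alternative choices, and the paper's actual protocol may differ.

```python
import math
import random

def mcmc_with_agent(propose, agent_chooses_proposal, x0, n_steps=2000):
    """Run a Metropolis-style chain whose accept/reject decision is an
    agent's choice between the current state and a proposed alternative."""
    x, samples = x0, []
    for _ in range(n_steps):
        x_new = propose(x)                    # symmetric random proposal
        if agent_chooses_proposal(x, x_new):  # 2AFC judgment (human or LLM)
            x = x_new
        samples.append(x)
    return samples

# Toy demo with a synthetic "agent" whose choices follow a Barker/Luce rule
# for an unnormalized Gaussian preference centered at 2.
target = lambda x: math.exp(-0.5 * (x - 2.0) ** 2)
propose = lambda x: x + random.gauss(0.0, 1.0)
choose = lambda x, x_new: random.random() < target(x_new) / (target(x) + target(x_new))

samples = mcmc_with_agent(propose, choose, x0=0.0)
print(sum(samples[500:]) / len(samples[500:]))  # mean should be near 2
```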

Concept Alignment

no code implementations 9 Jan 2024 Sunayana Rane, Polyphony J. Bruna, Ilia Sucholutsky, Christopher Kello, Thomas L. Griffiths

Discussion of AI alignment (alignment between humans and AI systems) has focused on value alignment, broadly referring to creating AI systems that share human values.

Concept Alignment Philosophy

Learning Human-like Representations to Enable Learning Human Values

no code implementations 21 Dec 2023 Andrea Wynn, Ilia Sucholutsky, Thomas L. Griffiths

We propose that this kind of representational alignment between machine learning (ML) models and humans can also support value alignment, allowing ML systems to conform to human values and societal norms.

Ethics Few-Shot Learning +1

Deep de Finetti: Recovering Topic Distributions from Large Language Models

no code implementations 21 Dec 2023 Liyi Zhang, R. Thomas McCoy, Theodore R. Sumers, Jian-Qiao Zhu, Thomas L. Griffiths

Large language models (LLMs) can produce long, coherent passages of text, suggesting that LLMs, although trained on next-word prediction, must represent the latent structure that characterizes a document.

Bayesian Inference
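
For reference, de Finetti's representation theorem (which the title alludes to) says that an exchangeable sequence of observations can be written as independent draws conditioned on a latent variable, here suggestively a document-level topic variable $\theta$:

$$p(w_1, \dots, w_n) = \int \left[\prod_{i=1}^{n} p(w_i \mid \theta)\right] p(\theta)\, d\theta$$

The notation here is generic rather than the paper's; the intuition is that a model that predicts such data well is pushed toward implicitly representing the posterior over $\theta$.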

Exploring the hierarchical structure of human plans via program generation

2 code implementations 30 Nov 2023 Carlos G. Correa, Sophia Sanborn, Mark K. Ho, Frederick Callaway, Nathaniel D. Daw, Thomas L. Griffiths

We find that humans are sensitive to both metrics, but that both accounts fail to predict a qualitative feature of human-created programs, namely that people prefer programs with reuse over and above the predictions of MDL.

Implicit Maximum a Posteriori Filtering via Adaptive Optimization

1 code implementation 17 Nov 2023 Gianluca M. Bencomo, Jake C. Snell, Thomas L. Griffiths

Bayesian filtering approximates the true underlying behavior of a time-varying system by inverting an explicit generative model to convert noisy measurements into state estimates.
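
For readers who want a concrete picture of the explicit-model Bayesian filtering being contrasted here, below is a textbook linear-Gaussian (Kalman) filter in NumPy. It is standard background rather than the paper's implicit-MAP method, whose point is to avoid committing to this explicit generative model.

```python
import numpy as np

def kalman_filter(measurements, A, C, Q, R, mu0, P0):
    """Linear-Gaussian filtering: x_t = A x_{t-1} + w, y_t = C x_t + v,
    with w ~ N(0, Q) and v ~ N(0, R). Returns the filtered state means."""
    mu, P = mu0, P0
    means = []
    for y in measurements:
        # Predict: push the previous estimate through the dynamics model.
        mu_pred = A @ mu
        P_pred = A @ P @ A.T + Q
        # Update: correct the prediction with the new noisy measurement.
        S = C @ P_pred @ C.T + R
        K = P_pred @ C.T @ np.linalg.inv(S)
        mu = mu_pred + K @ (y - C @ mu_pred)
        P = (np.eye(P_pred.shape[0]) - K @ C) @ P_pred
        means.append(mu)
    return np.array(means)
```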

Bayes in the age of intelligent machines

no code implementations 16 Nov 2023 Thomas L. Griffiths, Jian-Qiao Zhu, Erin Grant, R. Thomas McCoy

The success of methods based on artificial neural networks in creating intelligent machines seems like it might pose a challenge to explanations of human cognition in terms of Bayesian inference.

Bayesian Inference

Concept Alignment as a Prerequisite for Value Alignment

no code implementations 30 Oct 2023 Sunayana Rane, Mark Ho, Ilia Sucholutsky, Thomas L. Griffiths

Value alignment is essential for building AI systems that can safely and reliably interact with people.

Concept Alignment

Structurally guided task decomposition in spatial navigation tasks

no code implementations 3 Oct 2023 Ruiqi He, Carlos G. Correa, Thomas L. Griffiths, Mark K. Ho

How are people able to plan so efficiently despite limited cognitive resources?

Dimensions of Disagreement: Unpacking Divergence and Misalignment in Cognitive Science and Artificial Intelligence

no code implementations 3 Oct 2023 Kerem Oktar, Ilia Sucholutsky, Tania Lombrozo, Thomas L. Griffiths

The increasing prevalence of artificial agents creates a correspondingly increasing need to manage disagreements between humans and artificial agents, as well as between artificial agents themselves.

Relational Constraints On Neural Networks Reproduce Human Biases towards Abstract Geometric Regularity

no code implementations 29 Sep 2023 Declan Campbell, Sreejan Kumar, Tyler Giallanza, Jonathan D. Cohen, Thomas L. Griffiths

Uniquely among primates, humans possess a remarkable capacity to recognize and manipulate abstract structure in the service of task goals across a broad range of behaviors.

Embers of Autoregression: Understanding Large Language Models Through the Problem They are Trained to Solve

no code implementations 24 Sep 2023 R. Thomas McCoy, Shunyu Yao, Dan Friedman, Matthew Hardy, Thomas L. Griffiths

This approach - which we call the teleological approach - leads us to identify three factors that we hypothesize will influence LLM accuracy: the probability of the task to be performed, the probability of the target output, and the probability of the provided input.

Cognitive Architectures for Language Agents

2 code implementations 5 Sep 2023 Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths

Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents.

Decision Making

The Universal Law of Generalization Holds for Naturalistic Stimuli

no code implementations 14 Jun 2023 Raja Marjieh, Nori Jacoby, Joshua C. Peterson, Thomas L. Griffiths

Shepard's universal law of generalization is a remarkable hypothesis about how intelligent organisms should perceive similarity.
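
For reference, the law states that the probability of generalizing a response from stimulus $x$ to stimulus $y$ falls off approximately exponentially with their distance in an appropriate psychological space,

$$g(x, y) \approx e^{-\lambda\, d(x, y)}$$

where $d$ is psychological distance and $\lambda$ a sensitivity parameter (generic notation, not the paper's).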

Modeling rapid language learning by distilling Bayesian priors into artificial neural networks

1 code implementation 24 May 2023 R. Thomas McCoy, Thomas L. Griffiths

We show that learning from limited naturalistic data is possible with an approach that combines the strong inductive biases of a Bayesian model with the flexible representations of a neural network.

Tree of Thoughts: Deliberate Problem Solving with Large Language Models

3 code implementations NeurIPS 2023 Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, Karthik Narasimhan

Language models are increasingly being deployed for general problem solving across a wide range of tasks, but are still confined to token-level, left-to-right decision-making processes during inference.

Decision Making Language Modelling
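
A schematic sketch of the deliberate search that Tree of Thoughts adds on top of left-to-right decoding: at each step the model proposes several candidate intermediate "thoughts", scores them, and keeps only the most promising ones (breadth-first search with a small beam). `propose_thoughts` and `score_thought` are placeholders for LLM calls; see the paper's linked code implementations for the real method.

```python
def tree_of_thoughts_bfs(problem, propose_thoughts, score_thought,
                         depth=3, breadth=5, keep=2):
    """Breadth-first search over partial solutions ('thoughts').

    propose_thoughts(problem, state) -> candidate next partial solutions
    score_thought(problem, state)    -> heuristic value of a partial solution
    Both would be LLM calls in practice; they are hypothetical here.
    """
    frontier = [""]  # start from an empty partial solution
    for _ in range(depth):
        candidates = []
        for state in frontier:
            candidates.extend(propose_thoughts(problem, state)[:breadth])
        # Deliberation: keep only the highest-scoring partial solutions.
        candidates.sort(key=lambda s: score_thought(problem, s), reverse=True)
        frontier = candidates[:keep]
    return frontier[0] if frontier else None
```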

Neural Constraint Satisfaction: Hierarchical Abstraction for Combinatorial Generalization in Object Rearrangement

no code implementations 20 Mar 2023 Michael Chang, Alyssa L. Dayan, Franziska Meier, Thomas L. Griffiths, Sergey Levine, Amy Zhang

Object rearrangement is a challenge for embodied agents because solving these tasks requires generalizing across a combinatorially large set of configurations of entities and their locations.

Superhuman Artificial Intelligence Can Improve Human Decision Making by Increasing Novelty

no code implementations 13 Mar 2023 Minkyu Shin, Jin Kim, Bas van Opheusden, Thomas L. Griffiths

We then examine human players' strategies across time and find that novel decisions (i.e., previously unobserved moves) occurred more frequently and became associated with higher decision quality after the advent of superhuman AI.

counterfactual Decision Making

Large language models predict human sensory judgments across six modalities

no code implementations 2 Feb 2023 Raja Marjieh, Ilia Sucholutsky, Pol van Rijn, Nori Jacoby, Thomas L. Griffiths

Determining the extent to which the perceptual world can be recovered from language is a longstanding problem in philosophy and cognitive science.

Philosophy

Humans decompose tasks by trading off utility and computational cost

no code implementations 7 Nov 2022 Carlos G. Correa, Mark K. Ho, Frederick Callaway, Nathaniel D. Daw, Thomas L. Griffiths

Human behavior emerges from planning over elaborate decompositions of tasks into goals, subgoals, and low-level actions.

Analyzing Diffusion as Serial Reproduction

no code implementations 29 Sep 2022 Raja Marjieh, Ilia Sucholutsky, Thomas A. Langlois, Nori Jacoby, Thomas L. Griffiths

Diffusion models are a class of generative models that learn to synthesize samples by inverting a diffusion process that gradually maps data into noise.

Scheduling
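
For reference, the forward "data to noise" process the excerpt refers to is usually a Markov chain of small Gaussian corruptions, and the generative model is trained to invert it step by step (standard DDPM notation, not necessarily the paper's):

$$q(x_t \mid x_{t-1}) = \mathcal{N}\big(x_t;\ \sqrt{1-\beta_t}\,x_{t-1},\ \beta_t I\big), \qquad p_\theta(x_{t-1} \mid x_t) = \mathcal{N}\big(x_{t-1};\ \mu_\theta(x_t, t),\ \Sigma_\theta(x_t, t)\big)$$

with a small noise schedule $\beta_t$. The paper's analysis views this repeated corrupt-then-reconstruct loop through the lens of serial reproduction chains from cognitive science.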

Bias amplification in experimental social networks is reduced by resampling

no code implementations 15 Aug 2022 Mathew D. Hardy, Bill D. Thompson, P. M. Krafft, Thomas L. Griffiths

Large-scale social networks are thought to contribute to polarization by amplifying people's biases.

Decision Making

Gaussian Process Surrogate Models for Neural Networks

no code implementations 11 Aug 2022 Michael Y. Li, Erin Grant, Thomas L. Griffiths

Not being able to understand and predict the behavior of deep learning systems makes it hard to decide what architecture and algorithm to use for a given problem.

Gaussian Processes

Predicting Word Learning in Children from the Performance of Computer Vision Systems

no code implementations 7 Jul 2022 Sunayana Rane, Mira L. Nencheva, Zeyu Wang, Casey Lew-Williams, Olga Russakovsky, Thomas L. Griffiths

The performance of the computer vision systems is correlated with human judgments of the concreteness of words, which are in turn a predictor of children's word learning, suggesting that these models are capturing the relationship between words and visual phenomena.

Image Captioning

Object Representations as Fixed Points: Training Iterative Refinement Algorithms with Implicit Differentiation

1 code implementation 2 Jul 2022 Michael Chang, Thomas L. Griffiths, Sergey Levine

Iterative refinement -- start with a random guess, then iteratively improve the guess -- is a useful paradigm for representation learning because it offers a way to break symmetries among equally plausible explanations for the data.

Representation Learning
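
A minimal sketch of the fixed-point view of iterative refinement described above: apply a refinement step until the guess stops changing, and treat the converged value as the answer. The paper's additional contribution, differentiating through the fixed point implicitly rather than through the unrolled iterations, is not shown here.

```python
def fixed_point(refine, x0, tol=1e-8, max_iters=100):
    """Iterate x <- refine(x) until convergence; the result x* approximately
    satisfies x* = refine(x*), i.e., it is a fixed point of the refinement."""
    x = x0
    for _ in range(max_iters):
        x_next = refine(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Toy usage: iteratively refining a guess for sqrt(2) (Babylonian update).
print(fixed_point(lambda x: 0.5 * (x + 2.0 / x), x0=1.0))  # ~1.41421356
```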

Words are all you need? Language as an approximation for human similarity judgments

no code implementations 8 Jun 2022 Raja Marjieh, Pol van Rijn, Ilia Sucholutsky, Theodore R. Sumers, Harin Lee, Thomas L. Griffiths, Nori Jacoby

Based on the results of this comprehensive study, we provide a concise guide for researchers interested in collecting or approximating human similarity data.

Contrastive Learning Information Retrieval +2

Using Natural Language and Program Abstractions to Instill Human Inductive Biases in Machines

1 code implementation 23 May 2022 Sreejan Kumar, Carlos G. Correa, Ishita Dasgupta, Raja Marjieh, Michael Y. Hu, Robert D. Hawkins, Nathaniel D. Daw, Jonathan D. Cohen, Karthik Narasimhan, Thomas L. Griffiths

Co-training on these representations results in more human-like behavior in downstream meta-reinforcement learning agents than less abstract controls (synthetic language descriptions, program induction without learned primitives), suggesting that the abstraction supported by these representations is key.

Meta-Learning Meta Reinforcement Learning +2

Linguistic communication as (inverse) reward design

no code implementations 11 Apr 2022 Theodore R. Sumers, Robert D. Hawkins, Mark K. Ho, Thomas L. Griffiths, Dylan Hadfield-Menell

We then define a pragmatic listener which performs inverse reward design by jointly inferring the speaker's latent horizon and rewards.

Disentangling Abstraction from Statistical Pattern Matching in Human and Machine Learning

1 code implementation 4 Apr 2022 Sreejan Kumar, Ishita Dasgupta, Nathaniel D. Daw, Jonathan D. Cohen, Thomas L. Griffiths

However, because neural networks are hard to interpret, it can be difficult to tell whether agents have learned the underlying abstraction, or alternatively statistical patterns that are characteristic of that abstraction.

BIG-bench Machine Learning Inductive Bias +4

Probing BERT's priors with serial reproduction chains

1 code implementation 24 Feb 2022 Takateru Yamakoshi, Thomas L. Griffiths, Robert D. Hawkins

Sampling is a promising bottom-up method for exposing what generative models have learned about language, but it remains unclear how to generate representative samples from popular masked language models (MLMs) like BERT.

Language Modelling Masked Language Modeling
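
One simple way to draw samples from a masked LM is Gibbs-style resampling of one position at a time, sketched below with the Hugging Face `transformers` fill-mask pipeline. This is only an illustration of the general idea; the paper's chains are grounded in human serial reproduction and its sampling scheme may differ.

```python
import random
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

def gibbs_step(tokens):
    """Mask one random position and resample it from BERT's conditional
    distribution over that position given the rest of the sentence."""
    i = random.randrange(len(tokens))
    masked = tokens.copy()
    masked[i] = unmasker.tokenizer.mask_token  # "[MASK]" for BERT
    predictions = unmasker(" ".join(masked), top_k=20)
    weights = [p["score"] for p in predictions]
    chosen = random.choices(predictions, weights=weights, k=1)[0]
    tokens[i] = chosen["token_str"]  # may be a subword piece; ignored for simplicity
    return tokens

sentence = "the cat sat on the mat".split()
for _ in range(50):
    sentence = gibbs_step(sentence)
print(" ".join(sentence))
```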

Can Humans Do Less-Than-One-Shot Learning?

no code implementations 9 Feb 2022 Maya Malaviya, Ilia Sucholutsky, Kerem Oktar, Thomas L. Griffiths

Being able to learn from small amounts of data is a key characteristic of human intelligence, but exactly how small?

One-Shot Learning

Predicting Human Similarity Judgments Using Large Language Models

no code implementations 9 Feb 2022 Raja Marjieh, Ilia Sucholutsky, Theodore R. Sumers, Nori Jacoby, Thomas L. Griffiths

Similarity judgments provide a well-established method for accessing mental representations, with applications in psychology, neuroscience and machine learning.

A pragmatic account of the weak evidence effect

1 code implementation 7 Dec 2021 Samuel A. Barnett, Thomas L. Griffiths, Robert D. Hawkins

Here, we extend recent probabilistic models of recursive social reasoning to allow for persuasive goals and show that our model provides a pragmatic account for why weakly favorable arguments may backfire, a phenomenon known as the weak evidence effect.

Decision Making

Distinguishing rule- and exemplar-based generalization in learning systems

1 code implementation 8 Oct 2021 Ishita Dasgupta, Erin Grant, Thomas L. Griffiths

Machine learning systems often do not share the same inductive biases as humans and, as a result, extrapolate or generalize in ways that are inconsistent with our expectations.

BIG-bench Machine Learning Data Augmentation +2

Cognitive science as a source of forward and inverse models of human decisions for robotics and control

no code implementations 1 Sep 2021 Mark K. Ho, Thomas L. Griffiths

Those designing autonomous systems that interact with humans will invariably face questions about how humans think and make decisions.

Decision Making

Passive Attention in Artificial Neural Networks Predicts Human Visual Selectivity

1 code implementation NeurIPS 2021 Thomas A. Langlois, H. Charles Zhao, Erin Grant, Ishita Dasgupta, Thomas L. Griffiths, Nori Jacoby

Similarly, we find that recognition performance in the same ANN models was likewise influenced by masking input images using human visual selectivity maps.

Modularity in Reinforcement Learning via Algorithmic Independence in Credit Assignment

no code implementations ICLR Workshop Learning_to_Learn 2021 Michael Chang, Sidhant Kaushik, Sergey Levine, Thomas L. Griffiths

Empirical evidence suggests that such action-value methods are more sample efficient than policy-gradient methods on transfer problems that require only sparse changes to a sequence of previously optimal decisions.

Decision Making Policy Gradient Methods +2

Extending rational models of communication from beliefs to actions

1 code implementation 25 May 2021 Theodore R. Sumers, Robert D. Hawkins, Mark K. Ho, Thomas L. Griffiths

Speakers communicate to influence their partner's beliefs and shape their actions.

Are Convolutional Neural Networks or Transformers more like human vision?

1 code implementation 15 May 2021 Shikhar Tuli, Ishita Dasgupta, Erin Grant, Thomas L. Griffiths

Our focus is on comparing a suite of standard Convolutional Neural Networks (CNNs) and a recently-proposed attention-based network, the Vision Transformer (ViT), which relaxes the translation-invariance constraint of CNNs and therefore represents a model with a weaker set of inductive biases.

BIG-bench Machine Learning Object Recognition

People construct simplified mental representations to plan

no code implementations 14 May 2021 Mark K. Ho, David Abel, Carlos G. Correa, Michael L. Littman, Jonathan D. Cohen, Thomas L. Griffiths

We propose a computational account of this simplification process and, in a series of pre-registered behavioral experiments, show that it is subject to online cognitive control and that people optimally balance the complexity of a task representation and its utility for planning and acting.

Shades of confusion: Lexical uncertainty modulates ad hoc coordination in an interactive communication task

no code implementations 13 May 2021 Sonia K. Murthy, Thomas L. Griffiths, Robert D. Hawkins

There is substantial variability in the expectations that communication partners bring into interactions, creating the potential for misunderstandings.

From partners to populations: A hierarchical Bayesian account of coordination and convention

1 code implementation 12 Apr 2021 Robert D. Hawkins, Michael Franke, Michael C. Frank, Adele E. Goldberg, Kenny Smith, Thomas L. Griffiths, Noah D. Goodman

Languages are powerful solutions to coordination problems: they provide stable, shared expectations about how the words we say correspond to the beliefs and intentions in our heads.

Continual Learning

Evaluating Models of Robust Word Recognition with Serial Reproduction

no code implementations 24 Jan 2021 Stephan C. Meylan, Sathvik Nair, Thomas L. Griffiths

Spoken communication occurs in a "noisy channel" characterized by high levels of environmental noise, variability within and between speakers, and lexical and syntactic ambiguity.

Show or Tell? Demonstration is More Robust to Changes in Shared Perception than Explanation

no code implementations 16 Dec 2020 Theodore R. Sumers, Mark K. Ho, Thomas L. Griffiths

Nonetheless, a teacher and learner may not always experience or attend to the same aspects of the environment.

Competition in Cross-situational Word Learning: A Computational Study

no code implementations 6 Dec 2020 Aida Nematzadeh, Zahra Shekarchi, Thomas L. Griffiths, Suzanne Stevenson

Children learn word meanings by tapping into the commonalities across different situations in which words are used and overcome the high level of uncertainty involved in early word learning experiences.

Meta-Learning of Structured Task Distributions in Humans and Machines

1 code implementation ICLR 2021 Sreejan Kumar, Ishita Dasgupta, Jonathan D. Cohen, Nathaniel D. Daw, Thomas L. Griffiths

We then introduce a novel approach to constructing a "null task distribution" with the same statistical complexity as this structured task distribution but without the explicit rule-based structure used to generate the structured task.

Meta-Learning Meta Reinforcement Learning +2

Understanding Human Intelligence through Human Limitations

no code implementations 29 Sep 2020 Thomas L. Griffiths

Recent progress in artificial intelligence provides the opportunity to ask the question of what is unique about human intelligence, but with a new comparison class.

Resource-rational Task Decomposition to Minimize Planning Costs

no code implementations 27 Jul 2020 Carlos G. Correa, Mark K. Ho, Fred Callaway, Thomas L. Griffiths

That is, rather than planning over a monolithic representation of a task, they decompose the task into simpler subtasks and then plan to accomplish those.

End-to-end Deep Prototype and Exemplar Models for Predicting Human Behavior

no code implementations 17 Jul 2020 Pulkit Singh, Joshua C. Peterson, Ruairidh M. Battleday, Thomas L. Griffiths

The stimulus representations employed in such models are either hand-designed by the experimenter, inferred circuitously from human judgments, or borrowed from pretrained deep neural networks that are themselves competing models of category learning.

Decentralized Reinforcement Learning: Global Decision-Making via Local Economic Transactions

no code implementations 5 Jul 2020 Michael Chang, Sidhant Kaushik, S. Matthew Weinberg, Thomas L. Griffiths, Sergey Levine

This paper seeks to establish a framework for directing a society of simple, specialized, self-interested agents to solve what traditionally are posed as monolithic single-agent sequential decision problems.

Decision Making reinforcement-learning +2

Universal linguistic inductive biases via meta-learning

1 code implementation 29 Jun 2020 R. Thomas McCoy, Erin Grant, Paul Smolensky, Thomas L. Griffiths, Tal Linzen

To facilitate computational modeling aimed at addressing this question, we introduce a framework for giving particular linguistic inductive biases to a neural network model; such a model can then be used to empirically explore the effects of those inductive biases.

Language Acquisition Meta-Learning

Analogy as Nonparametric Bayesian Inference over Relational Systems

no code implementations 7 Jun 2020 Ruairidh M. Battleday, Thomas L. Griffiths

Much of human learning and inference can be framed within the computational problem of relational generalization.

Analogical Similarity Bayesian Inference

Extracting low-dimensional psychological representations from convolutional neural networks

no code implementations 29 May 2020 Aditi Jha, Joshua Peterson, Thomas L. Griffiths

Deep neural networks are increasingly being used in cognitive modeling as a means of deriving representations for complex stimuli such as images.

The Efficiency of Human Cognition Reflects Planned Information Processing

no code implementations 13 Feb 2020 Mark K. Ho, David Abel, Jonathan D. Cohen, Michael L. Littman, Thomas L. Griffiths

Thus, people should plan their actions, but they should also be smart about how they deploy resources used for planning their actions.

Generalizing meanings from partners to populations: Hierarchical inference supports convention formation on networks

1 code implementation 4 Feb 2020 Robert D. Hawkins, Noah D. Goodman, Adele E. Goldberg, Thomas L. Griffiths

A key property of linguistic conventions is that they hold over an entire community of speakers, allowing us to communicate efficiently even with people we have never met before.

Specificity

Scaling up Psychology via Scientific Regret Minimization: A Case Study in Moral Decisions

no code implementations 16 Oct 2019 Mayank Agrawal, Joshua C. Peterson, Thomas L. Griffiths

In this paper, we offer a way to enable researchers to systematically build models and identify novel phenomena in large datasets.

Decision Making valid

On the Utility of Learning about Humans for Human-AI Coordination

2 code implementations NeurIPS 2019 Micah Carroll, Rohin Shah, Mark K. Ho, Thomas L. Griffiths, Sanjit A. Seshia, Pieter Abbeel, Anca Dragan

While we would like agents that can coordinate with humans, current algorithms such as self-play and population-based training create agents that can coordinate with themselves.

Human uncertainty makes classification more robust

no code implementations ICCV 2019 Joshua C. Peterson, Ruairidh M. Battleday, Thomas L. Griffiths, Olga Russakovsky

We then show that, while contemporary classifiers fail to exhibit human-like uncertainty on their own, explicit training on our dataset closes this gap, supports improved generalization to increasingly out-of-training-distribution test datasets, and confers robustness to adversarial attacks.

Classification General Classification

The Computational Structure of Unintentional Meaning

no code implementations 3 Jun 2019 Mark K. Ho, Joanna Korman, Thomas L. Griffiths

Speech-acts can have literal meaning as well as pragmatic meaning, but these both involve consequences typically intended by a speaker.

Cognitive Model Priors for Predicting Human Decisions

no code implementations 22 May 2019 David D. Bourgin, Joshua C. Peterson, Daniel Reichman, Thomas L. Griffiths, Stuart J. Russell

To solve this problem, what is needed are machine learning models with appropriate inductive biases for capturing human behavior, and larger datasets.

Benchmarking BIG-bench Machine Learning +2

Modulating transfer between tasks in gradient-based meta-learning

no code implementations ICLR 2019 Erin Grant, Ghassen Jerfel, Katherine Heller, Thomas L. Griffiths

Learning-to-learn or meta-learning leverages data-driven inductive bias to increase the efficiency of learning on a novel task.

Inductive Bias Meta-Learning

Capturing human categorization of natural images at scale by combining deep networks and cognitive models

no code implementations 26 Apr 2019 Ruairidh M. Battleday, Joshua C. Peterson, Thomas L. Griffiths

Human categorization is one of the most important and successful targets of cognitive modeling in psychology, yet decades of development and assessment of competing models have been contingent on small sets of simple, artificial experimental stimuli.

Predicting human decisions with behavioral theories and machine learning

no code implementations 15 Apr 2019 Ori Plonsky, Reut Apel, Eyal Ert, Moshe Tennenholtz, David Bourgin, Joshua C. Peterson, Daniel Reichman, Thomas L. Griffiths, Stuart J. Russell, Evan C. Carter, James F. Cavanagh, Ido Erev

Here, we introduce BEAST Gradient Boosting (BEAST-GB), a novel hybrid model that synergizes behavioral theories, specifically the model BEAST, with machine learning techniques.

BIG-bench Machine Learning Descriptive

Using Machine Learning to Guide Cognitive Modeling: A Case Study in Moral Reasoning

no code implementations 18 Feb 2019 Mayank Agrawal, Joshua C. Peterson, Thomas L. Griffiths

Large-scale behavioral datasets enable researchers to use complex machine learning algorithms to better predict human behavior, yet this increased predictive power does not always lead to a better understanding of the behavior in question.

BIG-bench Machine Learning Decision Making

Evaluating Theory of Mind in Question Answering

2 code implementations EMNLP 2018 Aida Nematzadeh, Kaylee Burns, Erin Grant, Alison Gopnik, Thomas L. Griffiths

We propose a new dataset for evaluating question answering models with respect to their capacity to reason about beliefs.

Question Answering

Adaptive Sampling for Convex Regression

no code implementations 14 Aug 2018 Max Simchowitz, Kevin Jamieson, Jordan W. Suchow, Thomas L. Griffiths

In this paper, we introduce the first principled adaptive-sampling procedure for learning a convex function in the $L_\infty$ norm, a problem that arises often in the behavioral and social sciences.

regression

Automatically Composing Representation Transformations as a Means for Generalization

1 code implementation ICLR 2019 Michael B. Chang, Abhishek Gupta, Sergey Levine, Thomas L. Griffiths

A generally intelligent learner should generalize to more complex tasks than it has previously encountered, but the two common paradigms in machine learning -- either training a separate learner per task or training a single learner for all tasks -- both have difficulty with such generalization because they do not leverage the compositional structure of the task distribution.

Decision Making

Learning a face space for experiments on human identity

no code implementations 19 May 2018 Jordan W. Suchow, Joshua C. Peterson, Thomas L. Griffiths

Generative models of human identity and appearance have broad applicability to behavioral science and technology, but the exquisite sensitivity of human face perception means that their utility hinges on the alignment of the model's representation to human psychological representations and the photorealism of the generated images.

Learning Hierarchical Visual Representations in Deep Neural Networks Using Hierarchical Linguistic Labels

no code implementations 19 May 2018 Joshua C. Peterson, Paul Soulos, Aida Nematzadeh, Thomas L. Griffiths

Modern convolutional neural networks (CNNs) are able to achieve human-level object classification accuracy on specific tasks, and currently outperform competing models in explaining complex human visual representations.

Capturing human category representations by sampling in deep feature spaces

no code implementations 19 May 2018 Joshua C. Peterson, Jordan W. Suchow, Krisha Aghi, Alexander Y. Ku, Thomas L. Griffiths

Here, we introduce a method to estimate the structure of human categories that combines ideas from cognitive science and machine learning, blending human-based algorithms with state-of-the-art deep image generators.

Generating Plans that Predict Themselves

no code implementations 14 Feb 2018 Jaime F. Fisac, Chang Liu, Jessica B. Hamrick, S. Shankar Sastry, J. Karl Hedrick, Thomas L. Griffiths, Anca D. Dragan

We introduce $t$-predictability: a measure that quantifies the accuracy and confidence with which human observers can predict the remaining robot plan from the overall task goal and the observed initial $t$ actions in the plan.

Learning to select computations

no code implementations 18 Nov 2017 Frederick Callaway, Sayan Gul, Paul M. Krueger, Thomas L. Griffiths, Falk Lieder

The efficient use of limited computational resources is an essential ingredient of intelligence.

Management

Modeling Human Categorization of Natural Images Using Deep Feature Representations

no code implementations 13 Nov 2017 Ruairidh M. Battleday, Joshua C. Peterson, Thomas L. Griffiths

Over the last few decades, psychologists have developed sophisticated formal models of human categorization using simple artificial stimuli.

Pragmatic-Pedagogic Value Alignment

no code implementations 20 Jul 2017 Jaime F. Fisac, Monica A. Gates, Jessica B. Hamrick, Chang Liu, Dylan Hadfield-Menell, Malayandi Palaniappan, Dhruv Malik, S. Shankar Sastry, Thomas L. Griffiths, Anca D. Dragan

In robotics, value alignment is key to the design of collaborative robots that can integrate into human workflows, successfully inferring and adapting to their users' objectives as they go.

Decision Making

Evaluating (and improving) the correspondence between deep neural networks and human representations

1 code implementation 8 Jun 2017 Joshua C. Peterson, Joshua T. Abbott, Thomas L. Griffiths

These networks learn representations of real-world stimuli that can potentially be leveraged to capture psychological representations.

Evaluating vector-space models of analogy

no code implementations 12 May 2017 Dawn Chen, Joshua C. Peterson, Thomas L. Griffiths

Vector-space representations provide geometric tools for reasoning about the similarity of a set of objects and their relationships.

Word Embeddings
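
A toy illustration of the parallelogram method that vector-space models of analogy are usually evaluated with: a : b :: c : ? is answered by the word whose vector is closest to b - a + c. The 2-D vectors below are made up purely for illustration; real evaluations use trained embeddings such as word2vec or GloVe.

```python
import numpy as np

# Hand-made toy embeddings (hypothetical); real models use trained vectors.
vectors = {
    "king":  np.array([0.9, 0.8]),
    "queen": np.array([0.9, 0.2]),
    "man":   np.array([0.1, 0.8]),
    "woman": np.array([0.1, 0.2]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def analogy(a, b, c, vocab):
    """Solve a : b :: c : ? with the parallelogram rule (b - a + c)."""
    target = vocab[b] - vocab[a] + vocab[c]
    candidates = [w for w in vocab if w not in (a, b, c)]
    return max(candidates, key=lambda w: cosine(vocab[w], target))

print(analogy("man", "king", "woman", vectors))  # -> "queen"
```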

A rational analysis of curiosity

no code implementations 11 May 2017 Rachit Dubey, Thomas L. Griffiths

We present a rational analysis of curiosity, proposing that people's curiosity is driven by seeking stimuli that maximize their ability to make appropriate responses in the future.

Evidence for the size principle in semantic and perceptual domains

no code implementations 9 May 2017 Joshua C. Peterson, Thomas L. Griffiths

Shepard's Universal Law of Generalization offered a compelling case for the first physics-like law in cognitive science that should hold for all intelligent agents in the universe.

Word forms - not just their lengths - are optimized for efficient communication

1 code implementation 6 Mar 2017 Stephan C. Meylan, Thomas L. Griffiths

The inverse relationship between the length of a word and the frequency of its use, first identified by G. K. Zipf in 1935, is a classic empirical law that holds across a wide range of human languages.

Adapting Deep Network Features to Capture Psychological Representations

5 code implementations 6 Aug 2016 Joshua C. Peterson, Joshua T. Abbott, Thomas L. Griffiths

To remedy this, we develop a method for adapting deep features to align with human similarity judgments, resulting in image representations that can potentially be used to extend the scope of psychological experiments.

Object Recognition Scene Understanding +1
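
A simplified sketch of one way to adapt deep features to similarity judgments: learn a reweighting of the feature dimensions so that weighted feature products predict judged pairwise similarity. The data below are random placeholders and the objective is only an approximation of this general approach; the authors' exact procedure and datasets differ (see the paper's linked code implementations).

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_stimuli, n_features = 20, 64
F = rng.normal(size=(n_stimuli, n_features))   # deep features (placeholder)
S = rng.uniform(size=(n_stimuli, n_stimuli))   # human similarities (placeholder)
S = (S + S.T) / 2                              # symmetrize the judgment matrix

# Regress pairwise similarity on elementwise feature products; the learned
# coefficients act as a diagonal reweighting of the deep feature dimensions.
pairs = [(i, j) for i in range(n_stimuli) for j in range(i + 1, n_stimuli)]
X = np.array([F[i] * F[j] for i, j in pairs])
y = np.array([S[i, j] for i, j in pairs])

model = Ridge(alpha=1.0).fit(X, y)
print("training R^2:", round(model.score(X, y), 3))
```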

Infant directed speech is consistent with teaching

no code implementations 1 Jun 2016 Baxter S. Eaves Jr., Naomi H. Feldman, Thomas L. Griffiths, Patrick Shafto

We qualitatively compare the simulated teaching data with human IDS, finding that the teaching data exhibit many features of IDS, including some that have been taken as evidence IDS is not for teaching.

Human memory search as a random walk in a semantic network

no code implementations NeurIPS 2012 Joseph L. Austerweil, Joshua T. Abbott, Thomas L. Griffiths

The human mind has a remarkable ability to store a vast amount of information in memory, and an even more remarkable ability to retrieve these experiences when needed.

Information Retrieval Retrieval

An ideal observer model for identifying the reference frame of objects

no code implementations NeurIPS 2011 Joseph L. Austerweil, Abram L. Friesen, Thomas L. Griffiths

The object people perceive in an image can depend on its orientation relative to the scene it is in (its reference frame).

A rational model of causal inference with continuous causes

no code implementations NeurIPS 2011 Thomas L. Griffiths, Michael James

This model successfully predicts human judgments from previous studies better than models of discrete causal inference, and outperforms several other plausible models of causal induction with continuous causes in accounting for people's inferences in a new experiment.

Causal Inference

Learning invariant features using the Transformed Indian Buffet Process

no code implementations NeurIPS 2010 Joseph L. Austerweil, Thomas L. Griffiths

Identifying the features of objects becomes a challenge when those features can change in their appearance.

Nonparametric Latent Feature Models for Link Prediction

no code implementations NeurIPS 2009 Kurt Miller, Michael I. Jordan, Thomas L. Griffiths

As the availability and importance of relational data -- such as the friendships summarized on a social networking website -- increases, it becomes increasingly important to have good models for such data.

Link Prediction

Neural Implementation of Hierarchical Bayesian Inference by Importance Sampling

no code implementations NeurIPS 2009 Lei Shi, Thomas L. Griffiths

Here we propose a simple mechanism for Bayesian inference which involves averaging over a few feature detection neurons which fire at a rate determined by their similarity to a sensory stimulus.

Bayesian Inference
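
For reference, the self-normalized importance-sampling estimator behind this proposal approximates a posterior expectation with a weighted average over a few samples $h_i \sim q$ (generic notation, not the paper's):

$$\mathbb{E}_{p(h \mid d)}[f(h)] \;\approx\; \sum_{i=1}^{n} w_i\, f(h_i), \qquad w_i = \frac{p(d \mid h_i)\, p(h_i) / q(h_i)}{\sum_{j} p(d \mid h_j)\, p(h_j) / q(h_j)}$$

The proposed neural reading identifies each sample with a feature-detection neuron and its weight with that neuron's similarity-driven firing rate.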

Differential Use of Implicit Negative Evidence in Generative and Discriminative Language Learning

no code implementations NeurIPS 2009 Anne Hsu, Thomas L. Griffiths

The former analyses assume that learners seek to identify grammatical sentences in a way that is robust to the distribution from which the sentences are generated, analogous to discriminative approaches in machine learning.

Sentence

A rational model of preference learning and choice prediction by children

no code implementations NeurIPS 2008 Christopher G. Lucas, Thomas L. Griffiths, Fei Xu, Christine Fawcett

Young children demonstrate the ability to make inferences about the preferences of other agents based on their choices.

Modeling human function learning with Gaussian processes

no code implementations NeurIPS 2008 Thomas L. Griffiths, Chris Lucas, Joseph Williams, Michael L. Kalish

Accounts of how people learn functional relationships between continuous variables have tended to focus on two possibilities: that people are estimating explicit functions, or that they are simply performing associative learning supported by similarity.

Gaussian Processes regression
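
A minimal example of the kind of Gaussian process regression used as a rational model of function learning, here with scikit-learn and an RBF (smoothness) kernel; the kernel choices and data are illustrative rather than those used in the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# A handful of noisy observations of an underlying function.
rng = np.random.default_rng(0)
X = np.array([[0.0], [1.0], [2.0], [3.0], [4.0]])
y = np.sin(X).ravel() + 0.1 * rng.normal(size=len(X))

# The RBF kernel encodes a smoothness prior over functions; WhiteKernel
# absorbs observation noise.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(0.01))
gp.fit(X, y)

X_test = np.linspace(0.0, 4.0, 9).reshape(-1, 1)
mean, std = gp.predict(X_test, return_std=True)  # posterior mean and uncertainty
print(np.round(mean, 2))
```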

Analyzing human feature learning as nonparametric Bayesian inference

no code implementations NeurIPS 2008 Thomas L. Griffiths, Joseph L. Austerweil

Almost all successful machine learning algorithms and cognitive models require powerful representations capturing the features that are relevant to a particular problem.

Bayesian Inference BIG-bench Machine Learning

How memory biases affect information transmission: A rational analysis of serial reproduction

no code implementations NeurIPS 2008 Jing Xu, Thomas L. Griffiths

Many human interactions involve pieces of information being passed from one person to another, raising the question of how this process of information transmission is affected by the capacities of the agents involved.

Modeling the effects of memory on human online sentence processing with particle filters

no code implementations NeurIPS 2008 Roger P. Levy, Florencia Reali, Thomas L. Griffiths

Language comprehension in humans is significantly constrained by memory, yet rapid, highly incremental, and capable of utilizing a wide range of contextual information to resolve ambiguity and form expectations about future input.

Sentence
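
To make the "particle filters" in the title concrete, here is a generic bootstrap particle filter: candidate hypotheses (e.g., partial parses) are propagated word by word, reweighted by how well they explain the new input, and resampled so that only a bounded set survives. The `init`, `transition`, and `likelihood` functions are hypothetical stand-ins, not the paper's sentence-processing model.

```python
import random

def particle_filter(observations, init, transition, likelihood, n_particles=100):
    """Bootstrap particle filter: propagate, weight, resample at each step.

    init()             -> an initial hypothesis (particle)
    transition(p)      -> a sampled successor hypothesis
    likelihood(p, obs) -> how well hypothesis p explains observation obs
    """
    particles = [init() for _ in range(n_particles)]
    for obs in observations:
        # Propagate each hypothesis and weight it by the new observation.
        particles = [transition(p) for p in particles]
        weights = [likelihood(p, obs) for p in particles]
        total = sum(weights) or 1e-12
        weights = [w / total for w in weights]
        # Resample: strong hypotheses are duplicated, weak ones are dropped,
        # which keeps memory bounded (and can produce garden-path errors).
        particles = random.choices(particles, weights=weights, k=n_particles)
    return particles
```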

A Probabilistic Approach to Language Change

no code implementations NeurIPS 2007 Alexandre Bouchard-Côté, Percy S. Liang, Dan Klein, Thomas L. Griffiths

We present a probabilistic approach to language change in which word forms are represented by phoneme sequences that undergo stochastic edits along the branches of a phylogenetic tree.
