no code implementations • ICLR 2019 • Ruairidh M. Battleday, Joshua C. Peterson, Thomas L. Griffiths
As deep CNN classifier performance using ground-truth labels has begun to asymptote at near-perfect levels, a key aim for the field is to extend training paradigms to capture further useful structure in natural image data and improve model robustness and generalization.
no code implementations • 31 Oct 2024 • Declan Campbell, Sunayana Rane, Tyler Giallanza, Nicolò De Sabbata, Kia Ghods, Amogh Joshi, Alexander Ku, Steven M. Frankland, Thomas L. Griffiths, Jonathan D. Cohen, Taylor W. Webb
Recent work has documented striking heterogeneity in the performance of state-of-the-art vision language models (VLMs), including both multimodal language models and text-to-image models.
no code implementations • 27 Oct 2024 • Ryan Liu, Jiayi Geng, Addison J. Wu, Ilia Sucholutsky, Tania Lombrozo, Thomas L. Griffiths
In this paper, we seek to identify the characteristics of tasks where CoT reduces performance by drawing inspiration from cognitive psychology, looking at cases where (i) verbal thinking or deliberation hurts performance in humans, and (ii) the constraints governing human performance generalize to language models.
1 code implementation • 26 Oct 2024 • Marcel Binz, Elif Akata, Matthias Bethge, Franziska Brändle, Fred Callaway, Julian Coda-Forno, Peter Dayan, Can Demircan, Maria K. Eckstein, Noémi Éltető, Thomas L. Griffiths, Susanne Haridi, Akshay K. Jagadish, Li Ji-An, Alexander Kipnis, Sreejan Kumar, Tobias Ludwig, Marvin Mathony, Marcelo Mattar, Alireza Modirshanechi, Surabhi S. Nath, Joshua C. Peterson, Milena Rmus, Evan M. Russek, Tankred Saanum, Natalia Scharfenberg, Johannes A. Schubert, Luca M. Schulze Buschoff, Nishad Singhi, Xin Sui, Mirko Thalmann, Fabian Theis, Vuong Truong, Vishaal Udandarao, Konstantinos Voudouris, Robert Wilson, Kristin Witte, Shuchen Wu, Dirk Wulff, Huadong Xiong, Eric Schulz
Establishing a unified theory of cognition has been a major goal of psychology.
1 code implementation • 14 Oct 2024 • Yiming Zuo, Karhan Kayan, Maggie Wang, Kevin Jeon, Jia Deng, Thomas L. Griffiths
Building a foundation model for 3D vision is a complex challenge that remains unsolved.
no code implementations • 7 Oct 2024 • C. Nicolò De Sabbata, Theodore R. Sumers, Thomas L. Griffiths
Being prompted to engage in reasoning has emerged as a core technique for using large language models (LLMs), deploying additional inference-time compute to improve task performance.
no code implementations • 2 Oct 2024 • R. Thomas McCoy, Shunyu Yao, Dan Friedman, Mathew D. Hardy, Thomas L. Griffiths
In "Embers of Autoregression" (McCoy et al., 2023), we showed that several large language models (LLMs) have some important limitations that are attributable to their origins in next-word prediction.
no code implementations • 15 Aug 2024 • Jian-Qiao Zhu, Joshua C. Peterson, Benjamin Enke, Thomas L. Griffiths
Understanding how people behave in strategic settings--where they make decisions based on their expectations about the behavior of others--is a long-standing problem in the behavioral sciences.
1 code implementation • 1 Jul 2024 • Akshara Prabhakar, Thomas L. Griffiths, R. Thomas McCoy
By focusing on a single relatively simple task, we are able to identify three factors that systematically affect CoT performance: the probability of the task's expected output (probability), what the model has implicitly learned during pre-training (memorization), and the number of intermediate operations involved in reasoning (noisy reasoning).
1 code implementation • 24 Jun 2024 • Ryan Liu, Jiayi Geng, Joshua C. Peterson, Ilia Sucholutsky, Thomas L. Griffiths
However, by comparing LLM behavior and predictions to a large dataset of human decisions, we find that this is actually not the case: when both simulating and predicting people's choices, a suite of cutting-edge LLMs (GPT-4o & 4-Turbo, Llama-3-8B & 70B, Claude 3 Opus) assume that people are more rational than we really are.
no code implementations • 6 Jun 2024 • Liyi Zhang, Michael Y. Li, Thomas L. Griffiths
The embeddings from large language models have been shown to capture aspects of the syntax and semantics of language.
no code implementations • 6 Jun 2024 • Ilia Sucholutsky, Katherine M. Collins, Maya Malaviya, Nori Jacoby, Weiyang Liu, Theodore R. Sumers, Michalis Korakakis, Umang Bhatt, Mark Ho, Joshua B. Tenenbaum, Brad Love, Zachary A. Pardos, Adrian Weller, Thomas L. Griffiths
A good teacher should not only be knowledgeable, but should also be able to communicate in a way that the student understands -- to share the student's representation of the world.
1 code implementation • 4 Jun 2024 • Liyi Zhang, Logan Nelson, Thomas L. Griffiths
We examine the benefits of prototype-based representations in a less-studied domain: semi-supervised learning, where agents must form unsupervised representations of stimuli before receiving category labels.
no code implementations • 4 Jun 2024 • Jian-Qiao Zhu, Thomas L. Griffiths
We validated our method in settings where iterated learning has previously been used to estimate the priors of human participants -- causal learning, proportion estimation, and predicting everyday quantities.
no code implementations • 29 May 2024 • Raja Marjieh, Sreejan Kumar, Declan Campbell, Liyi Zhang, Gianluca Bencomo, Jake Snell, Thomas L. Griffiths
Humans rely on strong inductive biases to learn from few examples and abstract useful information from sensory data.
no code implementations • 29 May 2024 • Jian-Qiao Zhu, Haijiang Yan, Thomas L. Griffiths
The observed similarities in the behavior of humans and Large Language Models (LLMs) have prompted researchers to consider the potential of using LLMs as models of human cognition.
1 code implementation • 19 Mar 2024 • Xudong Guo, Kaixuan Huang, Jiale Liu, Wenhui Fan, Natalia Vélez, Qingyun Wu, Huazheng Wang, Thomas L. Griffiths, Mengdi Wang
Large Language Models (LLMs) have emerged as integral tools for reasoning, planning, and decision-making, drawing upon their extensive world knowledge and proficiency in language-related tasks.
no code implementations • 28 Feb 2024 • Andi Peng, Ilia Sucholutsky, Belinda Z. Li, Theodore R. Sumers, Thomas L. Griffiths, Jacob Andreas, Julie A. Shah
We describe a framework for using natural language to design state abstractions for imitation learning.
no code implementations • 26 Feb 2024 • Carlos G. Correa, Thomas L. Griffiths, Nathaniel D. Daw
Typical models of learning assume incremental estimation of continuously-varying decision variables like expected rewards.
no code implementations • 15 Feb 2024 • Allison Chen, Ilia Sucholutsky, Olga Russakovsky, Thomas L. Griffiths
Does language help make sense of the visual world?
no code implementations • 11 Feb 2024 • Ryan Liu, Theodore R. Sumers, Ishita Dasgupta, Thomas L. Griffiths
In day-to-day communication, people often approximate the truth - for example, rounding the time or omitting details - in order to be maximally helpful to the listener.
no code implementations • 10 Feb 2024 • Raja Marjieh, Pol van Rijn, Ilia Sucholutsky, Harin Lee, Thomas L. Griffiths, Nori Jacoby
Here we provide a formal account of this phenomenon, by recasting it as a statistical inference whereby a rational agent attempts to decide whether a sequence of utterances is more likely to have been produced in a song or speech.
no code implementations • 10 Feb 2024 • Ioana Marinescu, R. Thomas McCoy, Thomas L. Griffiths
Is it possible to create a neural network that displays the same inductive biases?
1 code implementation • 6 Feb 2024 • Xuechunzi Bai, Angelina Wang, Ilia Sucholutsky, Thomas L. Griffiths
Large language models (LLMs) can pass explicit social bias tests but still harbor implicit biases, similar to humans who endorse egalitarian beliefs yet exhibit subtle biases.
no code implementations • 6 Feb 2024 • Sreejan Kumar, Raja Marjieh, Byron Zhang, Declan Campbell, Michael Y. Hu, Umang Bhatt, Brenden Lake, Thomas L. Griffiths
To investigate the effect of language on the formation of abstractions, we implement a novel multimodal serial reproduction framework by asking people who receive a visual stimulus to reproduce it in a linguistic format, and vice versa.
no code implementations • 6 Feb 2024 • Declan Campbell, Sreejan Kumar, Tyler Giallanza, Thomas L. Griffiths, Jonathan D. Cohen
Humans possess a remarkable capacity to recognize and manipulate abstract structure, which is especially apparent in the domain of geometry.
no code implementations • 5 Feb 2024 • Andi Peng, Andreea Bobu, Belinda Z. Li, Theodore R. Sumers, Ilia Sucholutsky, Nishanth Kumar, Thomas L. Griffiths, Julie A. Shah
We observe that how humans behave reveals how they see the world.
no code implementations • 30 Jan 2024 • Jian-Qiao Zhu, Thomas L. Griffiths
Autoregressive Large Language Models (LLMs) trained for next-word prediction have demonstrated remarkable proficiency at producing coherent text.
no code implementations • 30 Jan 2024 • Jian-Qiao Zhu, Haijiang Yan, Thomas L. Griffiths
Simulating sampling algorithms with people has proven a useful method for efficiently probing and understanding their mental representations.
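The sampling algorithms referred to above are typically Markov chain Monte Carlo methods. As an illustrative sketch only (the function and parameter names below are assumptions, not taken from the paper), a minimal Metropolis sampler targeting a standard normal looks like:

```python
import math
import random

def metropolis(log_p, x0, proposal_sd, n_steps, seed=0):
    """Minimal Metropolis sampler: propose a Gaussian step, accept it with
    probability min(1, p(x')/p(x)), computed in log space for stability."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        xp = x + rng.gauss(0.0, proposal_sd)
        # Accept if the proposal is more probable, or with matching odds otherwise.
        if math.log(rng.random()) < log_p(xp) - log_p(x):
            x = xp
        samples.append(x)
    return samples

# Target: standard normal (log density up to a constant). The chain
# forgets its start at x0 = 3 and its mean settles near 0.
samples = metropolis(lambda v: -0.5 * v * v, x0=3.0, proposal_sd=1.0, n_steps=5000)
```

In the "sampling with people" paradigm, the accept/reject decision is made by a human participant rather than computed from an explicit density.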
no code implementations • 9 Jan 2024 • Sunayana Rane, Polyphony J. Bruna, Ilia Sucholutsky, Christopher Kello, Thomas L. Griffiths
Discussion of AI alignment (alignment between humans and AI systems) has focused on value alignment, broadly referring to creating AI systems that share human values.
no code implementations • 21 Dec 2023 • Andrea Wynn, Ilia Sucholutsky, Thomas L. Griffiths
Using a set of textual action descriptions, we collect value judgments from humans, as well as similarity judgments from both humans and multiple language models, and demonstrate that representational alignment enables both safe exploration and improved generalization when learning human values.
no code implementations • 21 Dec 2023 • Liyi Zhang, R. Thomas McCoy, Theodore R. Sumers, Jian-Qiao Zhu, Thomas L. Griffiths
Large language models (LLMs) can produce long, coherent passages of text, suggesting that LLMs, although trained on next-word prediction, must represent the latent structure that characterizes a document.
2 code implementations • 13 Dec 2023 • Qihong Lu, Tan T. Nguyen, Qiong Zhang, Uri Hasson, Thomas L. Griffiths, Jeffrey M. Zacks, Samuel J. Gershman, Kenneth A. Norman
Through learning, it naturally stores structure that is shared across tasks in the network weights.
2 code implementations • 30 Nov 2023 • Carlos G. Correa, Sophia Sanborn, Mark K. Ho, Frederick Callaway, Nathaniel D. Daw, Thomas L. Griffiths
Human behavior is often assumed to be hierarchically structured, made up of abstract actions that can be decomposed into concrete actions.
1 code implementation • 24 Nov 2023 • Jake C. Snell, Gianluca Bencomo, Thomas L. Griffiths
Most applications of machine learning to classification assume a closed set of balanced classes.
1 code implementation • 17 Nov 2023 • Gianluca M. Bencomo, Jake C. Snell, Thomas L. Griffiths
Bayesian filtering approximates the true underlying behavior of a time-varying system by inverting an explicit generative model to convert noisy measurements into state estimates.
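The Bayesian filtering setup described above can be illustrated (this is a generic sketch, not the paper's method; the parameter names are assumptions) with a one-dimensional Kalman filter, which inverts a linear-Gaussian generative model to turn noisy measurements into state estimates:

```python
def kalman_1d(measurements, process_var=1e-2, meas_var=0.5,
              init_mean=0.0, init_var=1.0):
    """Minimal 1-D Kalman filter for a random-walk state observed in noise."""
    mean, var = init_mean, init_var
    estimates = []
    for z in measurements:
        var += process_var             # predict: uncertainty grows with the process noise
        gain = var / (var + meas_var)  # Kalman gain: trust in the new measurement
        mean += gain * (z - mean)      # update the estimate toward the measurement
        var *= 1.0 - gain              # posterior uncertainty shrinks
        estimates.append(mean)
    return estimates

# Track a constant signal: the estimate converges toward 1.0.
est = kalman_1d([1.0] * 50)
```

The predict/update loop is the "inversion" of the generative model: the prediction step runs the model forward, and the update step conditions on the observation.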
no code implementations • 16 Nov 2023 • Thomas L. Griffiths, Jian-Qiao Zhu, Erin Grant, R. Thomas McCoy
The success of methods based on artificial neural networks in creating intelligent machines seems like it might pose a challenge to explanations of human cognition in terms of Bayesian inference.
1 code implementation • 16 Nov 2023 • Yufei Tian, Abhilasha Ravichander, Lianhui Qin, Ronan Le Bras, Raja Marjieh, Nanyun Peng, Yejin Choi, Thomas L. Griffiths, Faeze Brahman
We explore the creative problem-solving capabilities of modern LLMs in a novel constrained setting.
1 code implementation • 1 Nov 2023 • Ryan Liu, Howard Yen, Raja Marjieh, Thomas L. Griffiths, Ranjay Krishna
How do we communicate with others to achieve our goals?
no code implementations • 30 Oct 2023 • Sunayana Rane, Mark Ho, Ilia Sucholutsky, Thomas L. Griffiths
Value alignment is essential for building AI systems that can safely and reliably interact with people.
1 code implementation • 18 Oct 2023 • Ilia Sucholutsky, Lukas Muttenthaler, Adrian Weller, Andi Peng, Andreea Bobu, Been Kim, Bradley C. Love, Christopher J. Cueva, Erin Grant, Iris Groen, Jascha Achterberg, Joshua B. Tenenbaum, Katherine M. Collins, Katherine L. Hermann, Kerem Oktar, Klaus Greff, Martin N. Hebart, Nathan Cloos, Nikolaus Kriegeskorte, Nori Jacoby, Qiuyi Zhang, Raja Marjieh, Robert Geirhos, Sherol Chen, Simon Kornblith, Sunayana Rane, Talia Konkle, Thomas P. O'Connell, Thomas Unterthiner, Andrew K. Lampinen, Klaus-Robert Müller, Mariya Toneva, Thomas L. Griffiths
These questions pertaining to the study of representational alignment are at the heart of some of the most promising research areas in contemporary cognitive science, neuroscience, and machine learning.
no code implementations • 3 Oct 2023 • Kerem Oktar, Ilia Sucholutsky, Tania Lombrozo, Thomas L. Griffiths
The increasing prevalence of artificial agents creates a correspondingly increasing need to manage disagreements between humans and artificial agents, as well as between artificial agents themselves.
no code implementations • 3 Oct 2023 • Ruiqi He, Carlos G. Correa, Thomas L. Griffiths, Mark K. Ho
How are people able to plan so efficiently despite limited cognitive resources?
no code implementations • 29 Sep 2023 • Declan Campbell, Sreejan Kumar, Tyler Giallanza, Jonathan D. Cohen, Thomas L. Griffiths
Uniquely among primates, humans possess a remarkable capacity to recognize and manipulate abstract structure in the service of task goals across a broad range of behaviors.
no code implementations • 24 Sep 2023 • R. Thomas McCoy, Shunyu Yao, Dan Friedman, Matthew Hardy, Thomas L. Griffiths
This approach - which we call the teleological approach - leads us to identify three factors that we hypothesize will influence LLM accuracy: the probability of the task to be performed, the probability of the target output, and the probability of the provided input.
2 code implementations • 5 Sep 2023 • Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents.
no code implementations • 14 Jun 2023 • Raja Marjieh, Nori Jacoby, Joshua C. Peterson, Thomas L. Griffiths
Shepard's universal law of generalization is a remarkable hypothesis about how intelligent organisms should perceive similarity.
1 code implementation • 24 May 2023 • R. Thomas McCoy, Thomas L. Griffiths
We show that learning from limited naturalistic data is possible with an approach that combines the strong inductive biases of a Bayesian model with the flexible representations of a neural network.
5 code implementations • NeurIPS 2023 • Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, Karthik Narasimhan
Language models are increasingly being deployed for general problem solving across a wide range of tasks, but are still confined to token-level, left-to-right decision-making processes during inference.
Ranked #3 on Arithmetic Reasoning on Game of 24
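The entry above contrasts token-level, left-to-right decoding with deliberate search over intermediate "thoughts". As a rough illustration (not the paper's implementation; the names and the toy task are assumptions), a breadth-first search over partial solutions with a scoring heuristic and beam-style pruning can be sketched as:

```python
from typing import Callable, List

def tree_search(root: str,
                expand: Callable[[str], List[str]],
                score: Callable[[str], float],
                depth: int,
                beam_width: int) -> str:
    """Search over partial solutions ('thoughts'): expand every frontier
    candidate, score the results, and keep only the top few at each depth."""
    frontier = [root]
    for _ in range(depth):
        candidates = [c for t in frontier for c in expand(t)]
        candidates.sort(key=score, reverse=True)
        frontier = candidates[:beam_width]
    return max(frontier, key=score)

# Toy stand-in task: build a 3-digit string whose digits sum to exactly 15.
expand = lambda t: [t + d for d in "0123456789"]
score = lambda t: -abs(15 - sum(int(c) for c in t))
best = tree_search("", expand, score, depth=3, beam_width=5)
```

In the paper, both `expand` (generating candidate thoughts) and `score` (evaluating them) would be backed by a language model; the digit-sum task here merely makes the search skeleton concrete.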
no code implementations • 20 Mar 2023 • Michael Chang, Alyssa L. Dayan, Franziska Meier, Thomas L. Griffiths, Sergey Levine, Amy Zhang
Object rearrangement is a challenge for embodied agents because solving these tasks requires generalizing across a combinatorially large set of configurations of entities and their locations.
no code implementations • 13 Mar 2023 • Minkyu Shin, Jin Kim, Bas van Opheusden, Thomas L. Griffiths
We then examine human players' strategies across time and find that novel decisions (i.e., previously unobserved moves) occurred more frequently and became associated with higher decision quality after the advent of superhuman AI.
no code implementations • 2 Feb 2023 • Raja Marjieh, Ilia Sucholutsky, Pol van Rijn, Nori Jacoby, Thomas L. Griffiths
Determining the extent to which the perceptual world can be recovered from language is a longstanding problem in philosophy and cognitive science.
no code implementations • 7 Nov 2022 • Carlos G. Correa, Mark K. Ho, Frederick Callaway, Nathaniel D. Daw, Thomas L. Griffiths
Human behavior emerges from planning over elaborate decompositions of tasks into goals, subgoals, and low-level actions.
no code implementations • 2 Nov 2022 • Ilia Sucholutsky, Ruairidh M. Battleday, Katherine M. Collins, Raja Marjieh, Joshua C. Peterson, Pulkit Singh, Umang Bhatt, Nori Jacoby, Adrian Weller, Thomas L. Griffiths
Supervised learning typically focuses on learning transferable representations from training examples annotated by humans.
no code implementations • 29 Sep 2022 • Raja Marjieh, Ilia Sucholutsky, Thomas A. Langlois, Nori Jacoby, Thomas L. Griffiths
Diffusion models are a class of generative models that learn to synthesize samples by inverting a diffusion process that gradually maps data into noise.
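The forward process mentioned above can be made concrete with a minimal sketch (illustrative only; the `alphas` schedule and function names are assumptions, not taken from the paper):

```python
import math
import random

def forward_diffusion(x0, alphas, rng=None):
    """Forward diffusion on a scalar: at each step, shrink the signal by
    sqrt(alpha) and mix in fresh Gaussian noise, so the sample gradually
    drifts toward a standard normal."""
    rng = rng or random.Random(0)
    x, trajectory = x0, [x0]
    for a in alphas:
        x = math.sqrt(a) * x + math.sqrt(1.0 - a) * rng.gauss(0.0, 1.0)
        trajectory.append(x)
    return trajectory

# 100 noising steps starting from the data point x0 = 2.0.
traj = forward_diffusion(2.0, [0.98] * 100)
```

A diffusion model learns the reverse of this map, denoising step by step to synthesize samples; with `alpha = 1.0` no noise is added and the signal passes through unchanged.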
no code implementations • 15 Aug 2022 • Mathew D. Hardy, Bill D. Thompson, P. M. Krafft, Thomas L. Griffiths
Large-scale social networks are thought to contribute to polarization by amplifying people's biases.
no code implementations • 11 Aug 2022 • Michael Y. Li, Erin Grant, Thomas L. Griffiths
Not being able to understand and predict the behavior of deep learning systems makes it hard to decide what architecture and algorithm to use for a given problem.
no code implementations • 7 Jul 2022 • Sunayana Rane, Mira L. Nencheva, Zeyu Wang, Casey Lew-Williams, Olga Russakovsky, Thomas L. Griffiths
The performance of computer vision systems is correlated with human judgments of the concreteness of words, which are in turn a predictor of children's word learning, suggesting that these models are capturing the relationship between words and visual phenomena.
1 code implementation • 2 Jul 2022 • Michael Chang, Thomas L. Griffiths, Sergey Levine
Iterative refinement -- start with a random guess, then iteratively improve the guess -- is a useful paradigm for representation learning because it offers a way to break symmetries among equally plausible explanations for the data.
no code implementations • 8 Jun 2022 • Raja Marjieh, Pol van Rijn, Ilia Sucholutsky, Theodore R. Sumers, Harin Lee, Thomas L. Griffiths, Nori Jacoby
Based on the results of this comprehensive study, we provide a concise guide for researchers interested in collecting or approximating human similarity data.
1 code implementation • 23 May 2022 • Sreejan Kumar, Carlos G. Correa, Ishita Dasgupta, Raja Marjieh, Michael Y. Hu, Robert D. Hawkins, Nathaniel D. Daw, Jonathan D. Cohen, Karthik Narasimhan, Thomas L. Griffiths
Co-training on these representations results in more human-like behavior in downstream meta-reinforcement learning agents than less abstract controls (synthetic language descriptions, program induction without learned primitives), suggesting that the abstraction supported by these representations is key.
no code implementations • 11 Apr 2022 • Theodore R. Sumers, Robert D. Hawkins, Mark K. Ho, Thomas L. Griffiths, Dylan Hadfield-Menell
We then define a pragmatic listener which performs inverse reward design by jointly inferring the speaker's latent horizon and rewards.
1 code implementation • 4 Apr 2022 • Sreejan Kumar, Ishita Dasgupta, Nathaniel D. Daw, Jonathan D. Cohen, Thomas L. Griffiths
However, because neural networks are hard to interpret, it can be difficult to tell whether agents have learned the underlying abstraction, or alternatively statistical patterns that are characteristic of that abstraction.
1 code implementation • 24 Feb 2022 • Takateru Yamakoshi, Thomas L. Griffiths, Robert D. Hawkins
Sampling is a promising bottom-up method for exposing what generative models have learned about language, but it remains unclear how to generate representative samples from popular masked language models (MLMs) like BERT.
no code implementations • 9 Feb 2022 • Maya Malaviya, Ilia Sucholutsky, Kerem Oktar, Thomas L. Griffiths
Being able to learn from small amounts of data is a key characteristic of human intelligence, but exactly how small?
no code implementations • 9 Feb 2022 • Raja Marjieh, Ilia Sucholutsky, Theodore R. Sumers, Nori Jacoby, Thomas L. Griffiths
Similarity judgments provide a well-established method for accessing mental representations, with applications in psychology, neuroscience and machine learning.
1 code implementation • 7 Dec 2021 • Samuel A. Barnett, Thomas L. Griffiths, Robert D. Hawkins
Here, we extend recent probabilistic models of recursive social reasoning to allow for persuasive goals and show that our model provides a pragmatic account for why weakly favorable arguments may backfire, a phenomenon known as the weak evidence effect.
no code implementations • NeurIPS Workshop SVRHM 2021 • Karan Grewal, Joshua Peterson, Bill D Thompson, Thomas L. Griffiths
Human semantic representations are both difficult to capture and hard to fully interpret.
1 code implementation • 8 Oct 2021 • Ishita Dasgupta, Erin Grant, Thomas L. Griffiths
Machine learning systems often do not share the same inductive biases as humans and, as a result, extrapolate or generalize in ways that are inconsistent with our expectations.
no code implementations • 1 Sep 2021 • Mark K. Ho, Thomas L. Griffiths
Those designing autonomous systems that interact with humans will invariably face questions about how humans think and make decisions.
1 code implementation • NeurIPS 2021 • Thomas A. Langlois, H. Charles Zhao, Erin Grant, Ishita Dasgupta, Thomas L. Griffiths, Nori Jacoby
Similarly, we find that recognition performance in the same ANN models was likewise influenced by masking input images using human visual selectivity maps.
no code implementations • ICLR Workshop Learning_to_Learn 2021 • Michael Chang, Sidhant Kaushik, Sergey Levine, Thomas L. Griffiths
Empirical evidence suggests that such action-value methods are more sample efficient than policy-gradient methods on transfer problems that require only sparse changes to a sequence of previously optimal decisions.
1 code implementation • 25 May 2021 • Theodore R. Sumers, Robert D. Hawkins, Mark K. Ho, Thomas L. Griffiths
Speakers communicate to influence their partner's beliefs and shape their actions.
1 code implementation • 15 May 2021 • Shikhar Tuli, Ishita Dasgupta, Erin Grant, Thomas L. Griffiths
Our focus is on comparing a suite of standard Convolutional Neural Networks (CNNs) and a recently-proposed attention-based network, the Vision Transformer (ViT), which relaxes the translation-invariance constraint of CNNs and therefore represents a model with a weaker set of inductive biases.
no code implementations • 14 May 2021 • Mark K. Ho, David Abel, Carlos G. Correa, Michael L. Littman, Jonathan D. Cohen, Thomas L. Griffiths
We propose a computational account of this simplification process and, in a series of pre-registered behavioral experiments, show that it is subject to online cognitive control and that people optimally balance the complexity of a task representation and its utility for planning and acting.
no code implementations • 13 May 2021 • Sonia K. Murthy, Thomas L. Griffiths, Robert D. Hawkins
There is substantial variability in the expectations that communication partners bring into interactions, creating the potential for misunderstandings.
1 code implementation • 12 Apr 2021 • Robert D. Hawkins, Michael Franke, Michael C. Frank, Adele E. Goldberg, Kenny Smith, Thomas L. Griffiths, Noah D. Goodman
Languages are powerful solutions to coordination problems: they provide stable, shared expectations about how the words we say correspond to the beliefs and intentions in our heads.
no code implementations • 24 Jan 2021 • Stephan C. Meylan, Sathvik Nair, Thomas L. Griffiths
Spoken communication occurs in a "noisy channel" characterized by high levels of environmental noise, variability within and between speakers, and lexical and syntactic ambiguity.
no code implementations • 16 Dec 2020 • Theodore R. Sumers, Mark K. Ho, Thomas L. Griffiths
Nonetheless, a teacher and learner may not always experience or attend to the same aspects of the environment.
no code implementations • 6 Dec 2020 • Aida Nematzadeh, Zahra Shekarchi, Thomas L. Griffiths, Suzanne Stevenson
Children learn word meanings by tapping into the commonalities across the different situations in which words are used, overcoming the high level of uncertainty involved in early word learning experiences.
1 code implementation • ICLR 2021 • Sreejan Kumar, Ishita Dasgupta, Jonathan D. Cohen, Nathaniel D. Daw, Thomas L. Griffiths
We then introduce a novel approach to constructing a "null task distribution" with the same statistical complexity as this structured task distribution but without the explicit rule-based structure used to generate the structured task.
1 code implementation • EMNLP 2020 • Robert D. Hawkins, Takateru Yamakoshi, Thomas L. Griffiths, Adele E. Goldberg
Languages typically provide more than one grammatical construction to express certain types of messages.
1 code implementation • 30 Sep 2020 • Theodore R. Sumers, Mark K. Ho, Robert D. Hawkins, Karthik Narasimhan, Thomas L. Griffiths
The sentiment models outperform the inference network, with the "pragmatic" model approaching human performance.
Aspect-Based Sentiment Analysis (ABSA) +3
no code implementations • 29 Sep 2020 • Thomas L. Griffiths
Recent progress in artificial intelligence provides the opportunity to ask the question of what is unique about human intelligence, but with a new comparison class.
no code implementations • 27 Jul 2020 • Carlos G. Correa, Mark K. Ho, Fred Callaway, Thomas L. Griffiths
That is, rather than planning over a monolithic representation of a task, they decompose the task into simpler subtasks and then plan to accomplish those.
no code implementations • 17 Jul 2020 • Pulkit Singh, Joshua C. Peterson, Ruairidh M. Battleday, Thomas L. Griffiths
The stimulus representations employed in such models are either hand-designed by the experimenter, inferred circuitously from human judgments, or borrowed from pretrained deep neural networks that are themselves competing models of category learning.
no code implementations • 5 Jul 2020 • Michael Chang, Sidhant Kaushik, S. Matthew Weinberg, Thomas L. Griffiths, Sergey Levine
This paper seeks to establish a framework for directing a society of simple, specialized, self-interested agents to solve what traditionally are posed as monolithic single-agent sequential decision problems.
1 code implementation • 29 Jun 2020 • R. Thomas McCoy, Erin Grant, Paul Smolensky, Thomas L. Griffiths, Tal Linzen
To facilitate computational modeling aimed at addressing this question, we introduce a framework for giving particular linguistic inductive biases to a neural network model; such a model can then be used to empirically explore the effects of those inductive biases.
no code implementations • 7 Jun 2020 • Ruairidh M. Battleday, Thomas L. Griffiths
Much of human learning and inference can be framed within the computational problem of relational generalization.
no code implementations • 29 May 2020 • Aditi Jha, Joshua Peterson, Thomas L. Griffiths
Deep neural networks are increasingly being used in cognitive modeling as a means of deriving representations for complex stimuli such as images.
no code implementations • 13 Feb 2020 • Mark K. Ho, David Abel, Jonathan D. Cohen, Michael L. Littman, Thomas L. Griffiths
Thus, people should plan their actions, but they should also be smart about how they deploy resources used for planning their actions.
1 code implementation • 4 Feb 2020 • Robert D. Hawkins, Noah D. Goodman, Adele E. Goldberg, Thomas L. Griffiths
A key property of linguistic conventions is that they hold over an entire community of speakers, allowing us to communicate efficiently even with people we have never met before.
no code implementations • 16 Oct 2019 • Mayank Agrawal, Joshua C. Peterson, Thomas L. Griffiths
In this paper, we offer a way to enable researchers to systematically build models and identify novel phenomena in large datasets.
2 code implementations • NeurIPS 2019 • Micah Carroll, Rohin Shah, Mark K. Ho, Thomas L. Griffiths, Sanjit A. Seshia, Pieter Abbeel, Anca Dragan
While we would like agents that can coordinate with humans, current algorithms such as self-play and population-based training create agents that can coordinate with themselves.
no code implementations • ICCV 2019 • Joshua C. Peterson, Ruairidh M. Battleday, Thomas L. Griffiths, Olga Russakovsky
We then show that, while contemporary classifiers fail to exhibit human-like uncertainty on their own, explicit training on our dataset closes this gap, supports improved generalization to increasingly out-of-training-distribution test datasets, and confers robustness to adversarial attacks.
no code implementations • 3 Jun 2019 • Mark K. Ho, Joanna Korman, Thomas L. Griffiths
Speech acts can have literal meaning as well as pragmatic meaning, but both involve consequences typically intended by a speaker.
no code implementations • 22 May 2019 • David D. Bourgin, Joshua C. Peterson, Daniel Reichman, Thomas L. Griffiths, Stuart J. Russell
To solve this problem, what is needed are machine learning models with appropriate inductive biases for capturing human behavior, and larger datasets.
no code implementations • ICLR 2019 • Erin Grant, Ghassen Jerfel, Katherine Heller, Thomas L. Griffiths
Learning-to-learn or meta-learning leverages data-driven inductive bias to increase the efficiency of learning on a novel task.
no code implementations • 26 Apr 2019 • Ruairidh M. Battleday, Joshua C. Peterson, Thomas L. Griffiths
Human categorization is one of the most important and successful targets of cognitive modeling in psychology, yet decades of development and assessment of competing models have been contingent on small sets of simple, artificial experimental stimuli.
no code implementations • 15 Apr 2019 • Ori Plonsky, Reut Apel, Eyal Ert, Moshe Tennenholtz, David Bourgin, Joshua C. Peterson, Daniel Reichman, Thomas L. Griffiths, Stuart J. Russell, Evan C. Carter, James F. Cavanagh, Ido Erev
Here, we introduce BEAST Gradient Boosting (BEAST-GB), a novel hybrid model that synergizes behavioral theories, specifically the model BEAST, with machine learning techniques.
no code implementations • 18 Feb 2019 • Mayank Agrawal, Joshua C. Peterson, Thomas L. Griffiths
Large-scale behavioral datasets enable researchers to use complex machine learning algorithms to better predict human behavior, yet this increased predictive power does not always lead to a better understanding of the behavior in question.
no code implementations • NeurIPS 2019 • Ghassen Jerfel, Erin Grant, Thomas L. Griffiths, Katherine Heller
Learning-to-learn or meta-learning leverages data-driven inductive bias to increase the efficiency of learning on a novel task.
3 code implementations • EMNLP 2018 • Aida Nematzadeh, Kaylee Burns, Erin Grant, Alison Gopnik, Thomas L. Griffiths
We propose a new dataset for evaluating question answering models with respect to their capacity to reason about beliefs.
no code implementations • 14 Aug 2018 • Max Simchowitz, Kevin Jamieson, Jordan W. Suchow, Thomas L. Griffiths
In this paper, we introduce the first principled adaptive-sampling procedure for learning a convex function in the L∞ norm, a problem that arises often in the behavioral and social sciences.
no code implementations • 8 Aug 2018 • Pengfei Liu, Ji Zhang, Cane Wing-Ki Leung, Chao He, Thomas L. Griffiths
Effective representation of a text is critical for various natural language processing tasks.
no code implementations • 18 Jul 2018 • Sophia Sanborn, David D. Bourgin, Michael Chang, Thomas L. Griffiths
The importance of hierarchically structured representations for tractable planning has long been acknowledged.
1 code implementation • ICLR 2019 • Michael B. Chang, Abhishek Gupta, Sergey Levine, Thomas L. Griffiths
A generally intelligent learner should generalize to more complex tasks than it has previously encountered, but the two common paradigms in machine learning -- either training a separate learner per task or training a single learner for all tasks -- both have difficulty with such generalization because they do not leverage the compositional structure of the task distribution.
no code implementations • 19 May 2018 • Jordan W. Suchow, Joshua C. Peterson, Thomas L. Griffiths
Generative models of human identity and appearance have broad applicability to behavioral science and technology, but the exquisite sensitivity of human face perception means that their utility hinges on the alignment of the model's representation to human psychological representations and the photorealism of the generated images.
no code implementations • 19 May 2018 • Joshua C. Peterson, Paul Soulos, Aida Nematzadeh, Thomas L. Griffiths
Modern convolutional neural networks (CNNs) are able to achieve human-level object classification accuracy on specific tasks, and currently outperform competing models in explaining complex human visual representations.
no code implementations • 19 May 2018 • Joshua C. Peterson, Jordan W. Suchow, Krisha Aghi, Alexander Y. Ku, Thomas L. Griffiths
Here, we introduce a method to estimate the structure of human categories that combines ideas from cognitive science and machine learning, blending human-based algorithms with state-of-the-art deep image generators.
1 code implementation • ICML 2018 • Rachit Dubey, Pulkit Agrawal, Deepak Pathak, Thomas L. Griffiths, Alexei A. Efros
What makes humans so good at solving seemingly complex video games?
no code implementations • 14 Feb 2018 • Jaime F. Fisac, Chang Liu, Jessica B. Hamrick, S. Shankar Sastry, J. Karl Hedrick, Thomas L. Griffiths, Anca D. Dragan
We introduce $t$-predictability: a measure that quantifies the accuracy and confidence with which human observers can predict the remaining robot plan from the overall task goal and the observed initial $t$ actions in the plan.
no code implementations • 6 Feb 2018 • Chang Liu, Jessica B. Hamrick, Jaime F. Fisac, Anca D. Dragan, J. Karl Hedrick, S. Shankar Sastry, Thomas L. Griffiths
The study of human-robot interaction is fundamental to the design and use of robotics in real-world applications.
no code implementations • 18 Nov 2017 • Frederick Callaway, Sayan Gul, Paul M. Krueger, Thomas L. Griffiths, Falk Lieder
The efficient use of limited computational resources is an essential ingredient of intelligence.
no code implementations • 13 Nov 2017 • Ruairidh M. Battleday, Joshua C. Peterson, Thomas L. Griffiths
Over the last few decades, psychologists have developed sophisticated formal models of human categorization using simple artificial stimuli.
no code implementations • 20 Jul 2017 • Jaime F. Fisac, Monica A. Gates, Jessica B. Hamrick, Chang Liu, Dylan Hadfield-Menell, Malayandi Palaniappan, Dhruv Malik, S. Shankar Sastry, Thomas L. Griffiths, Anca D. Dragan
In robotics, value alignment is key to the design of collaborative robots that can integrate into human workflows, successfully inferring and adapting to their users' objectives as they go.
1 code implementation • 8 Jun 2017 • Joshua C. Peterson, Joshua T. Abbott, Thomas L. Griffiths
These networks learn representations of real-world stimuli that can potentially be leveraged to capture psychological representations.
no code implementations • 12 May 2017 • Dawn Chen, Joshua C. Peterson, Thomas L. Griffiths
Vector-space representations provide geometric tools for reasoning about the similarity of a set of objects and their relationships.
no code implementations • 11 May 2017 • Rachit Dubey, Thomas L. Griffiths
We present a rational analysis of curiosity, proposing that people's curiosity is driven by seeking stimuli that maximize their ability to make appropriate responses in the future.
no code implementations • 9 May 2017 • Joshua C. Peterson, Thomas L. Griffiths
Shepard's Universal Law of Generalization offered a compelling case for the first physics-like law in cognitive science that should hold for all intelligent agents in the universe.
1 code implementation • 6 Mar 2017 • Stephan C. Meylan, Thomas L. Griffiths
The inverse relationship between the length of a word and the frequency of its use, first identified by G. K. Zipf in 1935, is a classic empirical law that holds across a wide range of human languages.
no code implementations • 14 Sep 2016 • Neil R. Bramley, Peter Dayan, Thomas L. Griffiths, David A. Lagnado
Higher-level cognition depends on the ability to learn models of the world.
5 code implementations • 6 Aug 2016 • Joshua C. Peterson, Joshua T. Abbott, Thomas L. Griffiths
To remedy this, we develop a method for adapting deep features to align with human similarity judgments, resulting in image representations that can potentially be used to extend the scope of psychological experiments.
no code implementations • 1 Jun 2016 • Baxter S. Eaves Jr., Naomi H. Feldman, Thomas L. Griffiths, Patrick Shafto
We qualitatively compare the simulated teaching data with human IDS, finding that the teaching data exhibit many features of IDS, including some that have been taken as evidence that IDS is not for teaching.
no code implementations • NeurIPS 2012 • Joseph L. Austerweil, Joshua T. Abbott, Thomas L. Griffiths
The human mind has a remarkable ability to store a vast amount of information in memory, and an even more remarkable ability to retrieve these experiences when needed.
no code implementations • NeurIPS 2011 • Joshua T. Abbott, Katherine A. Heller, Zoubin Ghahramani, Thomas L. Griffiths
How do people determine which elements of a set are most representative of that set?
no code implementations • NeurIPS 2011 • Joseph L. Austerweil, Abram L. Friesen, Thomas L. Griffiths
The object people perceive in an image can depend on its orientation relative to the scene it is in (its reference frame).
no code implementations • NeurIPS 2011 • Thomas L. Griffiths, Michael James
This model successfully predicts human judgments from previous studies better than models of discrete causal inference, and outperforms several other plausible models of causal induction with continuous causes in accounting for people's inferences in a new experiment.
no code implementations • NeurIPS 2010 • Joseph L. Austerweil, Thomas L. Griffiths
Identifying the features of objects becomes a challenge when those features can change in their appearance.
no code implementations • NeurIPS 2009 • Anne Hsu, Thomas L. Griffiths
The former analyses assume that learners seek to identify grammatical sentences in a way that is robust to the distribution from which the sentences are generated, analogous to discriminative approaches in machine learning.
no code implementations • NeurIPS 2009 • Lei Shi, Thomas L. Griffiths
Here we propose a simple mechanism for Bayesian inference which involves averaging over a few feature detection neurons which fire at a rate determined by their similarity to a sensory stimulus.
no code implementations • NeurIPS 2009 • Kurt Miller, Michael I. Jordan, Thomas L. Griffiths
As the availability and importance of relational data -- such as the friendships summarized on a social networking website -- increases, it becomes increasingly important to have good models for such data.
no code implementations • NeurIPS 2008 • Thomas L. Griffiths, Joseph L. Austerweil
Almost all successful machine learning algorithms and cognitive models require powerful representations capturing the features that are relevant to a particular problem.
no code implementations • NeurIPS 2008 • Thomas L. Griffiths, Chris Lucas, Joseph Williams, Michael L. Kalish
Accounts of how people learn functional relationships between continuous variables have tended to focus on two possibilities: that people are estimating explicit functions, or that they are simply performing associative learning supported by similarity.
no code implementations • NeurIPS 2008 • Jing Xu, Thomas L. Griffiths
Many human interactions involve pieces of information being passed from one person to another, raising the question of how this process of information transmission is affected by the capacities of the agents involved.
no code implementations • NeurIPS 2008 • Roger P. Levy, Florencia Reali, Thomas L. Griffiths
Language comprehension in humans is significantly constrained by memory, yet rapid, highly incremental, and capable of utilizing a wide range of contextual information to resolve ambiguity and form expectations about future input.
no code implementations • NeurIPS 2008 • Christopher G. Lucas, Thomas L. Griffiths, Fei Xu, Christine Fawcett
Young children demonstrate the ability to make inferences about the preferences of other agents based on their choices.
no code implementations • NeurIPS 2007 • Alexandre Bouchard-Côté, Percy S. Liang, Dan Klein, Thomas L. Griffiths
We present a probabilistic approach to language change in which word forms are represented by phoneme sequences that undergo stochastic edits along the branches of a phylogenetic tree.