Search Results for author: Joshua Tenenbaum

Found 27 papers, 11 papers with code

Grounding Language about Belief in a Bayesian Theory-of-Mind

no code implementations16 Feb 2024 Lance Ying, Tan Zhi-Xuan, Lionel Wong, Vikash Mansinghka, Joshua Tenenbaum

In this paper, we take a step towards an answer by grounding the semantics of belief statements in a Bayesian theory-of-mind. By modeling how humans jointly infer coherent sets of goals, beliefs, and plans that explain an agent's actions, and then evaluating statements about the agent's beliefs against these inferences via epistemic logic, our framework provides a conceptual role semantics for belief, explaining the gradedness and compositionality of human belief attributions as well as their intimate connection with goals and plans.
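The graded evaluation of a belief statement can be illustrated with a minimal Bayesian sketch: infer a posterior over belief hypotheses from an observed action, then score the statement by the posterior mass of the hypotheses consistent with it. The scenario, hypotheses, and all numbers below are toy assumptions for illustration, not the paper's actual model.

```python
# An agent is observed walking toward door A. We infer which belief best
# explains the action, then grade the statement "the agent believes the
# key is behind door A" by its posterior probability.
hypotheses = {
    "believes_key_A": {"prior": 0.5, "likelihood_walk_to_A": 0.9},
    "believes_key_B": {"prior": 0.5, "likelihood_walk_to_A": 0.1},
}

# Bayes' rule: posterior ∝ prior × likelihood of the observed action.
unnorm = {h: v["prior"] * v["likelihood_walk_to_A"] for h, v in hypotheses.items()}
z = sum(unnorm.values())
posterior = {h: p / z for h, p in unnorm.items()}

# Graded truth value of the belief statement (here 0.9, not a hard true/false).
statement_prob = posterior["believes_key_A"]
```

The graded output falls naturally out of the posterior, matching the paper's point that human belief attributions are graded rather than binary.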

Attribute

How does the primate brain combine generative and discriminative computations in vision?

no code implementations11 Jan 2024 Benjamin Peters, James J. DiCarlo, Todd Gureckis, Ralf Haefner, Leyla Isik, Joshua Tenenbaum, Talia Konkle, Thomas Naselaris, Kimberly Stachenfeld, Zenna Tavares, Doris Tsao, Ilker Yildirim, Nikolaus Kriegeskorte

The alternative conception is that of vision as an inference process in Helmholtz's sense, where the sensory evidence is evaluated in the context of a generative model of the causal processes giving rise to it.

Planning with Sequence Models through Iterative Energy Minimization

no code implementations28 Mar 2023 Hongyi Chen, Yilun Du, Yiye Chen, Joshua Tenenbaum, Patricio A. Vela

In this paper, we suggest an approach towards integrating planning with sequence models based on the idea of iterative energy minimization, and illustrate how such a procedure leads to improved RL performance across different tasks.
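The core idea of planning as iterative energy minimization can be sketched in a few lines: treat the whole action sequence as an optimization variable and repeatedly descend an energy that combines a sequence-model plausibility term with a task cost. Everything below (the smoothness energy, the toy additive dynamics, the hyperparameters) is an illustrative assumption standing in for the learned components, not the paper's implementation.

```python
import torch

# Stand-in for a sequence model's "implausibility" score: prefer smooth plans.
def sequence_energy(actions):
    return (actions[1:] - actions[:-1]).pow(2).sum()

# Stand-in task cost under toy dynamics where the state is the sum of actions.
def task_energy(actions, goal):
    state = actions.cumsum(dim=0)[-1]
    return (state - goal).pow(2).sum()

def plan(goal, horizon=10, steps=200, lr=0.1):
    """Iteratively refine a full action sequence by minimizing total energy."""
    actions = torch.zeros(horizon, 2, requires_grad=True)
    opt = torch.optim.Adam([actions], lr=lr)
    for _ in range(steps):
        energy = sequence_energy(actions) + task_energy(actions, goal)
        opt.zero_grad()
        energy.backward()
        opt.step()
    return actions.detach()

goal = torch.tensor([3.0, -1.0])
plan_actions = plan(goal)
```

Because the plan is refined jointly rather than decoded step by step, the procedure can trade off per-step plausibility against the final task objective.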

Language Modelling, Reinforcement Learning (RL)

Is Conditional Generative Modeling all you need for Decision-Making?

no code implementations28 Nov 2022 Anurag Ajay, Yilun Du, Abhi Gupta, Joshua Tenenbaum, Tommi Jaakkola, Pulkit Agrawal

We further demonstrate the advantages of modeling policies as conditional diffusion models by considering two other conditioning variables: constraints and skills.

Decision Making, Offline RL +1

Discovering Generalizable Spatial Goal Representations via Graph-based Active Reward Learning

no code implementations24 Nov 2022 Aviv Netanyahu, Tianmin Shu, Joshua Tenenbaum, Pulkit Agrawal

To address this, we propose a reward learning approach, Graph-based Equivalence Mappings (GEM), that can discover spatial goal representations that are aligned with the intended goal specification, enabling successful generalization in unseen environments.

Imitation Learning

Designing Perceptual Puzzles by Differentiating Probabilistic Programs

no code implementations26 Apr 2022 Kartik Chandra, Tzu-Mao Li, Joshua Tenenbaum, Jonathan Ragan-Kelley

We design new visual illusions by finding "adversarial examples" for principled models of human perception: specifically, for probabilistic models, which treat vision as Bayesian inference.
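The search for such illusions can be sketched as gradient-based optimization of a stimulus against a differentiable observer model: maximize the gap between what the model perceives and the stimulus's true property, while staying close to a reference stimulus. The two-cue observer, its weights, and the penalty coefficient below are toy assumptions, not the paper's perception models.

```python
import torch

# Toy differentiable observer: percept is a fixed-weight combination of two cues.
w1, w2 = 0.7, 0.3
def percept(cue1, cue2):
    return w1 * cue1 + w2 * cue2

ref = torch.tensor([1.0, 0.0])          # reference stimulus (two cue values)
stim = ref.clone().requires_grad_(True)
opt = torch.optim.Adam([stim], lr=0.05)

for _ in range(100):
    true_value = stim.mean()                       # ground-truth property
    perceived = percept(stim[0], stim[1])
    # Maximize percept error (note the minus sign) with a small proximity penalty.
    loss = -(perceived - true_value).pow(2) + 0.01 * (stim - ref).pow(2).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()

illusion_size = float(abs(percept(stim[0], stim[1]) - stim.mean()))
```

With a probabilistic program compiled to a differentiable form, the same loop applies: the stimulus is pushed toward regions where the observer's posterior systematically misestimates the scene.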

Color Constancy, Probabilistic Programming

Predicate Invention for Bilevel Planning

1 code implementation17 Mar 2022 Tom Silver, Rohan Chitnis, Nishanth Kumar, Willie McClinton, Tomas Lozano-Perez, Leslie Pack Kaelbling, Joshua Tenenbaum

Our key idea is to learn predicates by optimizing a surrogate objective that is tractable but faithful to our real efficient-planning objective.

Building 3D Generative Models from Minimal Data

no code implementations4 Mar 2022 Skylar Sutherland, Bernhard Egger, Joshua Tenenbaum

We extend our model to a preliminary unsupervised learning framework that enables the learning of the distribution of 3D faces using one 3D template and a small number of 2D images.

Face Recognition, Gaussian Processes

Noether Networks: Meta-Learning Useful Conserved Quantities

no code implementations NeurIPS 2021 Ferran Alet, Dylan Doblar, Allan Zhou, Joshua Tenenbaum, Kenji Kawaguchi, Chelsea Finn

Progress in machine learning (ML) stems from a combination of data availability, computational resources, and an appropriate encoding of inductive biases.

Meta-Learning, Translation

The neural architecture of language: Integrative modeling converges on predictive processing

1 code implementation Proceedings of the National Academy of Sciences 2021 Martin Schrimpf, Idan Blank, Greta Tuckute, Carina Kauf, Eghbal Hosseini, Nancy Kanwisher, Joshua Tenenbaum, Evelina Fedorenko

The neuroscience of perception has recently been revolutionized with an integrative modeling approach in which computation, brain function, and behavior are linked across many datasets and many computational models.

Language Modelling, Probing Language Models +1

Identity-Expression Ambiguity in 3D Morphable Face Models

no code implementations29 Sep 2021 Bernhard Egger, Skylar Sutherland, Safa C. Medin, Joshua Tenenbaum

We demonstrate that non-orthogonality of the variation in identity and expression can cause identity-expression ambiguity in 3D Morphable Models, and that in practice expression and identity are far from orthogonal and can explain each other surprisingly well.

3D Reconstruction, 3D Shape Generation +1

Probabilistic Programming Bots in Intuitive Physics Game Play

no code implementations5 Apr 2021 Fahad Alhasoun, Sarah Alnegheimish, Joshua Tenenbaum

Recent findings suggest that humans deploy cognitive mechanisms akin to physics simulation engines to simulate the physics of objects.

Probabilistic Programming

Learning Symbolic Operators for Task and Motion Planning

1 code implementation28 Feb 2021 Tom Silver, Rohan Chitnis, Joshua Tenenbaum, Leslie Pack Kaelbling, Tomas Lozano-Perez

We then propose a bottom-up relational learning method for operator learning and show how the learned operators can be used for planning in a TAMP system.

Motion Planning, Operator learning +2

Modular Object-Oriented Games: A Task Framework for Reinforcement Learning, Psychology, and Neuroscience

1 code implementation25 Feb 2021 Nicholas Watters, Joshua Tenenbaum, Mehrdad Jazayeri

In recent years, trends towards studying simulated games have gained momentum in the fields of artificial intelligence, cognitive science, psychology, and neuroscience.

Reinforcement Learning (RL)

Improved Contrastive Divergence Training of Energy Based Models

4 code implementations2 Dec 2020 Yilun Du, Shuang Li, Joshua Tenenbaum, Igor Mordatch

Contrastive divergence is a popular method for training energy-based models, but it is known to have difficulties with training stability.
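The basic contrastive divergence loop can be sketched as follows: draw approximate model samples with Langevin dynamics, then push the energy of real data down and the energy of model samples up. The network architecture, toy 2D data, and hyperparameters here are illustrative assumptions; the paper's contribution is precisely the additional terms that stabilize this basic recipe.

```python
import torch

# Toy energy function: a small MLP mapping 2D inputs to a scalar energy.
energy_net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1)
)

def langevin_sample(x, steps=20, step_size=0.01):
    """Approximate model samples via noisy gradient descent on the energy."""
    x = x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        energy = energy_net(x).sum()
        grad, = torch.autograd.grad(energy, x)
        x = (x - step_size * grad
             + torch.randn_like(x) * (2 * step_size) ** 0.5)
        x = x.detach().requires_grad_(True)
    return x.detach()

def contrastive_divergence_loss(data):
    """Lower the energy of data, raise the energy of model samples."""
    negatives = langevin_sample(torch.randn_like(data))
    return energy_net(data).mean() - energy_net(negatives).mean()

# One training step on a toy dataset centered at (2, 2).
optimizer = torch.optim.Adam(energy_net.parameters(), lr=1e-3)
data = torch.randn(64, 2) + 2.0
loss = contrastive_divergence_loss(data)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

The instability the paper addresses comes partly from gradient terms this naive loss ignores, e.g. the dependence of the negative samples on the model parameters.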

Data Augmentation, Image Generation +1

Building 3D Morphable Models from a Single Scan

1 code implementation24 Nov 2020 Skylar Sutherland, Bernhard Egger, Joshua Tenenbaum

We propose a method for constructing generative models of 3D objects from a single 3D mesh.

Face Recognition, Gaussian Processes +1

A Long Horizon Planning Framework for Manipulating Rigid Pointcloud Objects

no code implementations16 Nov 2020 Anthony Simeonov, Yilun Du, Beomjoon Kim, Francois R. Hogan, Joshua Tenenbaum, Pulkit Agrawal, Alberto Rodriguez

We present a framework for solving long-horizon planning problems involving manipulation of rigid objects that operates directly from a point-cloud observation, i.e., without prior object models.

Graph Attention, Motion Planning +2

Planning with Learned Object Importance in Large Problem Instances using Graph Neural Networks

1 code implementation11 Sep 2020 Tom Silver, Rohan Chitnis, Aidan Curtis, Joshua Tenenbaum, Tomas Lozano-Perez, Leslie Pack Kaelbling

We conclude that learning to predict a sufficient set of objects for a planning problem is a simple, powerful, and general mechanism for planning in large instances.

Motion Planning, Task and Motion Planning

A Morphable Face Albedo Model

1 code implementation CVPR 2020 William A. P. Smith, Alassane Seck, Hannah Dee, Bernard Tiddeman, Joshua Tenenbaum, Bernhard Egger

In this paper, we bring together two divergent strands of research: photometric face capture and statistical 3D face appearance modelling.

Art Analysis, Face Model +1

GLIB: Efficient Exploration for Relational Model-Based Reinforcement Learning via Goal-Literal Babbling

1 code implementation22 Jan 2020 Rohan Chitnis, Tom Silver, Joshua Tenenbaum, Leslie Pack Kaelbling, Tomas Lozano-Perez

We address the problem of efficient exploration for transition model learning in the relational model-based reinforcement learning setting without extrinsic goals or rewards.

Decision Making, Efficient Exploration +3

Explaining intuitive difficulty judgments by modeling physical effort and risk

no code implementations11 May 2019 Ilker Yildirim, Basil Saeed, Grace Bennett-Pierre, Tobias Gerstenberg, Joshua Tenenbaum, Hyowon Gweon

The ability to estimate task difficulty is critical for many real-world decisions such as setting appropriate goals for ourselves or appreciating others' accomplishments.

Learning to Infer Program Sketches

1 code implementation17 Feb 2019 Maxwell Nye, Luke Hewitt, Joshua Tenenbaum, Armando Solar-Lezama

Our goal is to build systems which write code automatically from the kinds of specifications humans can most easily provide, such as examples and natural language instructions.

Memorization, Program Synthesis

Measuring and modeling the perception of natural and unconstrained gaze in humans and machines

no code implementations29 Nov 2016 Daniel Harari, Tao Gao, Nancy Kanwisher, Joshua Tenenbaum, Shimon Ullman

How accurate are humans in determining the gaze direction of others in lifelike scenes, when they can move their heads and eyes freely, and what are the sources of information for the underlying perceptual processes?

When Computer Vision Gazes at Cognition

1 code implementation8 Dec 2014 Tao Gao, Daniel Harari, Joshua Tenenbaum, Shimon Ullman

(1) Human accuracy in discriminating targets 8°–10° of visual angle apart is around 40% in a free-looking gaze task; (2) the ability to interpret the gaze of different lookers varies dramatically; (3) this variance can be captured by the computational model; (4) humans outperform the current model significantly.

