no code implementations • 15 Mar 2024 • Aidan Curtis, George Matheos, Nishad Gothoskar, Vikash Mansinghka, Joshua Tenenbaum, Tomás Lozano-Pérez, Leslie Pack Kaelbling
We propose a strategy for TAMP with Uncertainty and Risk Awareness (TAMPURA) that is capable of efficiently solving long-horizon planning problems with initial-state and action outcome uncertainty, including problems that require information gathering and avoiding undesirable and irreversible outcomes.
1 code implementation • 8 Mar 2024 • Kartik Chandra, Tzu-Mao Li, Rachit Nigam, Joshua Tenenbaum, Jonathan Ragan-Kelley
Often, a good explanation for a program's unexpected behavior is a bug in the programmer's code.
no code implementations • 16 Feb 2024 • Lance Ying, Tan Zhi-Xuan, Lionel Wong, Vikash Mansinghka, Joshua Tenenbaum
In this paper, we take a step towards an answer by grounding the semantics of belief statements in a Bayesian theory of mind. By modeling how humans jointly infer coherent sets of goals, beliefs, and plans that explain an agent's actions, and then evaluating statements about the agent's beliefs against these inferences via epistemic logic, our framework provides a conceptual-role semantics for belief, explaining the gradedness and compositionality of human belief attributions, as well as their intimate connection with goals and plans.
no code implementations • 11 Jan 2024 • Benjamin Peters, James J. DiCarlo, Todd Gureckis, Ralf Haefner, Leyla Isik, Joshua Tenenbaum, Talia Konkle, Thomas Naselaris, Kimberly Stachenfeld, Zenna Tavares, Doris Tsao, Ilker Yildirim, Nikolaus Kriegeskorte
The alternative conception is that of vision as an inference process in Helmholtz's sense, where the sensory evidence is evaluated in the context of a generative model of the causal processes giving rise to it.
no code implementations • 28 Mar 2023 • Hongyi Chen, Yilun Du, Yiye Chen, Joshua Tenenbaum, Patricio A. Vela
In this paper, we propose an approach to integrating planning with sequence models based on the idea of iterative energy minimization, and illustrate how such a procedure leads to improved RL performance across different tasks.
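The core idea of planning by iterative energy minimization can be sketched in toy form. The quadratic energy below (endpoint terms plus a smoothness term) is hand-written for illustration and stands in for the learned sequence-model energy used in the paper; all names and hyperparameters are illustrative:

```python
import numpy as np

def plan_energy(traj, start, goal, w_smooth=1.0):
    """Toy energy over a 1-D trajectory: pin the endpoints, penalize jumps."""
    e_ends = np.sum((traj[0] - start) ** 2) + np.sum((traj[-1] - goal) ** 2)
    e_smooth = w_smooth * np.sum(np.diff(traj, axis=0) ** 2)
    return e_ends + e_smooth

def refine_plan(traj, start, goal, steps=200, lr=0.1, eps=1e-4):
    """Iteratively lower the plan's energy with numerical gradient descent."""
    traj = traj.copy()
    for _ in range(steps):
        grad = np.zeros_like(traj)
        for i in range(traj.size):
            d = np.zeros_like(traj)
            d.flat[i] = eps
            grad.flat[i] = (plan_energy(traj + d, start, goal)
                            - plan_energy(traj - d, start, goal)) / (2 * eps)
        traj -= lr * grad
    return traj

start, goal = np.array([0.0]), np.array([1.0])
init = np.zeros((8, 1))          # naive initial plan: stay at the start
plan = refine_plan(init, start, goal)
```

The refined plan trades off reaching the goal against smoothness; with a learned energy in place of `plan_energy`, the same refinement loop becomes a planner.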
no code implementations • 28 Nov 2022 • Anurag Ajay, Yilun Du, Abhi Gupta, Joshua Tenenbaum, Tommi Jaakkola, Pulkit Agrawal
We further demonstrate the advantages of modeling policies as conditional diffusion models by considering two other conditioning variables: constraints and skills.
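Conditioning a diffusion-style sampler is commonly done by mixing conditional and unconditional noise predictions (classifier-free-guidance-style). The sketch below uses a hand-written stand-in denoiser, not the paper's trained model; the guidance weight, step count, and denoiser are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def denoiser(x, t, cond=None):
    """Stand-in for a learned noise predictor; a real policy would use a
    trained network conditioned on e.g. constraints or skills."""
    drift = 0.0 if cond is None else cond
    return x - drift  # toy: predicted noise pulls x toward `cond`

def guided_sample(shape, cond, w=2.0, steps=50, step_size=0.1):
    """Guidance-style sampling sketch: combine conditional and unconditional
    noise predictions, then take a small denoising step."""
    x = rng.standard_normal(shape)
    for t in reversed(range(steps)):
        eps_u = denoiser(x, t)              # unconditional prediction
        eps_c = denoiser(x, t, cond=cond)   # conditioned prediction
        eps = eps_u + w * (eps_c - eps_u)   # guide toward the condition
        x = x - step_size * eps
    return x

traj = guided_sample((8, 2), cond=1.0)
```

With a toy linear denoiser the samples simply contract toward a condition-determined fixed point; the structure of the loop, not the dynamics, is the point.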
no code implementations • 24 Nov 2022 • Aviv Netanyahu, Tianmin Shu, Joshua Tenenbaum, Pulkit Agrawal
To address this, we propose a reward learning approach, Graph-based Equivalence Mappings (GEM), that can discover spatial goal representations that are aligned with the intended goal specification, enabling successful generalization in unseen environments.
no code implementations • 26 Apr 2022 • Kartik Chandra, Tzu-Mao Li, Joshua Tenenbaum, Jonathan Ragan-Kelley
We design new visual illusions by finding "adversarial examples" for principled models of human perception -- specifically, for probabilistic models, which treat vision as Bayesian inference.
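The "adversarial examples for probabilistic models" idea can be illustrated with a toy Bayesian observer: gradient ascent nudges a stimulus until the observer's posterior favors the wrong hypothesis. The two-Gaussian observer, class names, and numbers below are illustrative assumptions, not the perception models used in the paper:

```python
import numpy as np

# Toy probabilistic "perception": two Gaussian hypotheses over a 1-D
# stimulus (say, perceived as "dark" vs "light"); Bayesian inference
# favors the class with the higher posterior.
MU = np.array([-1.0, 1.0])      # class means
SIGMA = 0.5
PRIOR = np.array([0.5, 0.5])

def log_posterior(x, k):
    """Log posterior probability of class k given stimulus x."""
    log_joint = -0.5 * ((x - MU) / SIGMA) ** 2 + np.log(PRIOR)
    return log_joint[k] - np.logaddexp(log_joint[0], log_joint[1])

def adversarial_stimulus(x0, wrong_class, steps=100, lr=0.05, eps=1e-5):
    """Nudge the stimulus by gradient ascent until the observer's
    posterior favors the wrong class -- an 'illusion' for this observer."""
    x = x0
    for _ in range(steps):
        g = (log_posterior(x + eps, wrong_class)
             - log_posterior(x - eps, wrong_class)) / (2 * eps)
        x += lr * g
    return x

x0 = -0.8                       # clearly closer to class 0
x_adv = adversarial_stimulus(x0, wrong_class=1)
```

For richer models of perception the same recipe applies: differentiate the model's inference with respect to the stimulus and ascend toward a misleading percept.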
1 code implementation • 17 Mar 2022 • Tom Silver, Rohan Chitnis, Nishanth Kumar, Willie McClinton, Tomas Lozano-Perez, Leslie Pack Kaelbling, Joshua Tenenbaum
Our key idea is to learn predicates by optimizing a surrogate objective that is tractable but faithful to our real efficient-planning objective.
no code implementations • 4 Mar 2022 • Skylar Sutherland, Bernhard Egger, Joshua Tenenbaum
We extend our model to a preliminary unsupervised learning framework that enables the learning of the distribution of 3D faces using one 3D template and a small number of 2D images.
no code implementations • 19 Jan 2022 • Edmond Awad, Sydney Levine, Andrea Loreggia, Nicholas Mattei, Iyad Rahwan, Francesca Rossi, Kartik Talamadupula, Joshua Tenenbaum, Max Kleiman-Weiner
We can invent novel rules on the fly.
no code implementations • NeurIPS 2021 • Ferran Alet, Dylan Doblar, Allan Zhou, Joshua Tenenbaum, Kenji Kawaguchi, Chelsea Finn
Progress in machine learning (ML) stems from a combination of data availability, computational resources, and an appropriate encoding of inductive biases.
1 code implementation • Proceedings of the National Academy of Sciences 2021 • Martin Schrimpf, Idan Blank, Greta Tuckute, Carina Kauf, Eghbal Hosseini, Nancy Kanwisher, Joshua Tenenbaum, Evelina Fedorenko
The neuroscience of perception has recently been revolutionized with an integrative modeling approach in which computation, brain function, and behavior are linked across many datasets and many computational models.
no code implementations • 29 Sep 2021 • Bernhard Egger, Skylar Sutherland, Safa C. Medin, Joshua Tenenbaum
We demonstrate that non-orthogonality of the variation in identity and expression can cause identity-expression ambiguity in 3D Morphable Models, and that in practice expression and identity are far from orthogonal and can explain each other surprisingly well.
no code implementations • 5 Apr 2021 • Fahad Alhasoun, Sarah Alnegheimish, Joshua Tenenbaum
Recent findings suggest that humans deploy a cognitive mechanism akin to a physics simulation engine to simulate the physics of objects.
no code implementations • 12 Mar 2021 • Christian Bongiorno, Yulun Zhou, Marta Kryven, David Theurel, Alessandro Rizzo, Paolo Santi, Joshua Tenenbaum, Carlo Ratti
How do pedestrians choose their paths within city street networks?
1 code implementation • 28 Feb 2021 • Tom Silver, Rohan Chitnis, Joshua Tenenbaum, Leslie Pack Kaelbling, Tomas Lozano-Perez
We then propose a bottom-up relational learning method for operator learning and show how the learned operators can be used for planning in a TAMP system.
1 code implementation • 25 Feb 2021 • Nicholas Watters, Joshua Tenenbaum, Mehrdad Jazayeri
In recent years, trends towards studying simulated games have gained momentum in the fields of artificial intelligence, cognitive science, psychology, and neuroscience.
4 code implementations • 2 Dec 2020 • Yilun Du, Shuang Li, Joshua Tenenbaum, Igor Mordatch
Contrastive divergence is a popular method of training energy-based models, but is known to have difficulties with training stability.
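As a concrete illustration of contrastive divergence (vanilla CD-1, not the stabilized training studied in this paper), here is a toy update for a small binary RBM; all sizes and hyperparameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyRBM:
    """Minimal binary RBM trained with one-step contrastive divergence."""

    def __init__(self, n_visible, n_hidden):
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)
        self.b_h = np.zeros(n_hidden)

    def cd1_update(self, v0, lr=0.1):
        # Positive phase: hidden probabilities given the data.
        ph0 = sigmoid(v0 @ self.W + self.b_h)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        # Negative phase: one Gibbs step back to a "reconstruction".
        pv1 = sigmoid(h0 @ self.W.T + self.b_v)
        v1 = (rng.random(pv1.shape) < pv1).astype(float)
        ph1 = sigmoid(v1 @ self.W + self.b_h)
        # CD-1 gradient: data correlations minus reconstruction correlations.
        n = len(v0)
        self.W += lr * (v0.T @ ph0 - v1.T @ ph1) / n
        self.b_v += lr * (v0 - v1).mean(axis=0)
        self.b_h += lr * (ph0 - ph1).mean(axis=0)
        # Reconstruction error as a rough training signal.
        return np.mean((v0 - pv1) ** 2)

data = (rng.random((32, 6)) < 0.5).astype(float)
rbm = TinyRBM(n_visible=6, n_hidden=4)
errs = [rbm.cd1_update(data) for _ in range(50)]
```

The instability this paper addresses arises because the negative-phase samples (here, one Gibbs step) only approximate the model distribution, biasing the gradient.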
1 code implementation • 24 Nov 2020 • Skylar Sutherland, Bernhard Egger, Joshua Tenenbaum
We propose a method for constructing generative models of 3D objects from a single 3D mesh.
no code implementations • 16 Nov 2020 • Anthony Simeonov, Yilun Du, Beomjoon Kim, Francois R. Hogan, Joshua Tenenbaum, Pulkit Agrawal, Alberto Rodriguez
We present a framework for solving long-horizon planning problems involving manipulation of rigid objects that operates directly from a point-cloud observation, i.e., without prior object models.
1 code implementation • 11 Sep 2020 • Tom Silver, Rohan Chitnis, Aidan Curtis, Joshua Tenenbaum, Tomas Lozano-Perez, Leslie Pack Kaelbling
We conclude that learning to predict a sufficient set of objects for a planning problem is a simple, powerful, and general mechanism for planning in large instances.
no code implementations • 24 Jul 2020 • Yilun Du, Kevin Smith, Tomer Ullman, Joshua Tenenbaum, Jiajun Wu
We study the problem of unsupervised physical object discovery.
1 code implementation • CVPR 2020 • William A. P. Smith, Alassane Seck, Hannah Dee, Bernard Tiddeman, Joshua Tenenbaum, Bernhard Egger
In this paper, we bring together two divergent strands of research: photometric face capture and statistical 3D face appearance modelling.
1 code implementation • 22 Jan 2020 • Rohan Chitnis, Tom Silver, Joshua Tenenbaum, Leslie Pack Kaelbling, Tomas Lozano-Perez
We address the problem of efficient exploration for transition model learning in the relational model-based reinforcement learning setting without extrinsic goals or rewards.
no code implementations • 11 May 2019 • Ilker Yildirim, Basil Saeed, Grace Bennett-Pierre, Tobias Gerstenberg, Joshua Tenenbaum, Hyowon Gweon
The ability to estimate task difficulty is critical for many real-world decisions such as setting appropriate goals for ourselves or appreciating others' accomplishments.
1 code implementation • 17 Feb 2019 • Maxwell Nye, Luke Hewitt, Joshua Tenenbaum, Armando Solar-Lezama
Our goal is to build systems which write code automatically from the kinds of specifications humans can most easily provide, such as examples and natural language instruction.
no code implementations • 29 Nov 2016 • Daniel Harari, Tao Gao, Nancy Kanwisher, Joshua Tenenbaum, Shimon Ullman
How accurate are humans in determining the gaze direction of others in lifelike scenes, when they can move their heads and eyes freely, and what are the sources of information for the underlying perceptual processes?
1 code implementation • 8 Dec 2014 • Tao Gao, Daniel Harari, Joshua Tenenbaum, Shimon Ullman
(1) Human accuracy in discriminating targets 8°–10° of visual angle apart is around 40% in a free-looking gaze task; (2) the ability to interpret the gaze of different lookers varies dramatically; (3) this variance can be captured by the computational model; (4) humans significantly outperform the current model.