Search Results for author: Dan Gutfreund

Found 16 papers, 3 papers with code

A Bayesian-Symbolic Approach to Reasoning and Learning in Intuitive Physics

no code implementations · NeurIPS 2021 · Kai Xu, Akash Srivastava, Dan Gutfreund, Felix Sosa, Tomer Ullman, Josh Tenenbaum, Charles Sutton

In this paper, we propose a Bayesian-symbolic framework (BSP) for physical reasoning and learning that is close to human-level sample-efficiency and accuracy.

Bayesian Inference · Bilevel Optimization +1

AGENT: A Benchmark for Core Psychological Reasoning

no code implementations · 24 Feb 2021 · Tianmin Shu, Abhishek Bhandwaldar, Chuang Gan, Kevin A. Smith, Shari Liu, Dan Gutfreund, Elizabeth Spelke, Joshua B. Tenenbaum, Tomer D. Ullman

For machine agents to successfully interact with humans in real-world settings, they will need to develop an understanding of human mental life.

Core Psychological Reasoning

A Bayesian-Symbolic Approach to Learning and Reasoning for Intuitive Physics

no code implementations · 1 Jan 2021 · Kai Xu, Akash Srivastava, Dan Gutfreund, Felix Sosa, Tomer Ullman, Joshua B. Tenenbaum, Charles Sutton

As such, learning the laws reduces to symbolic regression, and Bayesian inference methods are used to obtain the distribution of unobserved properties.

Bayesian Inference · Common Sense Reasoning
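The abstract's two-step recipe can be illustrated with a toy sketch (this is not the paper's BSP implementation; the hypothesis space, noise model, and grid posterior below are our own simplifications): pick a force law from an enumerated symbolic hypothesis space, then run Bayesian inference over the unobserved constant.

```python
import math
import random

# Toy data from an inverse-square law with a small amount of noise.
random.seed(0)
true_k = 10.0
data = [(r, true_k / r**2 + random.gauss(0.0, 0.05)) for r in (1.0, 1.5, 2.0, 3.0)]

# Enumerated symbolic hypothesis space (a stand-in for symbolic regression).
candidates = {
    "k/r":   lambda k, r: k / r,
    "k/r^2": lambda k, r: k / r**2,
    "k*r":   lambda k, r: k * r,
}

def sse(form, k):
    """Sum of squared errors of a candidate law at constant k."""
    return sum((form(k, r) - y) ** 2 for r, y in data)

grid = [i / 10 for i in range(1, 201)]  # candidate k values in (0.1 .. 20.0)

# Step 1: "symbolic regression" — keep the form with the lowest best-fit error.
best_form = min(candidates, key=lambda n: min(sse(candidates[n], k) for k in grid))

# Step 2: Bayesian inference — grid posterior over k under a Gaussian likelihood.
loglik = [-sse(candidates[best_form], k) / (2 * 0.05**2) for k in grid]
m = max(loglik)
weights = [math.exp(l - m) for l in loglik]
z = sum(weights)
posterior = [w / z for w in weights]
k_map = grid[posterior.index(max(posterior))]

print(best_form, k_map)  # the inverse-square form should win, with k near 10
```

Real symbolic regression searches a compositional expression space rather than a fixed list, but the division of labor — discrete search over laws, probabilistic inference over latent properties — is the same.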

Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modelling

no code implementations · 25 Oct 2020 · Akash Srivastava, Yamini Bansal, Yukun Ding, Cole Hurwitz, Kai Xu, Bernhard Egger, Prasanna Sattigeri, Josh Tenenbaum, David D. Cox, Dan Gutfreund

Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.

Representation Learning
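One common family of such posterior penalties is a beta-weighted KL term in the spirit of beta-VAE (the paper's multi-stage method differs; the function names and toy numbers below are our own, purely for illustration):

```python
import math

def gaussian_kl(mu, logvar):
    """KL( N(mu, exp(logvar)) || N(0, I) ), summed over latent dimensions."""
    return sum(0.5 * (math.exp(lv) + m * m - 1.0 - lv) for m, lv in zip(mu, logvar))

def beta_vae_loss(recon_sq_err, mu, logvar, beta=4.0):
    # beta > 1 strengthens the pressure toward an independent, prior-like
    # latent code -- precisely the regularization that tends to trade away
    # reconstruction quality, as the abstract notes.
    return recon_sq_err + beta * gaussian_kl(mu, logvar)

# Toy per-example latent statistics.
loss = beta_vae_loss(recon_sq_err=0.3, mu=[0.1, -0.2], logvar=[0.0, -0.1])
```

A standard-normal posterior (mu = 0, logvar = 0) incurs zero penalty; any deviation adds to the loss in proportion to beta.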

CZ-GEM: A Framework for Disentangled Representation Learning

no code implementations · ICLR 2020 · Akash Srivastava, Yamini Bansal, Yukun Ding, Bernhard Egger, Prasanna Sattigeri, Josh Tenenbaum, David D. Cox, Dan Gutfreund

In this work, we tackle a slightly more intricate scenario where the observations are generated from a conditional distribution of some known control variate and some latent noise variate.

Representation Learning

ObjectNet: A large-scale bias-controlled dataset for pushing the limits of object recognition models

no code implementations · NeurIPS 2019 · Andrei Barbu, David Mayo, Julian Alverio, William Luo, Christopher Wang, Dan Gutfreund, Josh Tenenbaum, Boris Katz

Although we focus on object recognition here, data with controls can be gathered at scale throughout machine learning using automated tools, generating datasets that exercise models in new ways and thus providing valuable feedback to researchers.

Ranked #4 on Image Classification on ObjectNet (using extra training data)

Image Classification · Object Recognition

Reasoning About Human-Object Interactions Through Dual Attention Networks

no code implementations · ICCV 2019 · Tete Xiao, Quanfu Fan, Dan Gutfreund, Mathew Monfort, Aude Oliva, Bolei Zhou

The model not only finds when an action is happening and which object is being manipulated, but also identifies which part of the object is being interacted with.

Human-Object Interaction Detection
