no code implementations • 21 Aug 2024 • Lance Ying, Tan Zhi-Xuan, Lionel Wong, Vikash Mansinghka, Joshua B. Tenenbaum

How do people understand and evaluate claims about others' beliefs, even though these beliefs cannot be directly observed?

no code implementations • 23 Jul 2024 • Tan Zhi-Xuan, Gloria Kang, Vikash Mansinghka, Joshua B. Tenenbaum

The space of human goals is tremendously vast, and yet, from just a few moments of watching a scene or reading a story, we seem to spontaneously infer a range of plausible motivations for the people and characters involved.

no code implementations • 15 Mar 2024 • Aidan Curtis, George Matheos, Nishad Gothoskar, Vikash Mansinghka, Joshua Tenenbaum, Tomás Lozano-Pérez, Leslie Pack Kaelbling

We propose a strategy for TAMP with Uncertainty and Risk Awareness (TAMPURA) that is capable of efficiently solving long-horizon planning problems with initial-state and action outcome uncertainty, including problems that require information gathering and avoiding undesirable and irreversible outcomes.

1 code implementation • 27 Feb 2024 • Tan Zhi-Xuan, Lance Ying, Vikash Mansinghka, Joshua B. Tenenbaum

Our agent assists a human by modeling them as a cooperative planner who communicates joint plans to the assistant, then performs multimodal Bayesian inference over the human's goal from actions and language, using large language models (LLMs) to evaluate the likelihood of an instruction given a hypothesized plan.

no code implementations • 16 Feb 2024 • Lance Ying, Tan Zhi-Xuan, Lionel Wong, Vikash Mansinghka, Joshua Tenenbaum

In this paper, we take a step towards an answer by grounding the semantics of belief statements in a Bayesian theory of mind. By modeling how humans jointly infer coherent sets of goals, beliefs, and plans that explain an agent's actions, then evaluating statements about the agent's beliefs against these inferences via epistemic logic, our framework provides a conceptual-role semantics for belief. This explains the gradedness and compositionality of human belief attributions, as well as their intimate connection with goals and plans.

no code implementations • 28 Jun 2023 • Lance Ying, Tan Zhi-Xuan, Vikash Mansinghka, Joshua B. Tenenbaum

When humans cooperate, they frequently coordinate their activity through both verbal communication and non-verbal actions, using this information to infer a shared goal and plan.

no code implementations • 28 Oct 2021 • Nicholas Roy, Ingmar Posner, Tim Barfoot, Philippe Beaudoin, Yoshua Bengio, Jeannette Bohg, Oliver Brock, Isabelle Depatie, Dieter Fox, Dan Koditschek, Tomas Lozano-Perez, Vikash Mansinghka, Christopher Pal, Blake Richards, Dorsa Sadigh, Stefan Schaal, Gaurav Sukhatme, Denis Therien, Marc Toussaint, Michiel Van de Panne

Machine learning has long since become a keystone technology, accelerating science and applications in a broad range of domains.

no code implementations • 23 Feb 2021 • Sam Witty, David Jensen, Vikash Mansinghka

This paper introduces simulation-based identifiability (SBI), a procedure for testing the identifiability of queries in Bayesian causal inference approaches that are implemented as probabilistic programs.
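The core intuition behind an identifiability check can be illustrated outside any probabilistic-programming system: simulate ground-truth latents, condition on the simulated data, and ask whether the posterior over a query concentrates. A minimal sketch with a hypothetical toy model (not the paper's SBI procedure):

```python
import itertools

# Toy model: latents a, b uniform on {0..4}; we only observe s = a + b.
# Identifiability check: does the posterior over a query concentrate
# on a single value once we condition on the simulated data?

grid = range(5)

def posterior(query, s_obs):
    """Exact posterior over query(a, b) given the observation s = a + b."""
    weights = {}
    for a, b in itertools.product(grid, grid):
        if a + b == s_obs:          # condition on the simulated data
            q = query(a, b)
            weights[q] = weights.get(q, 0) + 1
    total = sum(weights.values())
    return {q: w / total for q, w in weights.items()}

a_true, b_true = 1, 3
s = a_true + b_true

post_a = posterior(lambda a, b: a, s)        # query: a alone
post_sum = posterior(lambda a, b: a + b, s)  # query: a + b

print(len(post_a))    # posterior spread over many values: a is not identifiable
print(post_sum)       # a point mass: a + b is identified exactly
```

Here the query `a` stays unidentifiable no matter how much data of this form arrives, while `a + b` is pinned down exactly; the same diagnostic pattern applies to posteriors computed by approximate inference over probabilistic programs.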

no code implementations • NeurIPS 2020 • Tan Zhi-Xuan, Jordyn Mann, Tom Silver, Josh Tenenbaum, Vikash Mansinghka

These models are specified as probabilistic programs, allowing us to represent and perform efficient Bayesian inference over an agent's goals and internal planning processes.
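The inverse-planning idea behind this line of work can be sketched with a much simpler, hypothetical model than the paper's: a near-rational agent on a line steps toward its goal with high probability, and Bayes' rule inverts that likelihood to infer the goal from observed actions.

```python
# Toy Bayesian inverse planning (illustrative only, not the paper's model):
# an agent walks on a line, and a near-rational agent steps toward its
# goal with probability 0.9. We infer the goal from observed steps.

goals = {"A": 0, "B": 10}          # candidate goal locations
prior = {"A": 0.5, "B": 0.5}

def step_likelihood(pos, action, goal_pos, rational=0.9):
    toward = 1 if goal_pos > pos else -1
    return rational if action == toward else 1 - rational

def infer_goal(start, actions):
    pos = start
    post = dict(prior)
    for a in actions:
        for g, gp in goals.items():
            post[g] *= step_likelihood(pos, a, gp)
        pos += a
    z = sum(post.values())
    return {g: p / z for g, p in post.items()}

post = infer_goal(5, [+1, +1, +1])  # three steps toward B
print(post)                         # posterior mass shifts strongly to B
```

The papers replace this hand-written likelihood with full probabilistic programs over planning processes, but the Bayesian inversion step is the same.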

no code implementations • AABI (Advances in Approximate Bayesian Inference) Symposium 2021 • George Matheos, Alexander K. Lew, Matin Ghavamizadeh, Stuart Russell, Marco Cusumano-Towner, Vikash Mansinghka

Open-universe probabilistic models enable Bayesian inference about how many objects underlie data, and how they are related.

no code implementations • ICML 2020 • Sam Witty, Kenta Takatsu, David Jensen, Vikash Mansinghka

Latent confounders (unobserved variables that influence both treatment and outcome) can bias estimates of causal effects.

no code implementations • 26 Jun 2020 • Span Spanbauer, Cameron Freer, Vikash Mansinghka

We introduce deep involutive generative models, a new architecture for deep generative modeling, and use them to define Involutive Neural MCMC, a new approach to fast neural MCMC.
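The building block here is that any deterministic involution f (with f(f(x)) = x) on an augmented state, combined with a Metropolis-Hastings correction, yields a valid MCMC kernel; the paper parameterizes f with a neural network. A minimal non-neural sketch of the involutive kernel, targeting a standard normal:

```python
import math
import random

random.seed(0)

def log_target(x):
    return -0.5 * x * x          # unnormalized standard normal

def log_aux(v):
    return -0.5 * v * v          # auxiliary variable v ~ N(0, 1)

def involution(x, v):
    # f(x, v) = (x + v, -v): applying it twice returns (x, v),
    # and its Jacobian determinant has absolute value 1.
    return x + v, -v

def involutive_mcmc(steps=20000):
    x, samples = 0.0, []
    for _ in range(steps):
        v = random.gauss(0.0, 1.0)            # refresh auxiliary variable
        x_new, v_new = involution(x, v)
        log_alpha = (log_target(x_new) + log_aux(v_new)
                     - log_target(x) - log_aux(v))
        if math.log(random.random()) < log_alpha:
            x = x_new                          # Metropolis-Hastings accept
        samples.append(x)
    return samples

s = involutive_mcmc()
mean = sum(s) / len(s)
var = sum((t - mean) ** 2 for t in s) / len(s)
print(round(mean, 2), round(var, 2))  # close to the target's 0 and 1
```

With this particular involution the kernel reduces to random-walk Metropolis; the paper's contribution is learning a flexible f (and making it involutive by construction) so the same accept/reject machinery mixes much faster.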

no code implementations • 30 Oct 2019 • Sam Witty, Alexander Lew, David Jensen, Vikash Mansinghka

This approach makes it straightforward to incorporate data from atomic interventions, as well as shift interventions, variance-scaling interventions, and other interventions that modify causal structure.
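The three intervention types can be illustrated on a tiny hypothetical structural causal model (not the paper's examples): each intervention rewrites the mechanism that generates a variable, which is exactly what editing a probabilistic program does.

```python
import random

random.seed(1)

# Tiny structural causal model (illustrative only):
#   Z ~ N(0,1);  X = Z + eps_x;  Y = 2*X + Z + eps_y.
# An intervention replaces X's mechanism, given its parent Z and noise eps_x.

def sample_y(x_mech=None, n=20000):
    ys = []
    for _ in range(n):
        z, ex, ey = (random.gauss(0, 1) for _ in range(3))
        x = z + ex if x_mech is None else x_mech(z, ex)
        ys.append(2 * x + z + ey)
    m = sum(ys) / n
    v = sum((y - m) ** 2 for y in ys) / n
    return m, v

m0, v0 = sample_y()                                # observational
m1, v1 = sample_y(lambda z, ex: 3.0)               # atomic: do(X = 3)
m2, v2 = sample_y(lambda z, ex: z + ex + 3.0)      # shift: X := X + 3
m3, v3 = sample_y(lambda z, ex: z + 2.0 * ex)      # variance-scaling noise
print(round(m1, 1), round(m2, 1))   # both move E[Y] to about 6
print(v1 < v0 < v3)                 # atomic severs Z, shrinking Var(Y);
                                    # scaled noise inflates it
```

Atomic and shift interventions raise the mean of Y identically here, but only the atomic intervention cuts the X–Z edge, which the variance of Y reveals: a distinction that matters when combining data from different intervention regimes.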

no code implementations • 22 May 2019 • Javier Felip, Nilesh Ahuja, David Gómez-Gutiérrez, Omesh Tickoo, Vikash Mansinghka

The underlying generative models are built from realistic simulation software, wrapped in a Bayesian error model for the gap between simulation outputs and real data.

1 code implementation • 4 Apr 2017 • Feras Saad, Leonardo Casarsa, Vikash Mansinghka

We found that human evaluators often prefer the results from probabilistic search to results from a standard baseline.

no code implementations • 21 Nov 2016 • Ulrich Schaechtle, Feras Saad, Alexey Radul, Vikash Mansinghka

There is a widespread need for techniques that can discover structure from time series data.

1 code implementation • 5 Nov 2016 • Feras Saad, Vikash Mansinghka

Datasets with hundreds of variables and many missing values are commonplace.

1 code implementation • 18 Aug 2016 • Feras Saad, Vikash Mansinghka

This paper introduces composable generative population models (CGPMs), a computational abstraction that extends directed graphical models and can be used to describe and compose a broad class of probabilistic data analysis techniques.
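The abstraction's key property is a shared probabilistic interface, roughly `simulate` and `logpdf` over a population's variables, so that heterogeneous models compose. A sketch with illustrative method names (not the paper's exact API):

```python
import math
import random

class NormalCGPM:
    """Models one variable per population member as N(mu, sigma^2)."""

    def __init__(self, mu=0.0, sigma=1.0):
        self.mu, self.sigma = mu, sigma

    def simulate(self, n):
        return [random.gauss(self.mu, self.sigma) for _ in range(n)]

    def logpdf(self, x):
        z = (x - self.mu) / self.sigma
        return -0.5 * z * z - math.log(self.sigma * math.sqrt(2 * math.pi))

class ProductCGPM:
    """Compose independent component models over disjoint variables."""

    def __init__(self, *parts):
        self.parts = parts

    def simulate(self, n):
        return list(zip(*(p.simulate(n) for p in self.parts)))

    def logpdf(self, xs):
        return sum(p.logpdf(x) for p, x in zip(self.parts, xs))

random.seed(0)
m = NormalCGPM(mu=2.0, sigma=0.5)
joint = ProductCGPM(NormalCGPM(), m)
print(round(m.logpdf(2.0), 3))           # density at the mean
print(round(joint.logpdf((0.0, 2.0)), 3))  # composed log-density
```

Because every component answers the same two queries, the composite can mix parametric, nonparametric, and even discriminative models behind one interface, which is the paper's point.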

no code implementations • 15 Dec 2015 • Vikash Mansinghka, Richard Tibbetts, Jay Baxter, Pat Shafto, Baxter Eaves

Is it possible to make statistical inference broadly accessible to non-statisticians without sacrificing mathematical rigor or inference quality?

1 code implementation • 3 Dec 2015 • Vikash Mansinghka, Patrick Shafto, Eric Jonas, Cap Petschulat, Max Gasner, Joshua B. Tenenbaum

CrossCat infers multiple non-overlapping views of the data, each consisting of a subset of the variables, and uses a separate nonparametric mixture to model each view.

no code implementations • 3 Jul 2015 • Frank Wood, Jan Willem van de Meent, Vikash Mansinghka

We introduce and demonstrate a new approach to inference in expressive probabilistic programming languages based on particle Markov chain Monte Carlo.
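The workhorse inside particle MCMC is a sequential Monte Carlo sweep whose (unbiased) marginal-likelihood estimate drives a Metropolis-Hastings acceptance ratio. A bootstrap particle filter on a toy linear-Gaussian state-space model (chosen for illustration only) shows the estimate being computed:

```python
import math
import random

random.seed(0)

def log_normal_pdf(x, mu, sigma):
    z = (x - mu) / sigma
    return -0.5 * z * z - math.log(sigma * math.sqrt(2 * math.pi))

def particle_filter_logml(ys, n_particles=500):
    """Bootstrap particle filter; returns an estimate of log p(ys)."""
    parts = [random.gauss(0, 1) for _ in range(n_particles)]
    logml = 0.0
    for y in ys:
        parts = [0.9 * x + random.gauss(0, 0.5) for x in parts]    # propagate
        ws = [math.exp(log_normal_pdf(y, x, 1.0)) for x in parts]  # weight
        logml += math.log(sum(ws) / n_particles)                   # evidence update
        parts = random.choices(parts, weights=ws, k=n_particles)   # resample
    return logml

ys = [0.5, 0.2, -0.1, 0.4]
lm = particle_filter_logml(ys)
print(round(lm, 2))
```

A particle-marginal Metropolis-Hastings sampler would call this inside each MH step, proposing new model parameters and accepting based on the ratio of the two log-marginal-likelihood estimates.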

no code implementations • CVPR 2015 • Tejas D. Kulkarni, Pushmeet Kohli, Joshua B. Tenenbaum, Vikash Mansinghka

Recent progress on probabilistic modeling and statistical learning, coupled with the availability of large training datasets, has led to remarkable progress in computer vision.

no code implementations • 31 May 2015 • Ardavan Saeedi, Vlad Firoiu, Vikash Mansinghka

Models of complex systems are often formalized as sequential software simulators: computationally intensive programs that iteratively build up probable system configurations given parameters and initial conditions.

no code implementations • 27 Jan 2015 • Jan-Willem van de Meent, Hongseok Yang, Vikash Mansinghka, Frank Wood

Particle Markov chain Monte Carlo techniques rank among current state-of-the-art methods for probabilistic program inference.

no code implementations • 6 Nov 2014 • Yutian Chen, Vikash Mansinghka, Zoubin Ghahramani

Probabilistic programming languages can simplify the development of machine learning techniques, but only if inference is sufficiently scalable.

no code implementations • 1 Apr 2014 • Vikash Mansinghka, Daniel Selsam, Yura Perov

Like Church, probabilistic models and inference problems in Venture are specified via a Turing-complete, higher-order probabilistic language descended from Lisp.

no code implementations • 24 Feb 2014 • Ardavan Saeedi, Tejas D. Kulkarni, Vikash Mansinghka, Samuel Gershman

Like Monte Carlo, DPVI can handle multiple modes, and yields exact results in a well-defined limit.

no code implementations • 20 Feb 2014 • Vikash Mansinghka, Eric Jonas

Here we show how to build fast Bayesian computing machines using intentionally stochastic, digital parts, narrowing this efficiency gap by multiple orders of magnitude.

no code implementations • 13 Jun 2012 • Noah Goodman, Vikash Mansinghka, Daniel M. Roy, Keith Bonawitz, Joshua B. Tenenbaum

We introduce Church, a universal language for describing stochastic generative processes.
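Church's central idea, that a model is just a stochastic procedure and conditioning is defined over its executions, can be sketched in Python (Church itself is a Lisp dialect; the sprinkler model below is a standard illustrative example, not one from the paper):

```python
import random

random.seed(0)

def flip(p=0.5):
    return random.random() < p

def model():
    # A stochastic procedure IS the model: each call is one execution.
    rain = flip(0.2)
    sprinkler = flip(0.4)
    wet_grass = rain or sprinkler or flip(0.05)
    return {"rain": rain, "wet": wet_grass}

def rejection_query(model, condition, n=20000):
    # Conditioning as in Church's rejection query:
    # keep only the executions where the condition holds.
    accepted = [t for t in (model() for _ in range(n)) if condition(t)]
    return sum(t["rain"] for t in accepted) / len(accepted)

# P(rain | grass is wet): rain becomes more probable once wetness is observed.
p = rejection_query(model, lambda t: t["wet"])
print(round(p, 2))
```

Because any computable stochastic process can be written as such a procedure, conditioning over executions gives the language its universality; practical Church implementations replace rejection with MCMC over execution traces.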
