1 code implementation • 13 Jul 2023 • Feras A. Saad, Brian J. Patton, Matthew D. Hoffman, Rif A. Saurous, Vikash K. Mansinghka
This paper presents a new approach to automatically discovering accurate models of complex time series data.
1 code implementation • 22 Jun 2023 • Lionel Wong, Gabriel Grand, Alexander K. Lew, Noah D. Goodman, Vikash K. Mansinghka, Jacob Andreas, Joshua B. Tenenbaum
Our architecture integrates two computational tools that have not previously come together: we model thinking with probabilistic programs, an expressive representation for commonsense reasoning; and we model meaning construction with large language models (LLMs), which support broad-coverage translation from natural language utterances to code expressions in a probabilistic programming language.
1 code implementation • 13 Jun 2023 • Gaurav Arya, Ruben Seyer, Frank Schäfer, Kartik Chandra, Alexander K. Lew, Mathieu Huot, Vikash K. Mansinghka, Jonathan Ragan-Kelley, Christopher Rackauckas, Moritz Schauer
We develop an algorithm for automatic differentiation of Metropolis-Hastings samplers, allowing us to differentiate through probabilistic inference, even if the model has discrete components within it.
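For reference, the kind of sampler the paper differentiates through looks like the following minimal random-walk Metropolis-Hastings sketch (illustrative only; the target density and step size here are arbitrary choices, and the sketch does not implement the paper's differentiation machinery):

```python
import math
import random

def log_target(x):
    # Unnormalized log-density of a standard normal target.
    return -0.5 * x * x

def metropolis_hastings(n_steps, step_size=1.0, x0=0.0, seed=0):
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_steps):
        # Symmetric Gaussian random-walk proposal, so the
        # acceptance ratio reduces to a ratio of target densities.
        x_prop = x + rng.gauss(0.0, step_size)
        log_alpha = log_target(x_prop) - log_target(x)
        if math.log(rng.random()) < log_alpha:
            x = x_prop  # accept; otherwise keep the current state
        samples.append(x)
    return samples

samples = metropolis_hastings(5000)
mean = sum(samples) / len(samples)
```

The discrete accept/reject step is exactly what makes such samplers non-differentiable in the usual sense, which is the obstacle the paper's algorithm addresses.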
2 code implementations • 5 Jun 2023 • Alexander K. Lew, Tan Zhi-Xuan, Gabriel Grand, Vikash K. Mansinghka
Even after fine-tuning and reinforcement learning, large language models (LLMs) can be difficult, if not impossible, to control reliably with prompts alone.
no code implementations • 21 Feb 2023 • Mathieu Huot, Alexander K. Lew, Vikash K. Mansinghka, Sam Staton
We introduce a new setting, the category of $\omega$PAP spaces, for reasoning denotationally about expressive differentiable and probabilistic programming languages.
1 code implementation • ICCV 2023 • Guangyao Zhou, Nishad Gothoskar, Lirui Wang, Joshua B. Tenenbaum, Dan Gutfreund, Miguel Lázaro-Gredilla, Dileep George, Vikash K. Mansinghka
In this paper, we introduce probabilistic modeling to the inverse graphics framework to quantify uncertainty and achieve robustness in 6D pose estimation tasks.
Ranked #1 on 6D Pose Estimation on YCB-Video

no code implementations • 27 Oct 2022 • Matthew D. Hoffman, Tuan Anh Le, Pavel Sountsov, Christopher Suter, Ben Lee, Vikash K. Mansinghka, Rif A. Saurous
The problem of inferring object shape from a single 2D image is underconstrained.
no code implementations • 5 Aug 2022 • Tan Zhi-Xuan, Joshua B. Tenenbaum, Vikash K. Mansinghka
Domain-general model-based planners often derive their generality by constructing search heuristics through the relaxation or abstraction of symbolic world models.
no code implementations • 4 Aug 2022 • Tan Zhi-Xuan, Nishad Gothoskar, Falk Pollok, Dan Gutfreund, Joshua B. Tenenbaum, Vikash K. Mansinghka
To facilitate the development of new models to bridge the gap between machine and human social intelligence, the recently proposed Baby Intuitions Benchmark (arXiv:2102.11938) provides a suite of tasks designed to evaluate commonsense reasoning about agents' goals and actions that even young infants exhibit.
1 code implementation • 5 Mar 2022 • Alexander K. Lew, Marco Cusumano-Towner, Vikash K. Mansinghka
A key design constraint when implementing Monte Carlo and variational inference algorithms is that it must be possible to cheaply and exactly evaluate the marginal densities of proposal distributions and variational families.
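The constraint described above can be seen in self-normalized importance sampling, where the proposal density q must be evaluable exactly and pointwise to form the weights p(x)/q(x). A minimal sketch (target and proposal chosen arbitrarily for illustration):

```python
import math
import random

def log_p(x):
    # Unnormalized target: N(2, 1).
    return -0.5 * (x - 2.0) ** 2

def log_q(x):
    # Proposal: N(0, 2^2), with a closed-form density — the
    # "cheap and exact" evaluation the constraint requires.
    return -0.5 * (x / 2.0) ** 2 - math.log(2.0 * math.sqrt(2.0 * math.pi))

rng = random.Random(0)
xs = [rng.gauss(0.0, 2.0) for _ in range(20000)]

# Importance weights w = p(x)/q(x), computed in log space for stability.
log_w = [log_p(x) - log_q(x) for x in xs]
m = max(log_w)
w = [math.exp(lw - m) for lw in log_w]

# Self-normalized estimate of the target mean.
est_mean = sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)
```

If q could only be sampled but not evaluated (e.g. the marginal of a model with latent variables), the weights above could not be computed exactly, which is the limitation the paper targets.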
no code implementations • 24 Feb 2022 • Feras A. Saad, Marco Cusumano-Towner, Vikash K. Mansinghka
Estimating information-theoretic quantities such as entropy and mutual information is central to many problems in statistics and machine learning, but challenging in high dimensions.
1 code implementation • NeurIPS 2021 • Nishad Gothoskar, Marco Cusumano-Towner, Ben Zinberg, Matin Ghavamizadeh, Falk Pollok, Austin Garrett, Joshua B. Tenenbaum, Dan Gutfreund, Vikash K. Mansinghka
We present 3DP3, a framework for inverse graphics that uses inference in a structured generative model of objects, scenes, and images.
1 code implementation • 16 Aug 2021 • Feras A. Saad, Vikash K. Mansinghka
This paper describes the hierarchical infinite relational model (HIRM), a new probabilistic generative model for noisy, sparse, and heterogeneous relational data.
no code implementations • 24 Jun 2021 • Arwa Alanqary, Gloria Z. Lin, Joie Le, Tan Zhi-Xuan, Vikash K. Mansinghka, Joshua B. Tenenbaum
Here, we extend the Bayesian Theory of Mind framework to model boundedly rational agents who may have mistaken goals, plans, and actions.
1 code implementation • 7 Oct 2020 • Feras A. Saad, Martin C. Rinard, Vikash K. Mansinghka
We present the Sum-Product Probabilistic Language (SPPL), a new probabilistic programming language that automatically delivers exact solutions to a broad range of probabilistic inference queries.
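To make "exact solutions to probabilistic inference queries" concrete, here is a generic exact-inference-by-enumeration sketch over a tiny discrete model. This is not SPPL's API — the model and variable names are hypothetical — but it shows the kind of posterior query such a system answers in closed form rather than by sampling:

```python
from itertools import product

# Toy model: P(rain), P(sprinkler), and P(wet | rain, sprinkler).
p_rain = {True: 0.2, False: 0.8}
p_sprinkler = {True: 0.1, False: 0.9}

def p_wet(rain, sprinkler):
    if rain and sprinkler:
        return 0.99
    if rain or sprinkler:
        return 0.9
    return 0.01

# Exact posterior P(rain | wet) by summing the joint over all worlds.
num = den = 0.0
for rain, sprinkler in product([True, False], repeat=2):
    joint = p_rain[rain] * p_sprinkler[sprinkler] * p_wet(rain, sprinkler)
    den += joint
    if rain:
        num += joint
posterior = num / den
```

SPPL's contribution is automating this kind of exact computation symbolically, for a much broader class of models and queries than brute-force enumeration can handle.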
1 code implementation • 23 Jul 2020 • Alexander K. Lew, Monica Agrawal, David Sontag, Vikash K. Mansinghka
Data cleaning is naturally framed as probabilistic inference in a generative model of ground-truth data and likely errors, but the diversity of real-world error patterns and the hardness of inference make Bayesian approaches difficult to automate.
2 code implementations • 20 Jul 2020 • Marco Cusumano-Towner, Alexander K. Lew, Vikash K. Mansinghka
Involutive MCMC is a unifying mathematical construction for MCMC kernels that generalizes many classic and state-of-the-art MCMC algorithms, from reversible jump MCMC to kernels based on deep neural networks.
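The simplest instance of the involutive MCMC recipe can be sketched directly: sample an auxiliary variable v ~ q, apply an involution f to (x, v), and accept with a ratio that includes the Jacobian of f. Taking f(x, v) = (v, x) — a coordinate swap, which is its own inverse with Jacobian determinant 1 — recovers independent Metropolis-Hastings (target and auxiliary densities below are arbitrary illustrative choices):

```python
import math
import random

def log_p(x):
    # Unnormalized target: N(1, 1).
    return -0.5 * (x - 1.0) ** 2

def log_q(v):
    # Unnormalized auxiliary density: N(0, 3^2).
    return -0.5 * (v / 3.0) ** 2

def involutive_step(x, rng):
    v = rng.gauss(0.0, 3.0)       # sample auxiliary variable v ~ q
    x_new, v_new = v, x           # apply the involution f(x, v) = (v, x)
    # Acceptance ratio: p(x')q(v') / (p(x)q(v)); |det Jf| = 1 for a
    # swap, so no Jacobian correction term appears.
    log_alpha = (log_p(x_new) + log_q(v_new)) - (log_p(x) + log_q(v))
    return x_new if math.log(rng.random()) < log_alpha else x

rng = random.Random(1)
x, samples = 0.0, []
for _ in range(20000):
    x = involutive_step(x, rng)
    samples.append(x)
mean = sum(samples) / len(samples)
```

Richer choices of involution and auxiliary distribution recover reversible jump, HMC-style, and neural-network-based kernels within the same construction.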
1 code implementation • 13 Jun 2020 • Tan Zhi-Xuan, Jordyn L. Mann, Tom Silver, Joshua B. Tenenbaum, Vikash K. Mansinghka
These models are specified as probabilistic programs, allowing us to represent and perform efficient Bayesian inference over an agent's goals and internal planning processes.
no code implementations • 14 Jul 2019 • Feras A. Saad, Marco F. Cusumano-Towner, Ulrich Schaechtle, Martin C. Rinard, Vikash K. Mansinghka
These techniques work with probabilistic domain-specific data modeling languages that capture key properties of a broad class of data generating processes, using Bayesian inference to synthesize probabilistic programs in these modeling languages given observed data.
no code implementations • 26 Feb 2019 • Feras A. Saad, Cameron E. Freer, Nathanael L. Ackerman, Vikash K. Mansinghka
Unlike most existing test statistics, the proposed test statistic is distribution-free and its exact (non-asymptotic) sampling distribution is known in closed form.
no code implementations • 11 Jan 2018 • Marco F. Cusumano-Towner, Vikash K. Mansinghka
Monte Carlo inference has asymptotic guarantees, but can be slow when using generic proposals.
1 code implementation • 18 Oct 2017 • Feras A. Saad, Vikash K. Mansinghka
We apply the technique to challenging forecasting and imputation tasks using seasonal flu data from the US Centers for Disease Control and Prevention, demonstrating superior forecasting accuracy and competitive imputation accuracy as compared to multiple widely used baselines.
no code implementations • NeurIPS 2017 • Marco F. Cusumano-Towner, Vikash K. Mansinghka
This paper introduces the auxiliary inference divergence estimator (AIDE), an algorithm for measuring the accuracy of approximate inference algorithms.
1 code implementation • 17 Apr 2017 • Marco F. Cusumano-Towner, Alexey Radul, David Wingate, Vikash K. Mansinghka
Intelligent systems sometimes need to infer the probable goals of people, cars, and robots, based on partial observations of their motion.
no code implementations • 14 Dec 2016 • Marco F. Cusumano-Towner, Vikash K. Mansinghka
This paper introduces the probabilistic module interface, which allows encapsulation of complex probabilistic models with latent variables alongside custom stochastic approximate inference machinery, and provides a platform-agnostic abstraction barrier separating the model internals from the host probabilistic inference system.
no code implementations • 7 Dec 2016 • Marco F. Cusumano-Towner, Vikash K. Mansinghka
A key limitation of sampling algorithms for approximate inference is that it is difficult to quantify their approximation error.
no code implementations • NeurIPS 2016 • Feras Saad, Vikash K. Mansinghka
Probabilistic techniques are central to data analysis, but different approaches can be challenging to apply, combine, and compare.
no code implementations • 31 May 2016 • Marco F. Cusumano-Towner, Vikash K. Mansinghka
This paper introduces a new technique for quantifying the approximation error of a broad class of probabilistic inference programs, including ones based on both variational and Monte Carlo approaches.
no code implementations • 17 Dec 2015 • Ulrich Schaechtle, Ben Zinberg, Alexey Radul, Kostas Stathis, Vikash K. Mansinghka
Gaussian Processes (GPs) are widely used tools in statistics, machine learning, robotics, computer vision, and scientific computation.
no code implementations • 1 Mar 2015 • Jonathan H. Huggins, Karthik Narasimhan, Ardavan Saeedi, Vikash K. Mansinghka
We derive the small-variance asymptotics for parametric and nonparametric MJPs for both directly observed and hidden state models.
no code implementations • 4 Jul 2014 • Tejas D. Kulkarni, Vikash K. Mansinghka, Pushmeet Kohli, Joshua B. Tenenbaum
We show that it is possible to solve challenging, real-world 3D vision problems by approximate inference in generative models for images based on rendering the outputs of probabilistic CAD (PCAD) programs.
no code implementations • NeurIPS 2013 • Vikash K. Mansinghka, Tejas D. Kulkarni, Yura N. Perov, Joshua B. Tenenbaum
The idea of computer vision as the Bayesian inverse problem to computer graphics has a long history and an appealing elegance, but it has proved difficult to directly implement.
no code implementations • 8 Apr 2013 • Dan Lovell, Jonathan Malmaud, Ryan P. Adams, Vikash K. Mansinghka
Applied to mixture modeling, our approach enables the Dirichlet process to simultaneously learn clusters that describe the data and superclusters that define the granularity of parallelization.
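For context, the standard sequential representation of the Dirichlet process prior over clusterings is the Chinese restaurant process, sketched below (this illustrates ordinary DP clustering, not the paper's parallel supercluster scheme):

```python
import random

def crp_assignments(n, alpha, seed=0):
    rng = random.Random(seed)
    counts = []   # customers per table (cluster sizes)
    labels = []   # table assignment for each customer
    for i in range(n):
        # Customer i starts a new table with probability
        # alpha / (i + alpha), or joins existing table k with
        # probability counts[k] / (i + alpha).
        r = rng.random() * (i + alpha)
        if r < alpha:
            counts.append(1)
            labels.append(len(counts) - 1)
        else:
            r -= alpha
            k = 0
            while r >= counts[k]:
                r -= counts[k]
                k += 1
            counts[k] += 1
            labels.append(k)
    return labels, counts

labels, counts = crp_assignments(1000, alpha=2.0)
```

The rich-get-richer dynamics of this process yield a few large clusters and a long tail of small ones; the paper's contribution is a second DP level whose "superclusters" govern how such clusters are partitioned across parallel workers.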