Search Results for author: Tuan Anh Le

Found 23 papers, 9 papers with code

Robust Inverse Graphics via Probabilistic Inference

no code implementations • 2 Feb 2024 • Tuan Anh Le, Pavel Sountsov, Matthew D. Hoffman, Ben Lee, Brian Patton, Rif A. Saurous

How do we infer a 3D scene from a single image in the presence of corruptions like rain, snow or fog?

Training Chain-of-Thought via Latent-Variable Inference

no code implementations • NeurIPS 2023 • Du Phan, Matthew D. Hoffman, David Dohan, Sholto Douglas, Tuan Anh Le, Aaron Parisi, Pavel Sountsov, Charles Sutton, Sharad Vikram, Rif A. Saurous

Large language models (LLMs) solve problems more accurately and interpretably when instructed to work out the answer step by step using a "chain-of-thought" (CoT) prompt.

Neural Amortized Inference for Nested Multi-agent Reasoning

1 code implementation • 21 Aug 2023 • Kunal Jha, Tuan Anh Le, Chuanyang Jin, Yen-Ling Kuo, Joshua B. Tenenbaum, Tianmin Shu

Multi-agent interactions, such as communication, teaching, and bluffing, often rely on higher-order social inference, i.e., understanding how others infer oneself.

Drawing out of Distribution with Neuro-Symbolic Generative Models

no code implementations • 3 Jun 2022 • Yichao Liang, Joshua B. Tenenbaum, Tuan Anh Le, N. Siddharth

We then adopt a subset of the Omniglot challenge tasks and evaluate DooD's ability to generate new exemplars (both unconditionally and conditionally) and to perform one-shot classification, showing that it matches the state of the art.

Hybrid Memoised Wake-Sleep: Approximate Inference at the Discrete-Continuous Interface

no code implementations • ICLR 2022 • Tuan Anh Le, Katherine M. Collins, Luke Hewitt, Kevin Ellis, N. Siddharth, Samuel J. Gershman, Joshua B. Tenenbaum

We build on a recent approach, Memoised Wake-Sleep (MWS), which alleviates part of the problem by memoising discrete variables, and extend it to handle continuous variables in a principled and effective way: we learn a separate recognition model used for importance-sampling-based approximate inference and marginalization.

Scene Understanding • Time Series • +1
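The importance-sampling marginalization mentioned above can be illustrated on a toy conjugate model. This is an illustrative sketch, not the paper's implementation; the fixed Gaussian proposal below stands in for a learned recognition model, and the model itself is an assumption chosen so the exact answer is known.

```python
import math, random

random.seed(0)

def log_normal(v, mean, std):
    return -0.5 * ((v - mean) / std) ** 2 - math.log(std * math.sqrt(2 * math.pi))

def log_marginal_is(x, n_samples=50_000):
    """Importance-sampling estimate of log p(x) for the toy model
    z ~ N(0, 1), x | z ~ N(z, 1). The fixed Gaussian proposal
    q(z | x) = N(x / 2, 1) stands in for a learned recognition model."""
    mu_q, std_q = x / 2.0, 1.0
    log_ws = []
    for _ in range(n_samples):
        z = random.gauss(mu_q, std_q)
        log_ws.append(log_normal(z, 0.0, 1.0)        # prior p(z)
                      + log_normal(x, z, 1.0)         # likelihood p(x | z)
                      - log_normal(z, mu_q, std_q))   # proposal q(z | x)
    m = max(log_ws)  # log-sum-exp for numerical stability
    return m + math.log(sum(math.exp(lw - m) for lw in log_ws) / n_samples)

x_obs = 1.3
est = log_marginal_is(x_obs)
exact = log_normal(x_obs, 0.0, math.sqrt(2.0))  # analytically, p(x) = N(0, 2)
print(est, exact)  # the estimate lands close to the exact value
```

In the paper's setting the continuous latents are handled this way while the discrete ones are memoised; here everything is collapsed into a single continuous latent for brevity.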

Learning to learn generative programs with Memoised Wake-Sleep

no code implementations • 6 Jul 2020 • Luke B. Hewitt, Tuan Anh Le, Joshua B. Tenenbaum

We study a class of neuro-symbolic generative models in which neural networks are used both for inference and as priors over symbolic, data-generating programs.

Explainable Models • Few-Shot Learning • +1

Semi-supervised Sequential Generative Models

no code implementations • 30 Jun 2020 • Michael Teng, Tuan Anh Le, Adam Scibior, Frank Wood

We introduce a novel objective for training deep generative time-series models with discrete latent variables for which supervision is only sparsely available.

Time Series • Time Series Analysis

Amortized Population Gibbs Samplers with Neural Sufficient Statistics

1 code implementation • ICML 2020 • Hao Wu, Heiko Zimmermann, Eli Sennesh, Tuan Anh Le, Jan-Willem van de Meent

We develop amortized population Gibbs (APG) samplers, a class of scalable methods that frames structured variational inference as adaptive importance sampling.

Variational Inference

The Thermodynamic Variational Objective

1 code implementation • NeurIPS 2019 • Vaden Masrani, Tuan Anh Le, Frank Wood

We introduce the thermodynamic variational objective (TVO) for learning in both continuous and discrete deep generative models.

Variational Inference
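A minimal sketch of the thermodynamic variational objective on a toy Gaussian model, assuming a left-Riemann discretization of the thermodynamic integral and self-normalized importance sampling for the per-temperature expectations. The model and proposal are illustrative assumptions, not the paper's code.

```python
import math, random

random.seed(1)

def log_normal(v, mean, std):
    return -0.5 * ((v - mean) / std) ** 2 - math.log(std * math.sqrt(2 * math.pi))

# Toy model: z ~ N(0,1), x | z ~ N(z,1); proposal q(z|x) = N(x/2, 1) is
# deliberately overdispersed so the bounds are not tight.
x_obs = 1.3
N = 50_000
zs = [random.gauss(x_obs / 2, 1.0) for _ in range(N)]
log_ws = [log_normal(z, 0, 1) + log_normal(x_obs, z, 1)
          - log_normal(z, x_obs / 2, 1.0) for z in zs]

def expected_log_w(beta):
    """Self-normalized estimate of E_{pi_beta}[log w], where pi_beta ∝ q * w^beta."""
    m = max(beta * lw for lw in log_ws)
    ws = [math.exp(beta * lw - m) for lw in log_ws]
    return sum(w * lw for w, lw in zip(ws, log_ws)) / sum(ws)

betas = [k / 10 for k in range(11)]          # 0.0, 0.1, ..., 1.0
tvo = sum((betas[k + 1] - betas[k]) * expected_log_w(betas[k])
          for k in range(len(betas) - 1))    # left-Riemann sum: a lower bound

elbo = sum(log_ws) / N                       # the beta = 0 term alone
exact = log_normal(x_obs, 0, math.sqrt(2))   # analytic log p(x)
print(elbo, tvo, exact)  # elbo <= tvo <= exact (up to Monte Carlo noise)
```

Because the integrand is nondecreasing in beta, the left-Riemann sum sits between the ELBO (the single beta = 0 term) and the true log marginal likelihood, which is the ordering the printout shows.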

Revisiting Reweighted Wake-Sleep

no code implementations • ICLR 2019 • Tuan Anh Le, Adam R. Kosiorek, N. Siddharth, Yee Whye Teh, Frank Wood

Discrete latent-variable models, while applicable in a variety of settings, can often be difficult to learn.

Imitation Learning of Factored Multi-agent Reactive Models

no code implementations • 12 Mar 2019 • Michael Teng, Tuan Anh Le, Adam Scibior, Frank Wood

We apply recent advances in deep generative modeling to the task of imitation learning from biological agents.

Imitation Learning

Deep Variational Reinforcement Learning for POMDPs

1 code implementation • ICML 2018 • Maximilian Igl, Luisa Zintgraf, Tuan Anh Le, Frank Wood, Shimon Whiteson

Many real-world sequential decision making problems are partially observable by nature, and the environment model is typically unknown.

Decision Making • Inductive Bias • +2

Revisiting Reweighted Wake-Sleep for Models with Stochastic Control Flow

1 code implementation • ICLR 2019 • Tuan Anh Le, Adam R. Kosiorek, N. Siddharth, Yee Whye Teh, Frank Wood

Stochastic control-flow models (SCFMs) are a class of generative models that involve branching on choices from discrete random variables.

Tighter Variational Bounds are Not Necessarily Better

3 code implementations • ICML 2018 • Tom Rainforth, Adam R. Kosiorek, Tuan Anh Le, Chris J. Maddison, Maximilian Igl, Frank Wood, Yee Whye Teh

We provide theoretical and empirical evidence that using tighter evidence lower bounds (ELBOs) can be detrimental to the process of learning an inference network by reducing the signal-to-noise ratio of the gradient estimator.
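The multi-sample (IWAE-style) bounds at issue here can be sketched on a toy model: the bound visibly tightens as the number of samples K grows, even though the paper's actual argument concerns the gradient signal-to-noise ratio of the inference network, which this sketch does not measure. The toy model and mismatched proposal are assumptions for illustration.

```python
import math, random

random.seed(2)

def log_normal(v, mean, std):
    return -0.5 * ((v - mean) / std) ** 2 - math.log(std * math.sqrt(2 * math.pi))

def iwae_bound(x, k, n_batches=2000):
    """Monte Carlo estimate of the K-sample bound E[log (1/K) sum_i w_i]
    for the toy model z ~ N(0,1), x | z ~ N(z,1), with a deliberately
    mismatched proposal q(z|x) = N(x/2, 1)."""
    total = 0.0
    for _ in range(n_batches):
        lws = []
        for _ in range(k):
            z = random.gauss(x / 2, 1.0)
            lws.append(log_normal(z, 0, 1) + log_normal(x, z, 1)
                       - log_normal(z, x / 2, 1.0))
        m = max(lws)  # log-sum-exp for numerical stability
        total += m + math.log(sum(math.exp(lw - m) for lw in lws) / k)
    return total / n_batches

x_obs = 1.3
b1, b8, b64 = iwae_bound(x_obs, 1), iwae_bound(x_obs, 8), iwae_bound(x_obs, 64)
exact = log_normal(x_obs, 0, math.sqrt(2))  # analytic log p(x)
print(b1, b8, b64, exact)  # bounds tighten toward log p(x) as K grows
```

K = 1 recovers the ordinary ELBO; the paper's observation is that while larger K tightens this bound, it can simultaneously degrade the quality of the gradient signal used to train the proposal.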

Bayesian Optimization for Probabilistic Programs

2 code implementations • NeurIPS 2016 • Tom Rainforth, Tuan Anh Le, Jan-Willem van de Meent, Michael A. Osborne, Frank Wood

We present the first general-purpose framework for marginal maximum a posteriori estimation of probabilistic program variables.

Bayesian Optimization

Auto-Encoding Sequential Monte Carlo

1 code implementation • ICLR 2018 • Tuan Anh Le, Maximilian Igl, Tom Rainforth, Tom Jin, Frank Wood

We build on auto-encoding sequential Monte Carlo (AESMC): a method for model and proposal learning based on maximizing the lower bound to the log marginal likelihood in a broad family of structured probabilistic models.
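The lower bound AESMC maximizes comes from Jensen's inequality applied to the SMC evidence estimate: E[log Z_hat] <= log p(y). A bootstrap-particle-filter sketch on an assumed linear-Gaussian state-space model (so the exact answer is available from a Kalman filter) illustrates the estimator; this is not the paper's implementation.

```python
import math, random

random.seed(3)

def log_normal(v, mean, std):
    return -0.5 * ((v - mean) / std) ** 2 - math.log(std * math.sqrt(2 * math.pi))

# Assumed toy model: z_1 ~ N(0,1), z_t = z_{t-1} + N(0,1), y_t = z_t + N(0,1).
T = 10
z = random.gauss(0, 1)
ys = []
for t in range(T):
    if t > 0:
        z += random.gauss(0, 1)
    ys.append(z + random.gauss(0, 1))

def kalman_log_marginal(ys):
    """Exact log p(y_1:T) via the Kalman filter."""
    m, P, ll = 0.0, 1.0, 0.0
    for t, y in enumerate(ys):
        if t > 0:
            P += 1.0                                  # transition noise
        ll += log_normal(y, m, math.sqrt(P + 1.0))    # predictive p(y_t | y_1:t-1)
        K = P / (P + 1.0)
        m, P = m + K * (y - m), (1 - K) * P
    return ll

def smc_log_marginal(ys, n_particles=5000):
    """Bootstrap particle filter estimate log Z_hat; E[log Z_hat] <= log Z."""
    zs = [random.gauss(0, 1) for _ in range(n_particles)]
    log_z = 0.0
    for t, y in enumerate(ys):
        if t > 0:
            zs = [zp + random.gauss(0, 1) for zp in zs]   # propose from the prior
        ws = [math.exp(log_normal(y, zp, 1.0)) for zp in zs]
        log_z += math.log(sum(ws) / n_particles)
        zs = random.choices(zs, weights=ws, k=n_particles)  # multinomial resampling
    return log_z

exact = kalman_log_marginal(ys)
est = smc_log_marginal(ys)
print(est, exact)  # est is a stochastic lower bound, close to exact here
```

AESMC makes both the model and the proposals learnable and ascends this bound by stochastic gradients; the sketch fixes both and only exhibits the estimator itself.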

Using Synthetic Data to Train Neural Networks is Model-Based Reasoning

no code implementations • 2 Mar 2017 • Tuan Anh Le, Atilim Gunes Baydin, Robert Zinkov, Frank Wood

We draw a formal connection between using synthetic training data to optimize neural network parameters and approximate, Bayesian, model-based reasoning.

Inference Compilation and Universal Probabilistic Programming

4 code implementations • 31 Oct 2016 • Tuan Anh Le, Atilim Gunes Baydin, Frank Wood

We introduce a method for using deep neural networks to amortize the cost of inference in models from the family induced by universal probabilistic programming languages, establishing a framework that combines the strengths of probabilistic programming and deep learning methods.

Probabilistic Programming
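The amortization idea (simulate latent/data pairs from the generative model, fit a recognition model on them, then reuse it as a proposal at inference time) can be sketched with a closed-form Gaussian regression standing in for the paper's neural networks. The model and fitting procedure are illustrative assumptions.

```python
import math, random

random.seed(4)

# "Compile away" inference for a toy model z ~ N(0,1), x = z + N(0,1):
# simulate (x, z) pairs from the generative model, then fit an amortized
# Gaussian proposal q(z | x) = N(a*x + b, s) by least squares.
N = 100_000
pairs = []
for _ in range(N):
    z = random.gauss(0, 1)
    x = z + random.gauss(0, 1)
    pairs.append((x, z))

mx = sum(x for x, _ in pairs) / N
mz = sum(z for _, z in pairs) / N
cov = sum((x - mx) * (z - mz) for x, z in pairs) / N
var = sum((x - mx) ** 2 for x, _ in pairs) / N
a = cov / var                                    # regression slope
b = mz - a * mx                                  # regression intercept
resid = sum((z - (a * x + b)) ** 2 for x, z in pairs) / N
s = math.sqrt(resid)                             # proposal standard deviation

print(a, b, s)  # analytically: a = 0.5, b = 0, s = sqrt(0.5) ≈ 0.707
```

For this conjugate toy the fitted proposal recovers the exact posterior; in the paper the same simulate-then-fit loop trains a neural network whose outputs parameterize proposals for SMC-style inference in a universal probabilistic program.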

Data-driven Sequential Monte Carlo in Probabilistic Programming

no code implementations • 14 Dec 2015 • Yura N. Perov, Tuan Anh Le, Frank Wood

Most Markov chain Monte Carlo (MCMC) and sequential Monte Carlo (SMC) algorithms in existing probabilistic programming systems suboptimally use only model priors as proposal distributions.

Probabilistic Programming
