Search Results for author: Dieterich Lawson

Found 14 papers, 7 papers with code

NAS-X: Neural Adaptive Smoothing via Twisting

no code implementations · 28 Aug 2023 · Dieterich Lawson, Michael Li, Scott Linderman

We test NAS-X on discrete and continuous tasks and find that it substantially outperforms previous variational and RWS-based methods in inference and parameter recovery.

SIXO: Smoothing Inference with Twisted Objectives

1 code implementation · 13 Jun 2022 · Dieterich Lawson, Allan Raventós, Andrew Warrington, Scott Linderman

Sequential Monte Carlo (SMC) is an inference algorithm for state space models that approximates the posterior by sampling from a sequence of target distributions.
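The bootstrap particle filter is the simplest instance of this sampling-from-a-sequence-of-targets idea. The sketch below is a minimal illustration on a hypothetical 1-D linear-Gaussian state space model, not the SIXO implementation; the dynamics, noise scales, and function name are all assumptions made for the example.

```python
import numpy as np

def bootstrap_smc(observations, num_particles=1000, seed=0):
    """Minimal bootstrap particle filter for a hypothetical 1-D
    linear-Gaussian state space model (not the SIXO implementation):
        x_t = 0.9 x_{t-1} + N(0, 1),   y_t = x_t + N(0, 0.5^2).
    Returns an estimate of the log marginal likelihood log p(y_{1:T})."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, size=num_particles)  # draw x_0 from the prior
    log_z = 0.0
    for y in observations:
        # Propagate each particle through the transition dynamics.
        particles = 0.9 * particles + rng.normal(0.0, 1.0, size=num_particles)
        # Weight by the observation likelihood p(y_t | x_t), sigma = 0.5.
        log_w = -0.5 * ((y - particles) / 0.5) ** 2 - np.log(0.5) - 0.5 * np.log(2 * np.pi)
        # Accumulate the log marginal likelihood estimate (log-mean-exp).
        m = log_w.max()
        log_z += m + np.log(np.mean(np.exp(log_w - m)))
        # Resample: the next target distribution is the filtering posterior.
        w = np.exp(log_w - m)
        particles = rng.choice(particles, size=num_particles, p=w / w.sum())
    return log_z

print(bootstrap_smc([0.5, 1.0, -0.3, 0.2]))  # estimated log p(y_{1:4})
```

Each resampling step redirects computation toward particles consistent with the observation, which is what makes the sequence of targets tractable to sample from.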

Density Ratio Estimation

Evaluating Predictive Distributions: Does Bayesian Deep Learning Work?

no code implementations · 29 Sep 2021 · Ian Osband, Zheng Wen, Seyed Mohammad Asghari, Xiuyuan Lu, Morteza Ibrahimi, Vikranth Dwaracherla, Dieterich Lawson, Brendan O'Donoghue, Botao Hao, Benjamin Van Roy

This paper introduces The Neural Testbed, which provides tools for the systematic evaluation of agents that generate such predictions.

Energy-Inspired Models: Learning with Sampler-Induced Distributions

1 code implementation · NeurIPS 2019 · Dieterich Lawson, George Tucker, Bo Dai, Rajesh Ranganath

Motivated by this, we consider the sampler-induced distribution as the model of interest and maximize the likelihood of this model.

Variational Inference

Revisiting Auxiliary Latent Variables in Generative Models

no code implementations · ICLR Workshop DeepGenStruct 2019 · Dieterich Lawson, George Tucker, Bo Dai, Rajesh Ranganath

The success of enriching the variational family with auxiliary latent variables motivates applying the same techniques to the generative model.

Doubly Reparameterized Gradient Estimators for Monte Carlo Objectives

3 code implementations · ICLR 2019 · George Tucker, Dieterich Lawson, Shixiang Gu, Chris J. Maddison

Burda et al. (2015) introduced a multi-sample variational bound, IWAE, that is at least as tight as the standard variational lower bound and becomes increasingly tight as the number of samples increases.
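The IWAE bound averages K importance weights inside the logarithm, which is what makes it tighter than the single-sample ELBO. The sketch below estimates the bound on a hypothetical toy model chosen so that the true log-likelihood is available in closed form; the model, the deliberately mismatched proposal, and the function name are assumptions made for illustration, not the paper's setup.

```python
import numpy as np

def iwae_bound(x, num_samples, num_mc=2000, seed=0):
    """Monte Carlo estimate of the K-sample IWAE bound on log p(x) for a
    hypothetical toy model: p(z) = N(0, 1), p(x|z) = N(z, 1), with the
    mismatched proposal q(z) = p(z).
    The bound is E_q[ log (1/K) sum_k p(x, z_k) / q(z_k) ]."""
    rng = np.random.default_rng(seed)
    z = rng.normal(size=(num_mc, num_samples))              # z_k ~ q
    log_w = -0.5 * (x - z) ** 2 - 0.5 * np.log(2 * np.pi)   # log p(x | z_k)
    m = log_w.max(axis=1, keepdims=True)                    # log-mean-exp over K
    return float(np.mean(m[:, 0] + np.log(np.mean(np.exp(log_w - m), axis=1))))

x = 1.0
true_ll = -0.5 * x**2 / 2 - 0.5 * np.log(2 * np.pi * 2)  # x ~ N(0, 2) marginally
b1, b10 = iwae_bound(x, 1), iwae_bound(x, 10)
print(b1, b10, true_ll)  # the bound tightens toward log p(x) as K grows
```

Running this shows the K=1 bound (the ordinary ELBO) sitting well below the K=10 bound, which in turn sits just below the true log marginal likelihood.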

Variational Inference

An online sequence-to-sequence model for noisy speech recognition

no code implementations · 16 Jun 2017 · Chung-Cheng Chiu, Dieterich Lawson, Yuping Luo, George Tucker, Kevin Swersky, Ilya Sutskever, Navdeep Jaitly

This is because the models require that the entirety of the input sequence be available at the beginning of inference, an assumption that is not valid for instantaneous speech recognition.

Noisy Speech Recognition · Speech Recognition

Filtering Variational Objectives

3 code implementations · NeurIPS 2017 · Chris J. Maddison, Dieterich Lawson, George Tucker, Nicolas Heess, Mohammad Norouzi, Andriy Mnih, Arnaud Doucet, Yee Whye Teh

When used as a surrogate objective for maximum likelihood estimation in latent variable models, the evidence lower bound (ELBO) produces state-of-the-art results.
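Why the ELBO is a valid surrogate, and how it generalizes, can be restated with two standard identities (a brief summary, not quoted from the paper):

```latex
% The ELBO lower-bounds the log marginal likelihood, with a KL-sized gap:
\log p(x) = \mathbb{E}_{q(z \mid x)}\!\left[\log \frac{p(x, z)}{q(z \mid x)}\right]
          + \mathrm{KL}\!\left(q(z \mid x) \,\|\, p(z \mid x)\right)
          \;\ge\; \mathrm{ELBO}(q).

% More generally, any unbiased estimator \hat{Z} of p(x) yields a lower
% bound via Jensen's inequality; filtering objectives use the SMC
% marginal likelihood estimator in this role:
\mathbb{E}\!\left[\log \hat{Z}\right] \;\le\; \log \mathbb{E}\!\left[\hat{Z}\right] = \log p(x).
```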

Learning Hard Alignments with Variational Inference

no code implementations · 16 May 2017 · Dieterich Lawson, Chung-Cheng Chiu, George Tucker, Colin Raffel, Kevin Swersky, Navdeep Jaitly

There has recently been significant interest in hard attention models for tasks such as object recognition, visual captioning and speech recognition.

Hard Attention · Image Captioning · +5 more

Changing Model Behavior at Test-Time Using Reinforcement Learning

no code implementations · 24 Feb 2017 · Augustus Odena, Dieterich Lawson, Christopher Olah

Machine learning models are often used at test-time subject to constraints and trade-offs not present at training-time.

BIG-bench Machine Learning · Reinforcement Learning · +2 more

Training a Subsampling Mechanism in Expectation

1 code implementation · 22 Feb 2017 · Colin Raffel, Dieterich Lawson

We describe a mechanism for subsampling sequences and show how to compute its expected output so that it can be trained with standard backpropagation.
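The core trick of training in expectation can be illustrated with a much simpler stand-in than the paper's mechanism: if each timestep is kept independently with a learned probability, the expected output is a closed-form weighted average, so gradients flow through the keep probabilities with standard backprop. The independence assumption and the function below are simplifications for illustration, not the paper's exact dynamic program.

```python
import numpy as np

def expected_subsample(x, logits):
    """Illustrative sketch (not the paper's exact algorithm): treat each
    timestep as independently kept with probability sigmoid(logit), and
    return the expected value of the kept elements in closed form, which
    is differentiable in the logits."""
    p = 1.0 / (1.0 + np.exp(-logits))  # per-timestep keep probabilities
    return np.sum(p * x) / np.sum(p)   # expectation of a weighted average

x = np.array([1.0, 2.0, 3.0])
logits = np.array([5.0, -5.0, 5.0])    # keep 1st and 3rd with high probability
print(expected_subsample(x, logits))   # close to the mean of 1.0 and 3.0, i.e. 2.0
```

Because the output is an exact expectation rather than a sample, no score-function estimator or reparameterization trick is needed.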
