Search Results for author: Matthew Hoffman

Found 15 papers, 6 papers with code

Black-Box Variational Inference as a Parametric Approximation to Langevin Dynamics

no code implementations • ICML 2020 • Matthew Hoffman, Yi-An Ma

Variational inference (VI) and Markov chain Monte Carlo (MCMC) are approximate posterior inference algorithms that are often said to have complementary strengths, with VI being fast but biased and MCMC being slower but asymptotically unbiased.

Variational Inference
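
As background for the comparison in the abstract above, here is a minimal NumPy sketch (my construction, not the paper's code) contrasting the two algorithm families on a 1-D standard-normal target: an unadjusted Langevin sampler on the MCMC side, and a reparameterized Gaussian fit on the VI side. Step sizes, iteration counts, and all variable names (e.g. `grad_log_p`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: 1-D standard normal, so grad log p(x) = -x.
grad_log_p = lambda x: -x

# --- MCMC side: unadjusted Langevin dynamics (a Metropolis correction,
# omitted here for brevity, would remove its residual step-size bias).
eps = 0.1                      # step size (illustrative)
x = 3.0                        # initial state
samples = []
for _ in range(5000):
    x = x + 0.5 * eps * grad_log_p(x) + np.sqrt(eps) * rng.standard_normal()
    samples.append(x)
print("Langevin mean/std:", np.mean(samples), np.std(samples))

# --- VI side: fit q(x) = N(mu, sigma^2) by stochastic gradient ascent on
# the ELBO with the reparameterization trick (fast, but biased by the
# restriction to the Gaussian family).
mu, log_sigma, lr = 3.0, 0.0, 0.01
for _ in range(5000):
    z = rng.standard_normal()
    x_s = mu + np.exp(log_sigma) * z          # reparameterized sample
    g = grad_log_p(x_s)
    mu += lr * g                              # d(ELBO)/d(mu)
    log_sigma += lr * (g * np.exp(log_sigma) * z + 1.0)  # entropy grad = 1
print("VI mean/std:", mu, np.exp(log_sigma))
```

Here the Gaussian family happens to contain the target, so both runs recover mean ≈ 0 and std ≈ 1; for a non-Gaussian target the VI answer would stay biased no matter how long it runs, while the (Metropolis-corrected) Langevin chain would not.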

Scalable Spatiotemporal Prediction with Bayesian Neural Fields

1 code implementation • 12 Mar 2024 • Feras Saad, Jacob Burnim, Colin Carroll, Brian Patton, Urs Köster, Rif A. Saurous, Matthew Hoffman

Spatiotemporal datasets, which consist of spatially-referenced time series, are ubiquitous in many scientific and business-intelligence applications, such as air pollution monitoring, disease tracking, and cloud-demand forecasting.

Bayesian Inference · Uncertainty Quantification

Semantic Segmentation with Active Semi-Supervised Representation Learning

no code implementations • 16 Oct 2022 • Aneesh Rangnekar, Christopher Kanan, Matthew Hoffman

We achieve more than 95% of the network's performance on the CamVid and CityScapes datasets, utilizing only 12.1% and 15.1% of the labeled data, respectively.

Active Learning · Contrastive Learning +4

Regularized Behavior Value Estimation

no code implementations • 17 Mar 2021 • Caglar Gulcehre, Sergio Gómez Colmenarejo, Ziyu Wang, Jakub Sygnowski, Thomas Paine, Konrad Zolna, Yutian Chen, Matthew Hoffman, Razvan Pascanu, Nando de Freitas

Due to bootstrapping, these errors get amplified during training and can lead to divergence, thereby crippling learning.

Offline RL

Addressing Extrapolation Error in Deep Offline Reinforcement Learning

no code implementations • 1 Jan 2021 • Caglar Gulcehre, Sergio Gómez Colmenarejo, Ziyu Wang, Jakub Sygnowski, Thomas Paine, Konrad Zolna, Yutian Chen, Matthew Hoffman, Razvan Pascanu, Nando de Freitas

These errors can be compounded by bootstrapping when the function approximator overestimates, leading the value function to *grow unbounded*, thereby crippling learning.

Offline RL · reinforcement-learning +1
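
The failure mode described in this entry and the previous one can be reproduced in a few lines. Below is a toy sketch (my construction, not the papers' experimental setup): one state, two actions, an offline dataset that only ever contains action 0, and a function approximator that overestimates the unseen action by 10% on every fit. Because the bootstrapped target maximizes over both actions, the overestimate is copied into every update and the value estimate diverges.

```python
import numpy as np

# Toy illustration (assumed setup): one state, two actions, discount gamma.
# The offline data only covers action 0, but the bootstrapped target
# maximizes over BOTH actions, so a persistent overestimate of the
# never-observed action 1 feeds back into every update.
gamma = 0.99
Q = np.array([0.0, 1.0])       # Q[1] starts as a spurious overestimate
r = 0.0                        # observed reward for action 0

for t in range(100):
    target = r + gamma * Q.max()   # bootstrap picks the inflated value
    Q[0] = target                  # regress the observed action onto it
    Q[1] = 1.1 * target            # approximator overshoots the unseen action by 10%

print(Q)  # diverges: the per-update growth factor 1.1 * gamma exceeds 1
```

Methods like the behavior value estimation proposed in the previous entry counter this by anchoring the value estimate to actions actually supported by the data rather than the unconstrained max.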

RL Unplugged: A Collection of Benchmarks for Offline Reinforcement Learning

1 code implementation • NeurIPS 2020 • Caglar Gulcehre, Ziyu Wang, Alexander Novikov, Thomas Paine, Sergio Gómez, Konrad Zolna, Rishabh Agarwal, Josh S. Merel, Daniel J. Mankowitz, Cosmin Paduraru, Gabriel Dulac-Arnold, Jerry Li, Mohammad Norouzi, Matthew Hoffman, Nicolas Heess, Nando de Freitas

We hope that our suite of benchmarks will increase the reproducibility of experiments and make it possible to study challenging tasks with a limited computational budget, thus making RL research both more systematic and more accessible across the community.

Offline RL · reinforcement-learning +1

NeuTra-lizing Bad Geometry in Hamiltonian Monte Carlo Using Neural Transport

1 code implementation • 9 Mar 2019 • Matthew Hoffman, Pavel Sountsov, Joshua V. Dillon, Ian Langmore, Dustin Tran, Srinivas Vasudevan

Hamiltonian Monte Carlo is a powerful algorithm for sampling from difficult-to-normalize posterior distributions.

Variational Inference
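
For context, here is a minimal NumPy sketch (not the paper's code) of a single-chain HMC sampler with leapfrog integration, run on a badly scaled Gaussian. The poor conditioning is exactly the "bad geometry" of the title; NeuTra's remedy, not shown here, is to learn a neural-transport (normalizing-flow) reparameterization and run HMC in the transformed space. The step size, trajectory length, and function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def hmc_step(x, log_p, grad_log_p, eps=0.1, n_leapfrog=20):
    """One Hamiltonian Monte Carlo transition with leapfrog integration."""
    p = rng.standard_normal(x.shape)              # resample momentum
    x_new, p_new = x.copy(), p.copy()
    p_new += 0.5 * eps * grad_log_p(x_new)        # half step for momentum
    for _ in range(n_leapfrog - 1):
        x_new += eps * p_new                      # full step for position
        p_new += eps * grad_log_p(x_new)          # full step for momentum
    x_new += eps * p_new
    p_new += 0.5 * eps * grad_log_p(x_new)        # final half step
    # Metropolis accept/reject on the joint Hamiltonian
    log_accept = (log_p(x_new) - 0.5 * p_new @ p_new) - (log_p(x) - 0.5 * p @ p)
    return x_new if np.log(rng.uniform()) < log_accept else x

# Ill-conditioned Gaussian target: HMC struggles when scales differ wildly.
scales = np.array([1.0, 100.0])
log_p = lambda x: -0.5 * np.sum((x / scales) ** 2)
grad_log_p = lambda x: -x / scales ** 2

x = np.zeros(2)
draws = []
for _ in range(2000):
    x = hmc_step(x, log_p, grad_log_p)
    draws.append(x)
print("sample std per dim:", np.std(draws, axis=0), "vs target scales:", scales)
```

With this step size the narrow dimension mixes fine but the wide dimension barely moves, so its sample standard deviation falls far short of 100; a learned flow that rescales the space toward standard-normal geometry removes exactly this mismatch.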

Simple, Distributed, and Accelerated Probabilistic Programming

1 code implementation • NeurIPS 2018 • Dustin Tran, Matthew Hoffman, Dave Moore, Christopher Suter, Srinivas Vasudevan, Alexey Radul, Matthew Johnson, Rif A. Saurous

For both a state-of-the-art VAE on 64x64 ImageNet and Image Transformer on 256x256 CelebA-HQ, our approach achieves an optimal linear speedup from 1 to 256 TPUv2 chips.

Probabilistic Programming

Aerial Spectral Super-Resolution using Conditional Adversarial Networks

no code implementations • 23 Dec 2017 • Aneesh Rangnekar, Nilay Mokashi, Emmett Ientilucci, Christopher Kanan, Matthew Hoffman

In contrast to the spectra of ground-based images, aerial spectral images have low spatial resolution and suffer from higher noise interference.

Spectral Super-Resolution · Super-Resolution

Latent Constraints: Learning to Generate Conditionally from Unconditional Generative Models

no code implementations • ICLR 2018 • Jesse Engel, Matthew Hoffman, Adam Roberts

Deep generative neural networks have proven effective at both conditional and unconditional modeling of complex data distributions.

Attribute

On the challenges of learning with inference networks on sparse, high-dimensional data

1 code implementation • 17 Oct 2017 • Rahul G. Krishnan, Dawen Liang, Matthew Hoffman

We study parameter estimation in Nonlinear Factor Analysis (NFA) where the generative model is parameterized by a deep neural network.

Variational Inference
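
To make the setting concrete, here is a minimal sketch (assumed architecture, not the paper's) of a nonlinear-factor-analysis generative model for sparse, high-dimensional binary data: a low-dimensional Gaussian latent pushed through a small neural network that parameterizes Bernoulli observation probabilities. The dimensions, the single hidden layer, and the sparsity-inducing output bias are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Nonlinear factor analysis (sketch): z ~ N(0, I), x ~ Bernoulli(sigmoid(f(z)))
# where f is a neural network. A strongly negative output bias makes the
# generated x sparse, mimicking the regime the paper studies.
latent_dim, hidden_dim, data_dim = 8, 64, 5000

W1 = rng.standard_normal((latent_dim, hidden_dim)) * 0.1
W2 = rng.standard_normal((hidden_dim, data_dim)) * 0.1
b2 = np.full(data_dim, -4.0)   # negative bias -> mostly-zero (sparse) outputs

def generate(n):
    z = rng.standard_normal((n, latent_dim))        # z ~ N(0, I)
    h = np.tanh(z @ W1)                             # one hidden layer (illustrative)
    logits = h @ W2 + b2
    probs = 1.0 / (1.0 + np.exp(-logits))
    return rng.uniform(size=probs.shape) < probs    # x ~ Bernoulli(probs)

x = generate(10)
print("mean sparsity:", 1.0 - x.mean())             # ~0.98 with this bias
```

The inference problem the paper examines is the reverse direction: training an amortized inference network to map such sparse, high-dimensional x back to a posterior over z.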

The Segmented iHMM: A Simple, Efficient Hierarchical Infinite HMM

no code implementations • 20 Feb 2016 • Ardavan Saeedi, Matthew Hoffman, Matthew Johnson, Ryan Adams

We propose the segmented iHMM (siHMM), a hierarchical infinite hidden Markov model (iHMM) that supports a simple, efficient inference scheme.

Segmentation · Time Series +1
