no code implementations • ICML 2020 • Matthew Hoffman, Yi-An Ma
Variational inference (VI) and Markov chain Monte Carlo (MCMC) are approximate posterior inference algorithms that are often said to have complementary strengths, with VI being fast but biased and MCMC being slower but asymptotically unbiased.
1 code implementation • 12 Mar 2024 • Feras Saad, Jacob Burnim, Colin Carroll, Brian Patton, Urs Köster, Rif A. Saurous, Matthew Hoffman
Spatiotemporal datasets, which consist of spatially-referenced time series, are ubiquitous in many scientific and business-intelligence applications, such as air pollution monitoring, disease tracking, and cloud-demand forecasting.
no code implementations • 16 Oct 2022 • Aneesh Rangnekar, Christopher Kanan, Matthew Hoffman
We achieve more than 95% of the network's performance on the CamVid and CityScapes datasets, utilizing only 12.1% and 15.1% of the labeled data, respectively.
no code implementations • 21 Mar 2022 • Aneesh Rangnekar, Christopher Kanan, Matthew Hoffman
Instead, our active learning approach aims to minimize the number of annotations per image.
no code implementations • 17 Mar 2021 • Caglar Gulcehre, Sergio Gómez Colmenarejo, Ziyu Wang, Jakub Sygnowski, Thomas Paine, Konrad Zolna, Yutian Chen, Matthew Hoffman, Razvan Pascanu, Nando de Freitas
Due to bootstrapping, these errors get amplified during training and can lead to divergence, thereby crippling learning.
no code implementations • 1 Jan 2021 • Caglar Gulcehre, Sergio Gómez Colmenarejo, Ziyu Wang, Jakub Sygnowski, Thomas Paine, Konrad Zolna, Yutian Chen, Matthew Hoffman, Razvan Pascanu, Nando de Freitas
These errors can be compounded by bootstrapping when the function approximator overestimates, leading the value function to *grow unbounded*, thereby crippling learning.
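To make the failure mode concrete, here is a minimal, illustrative sketch (not from the paper) of the one-step bootstrapped Q-learning target: when the function approximator overestimates some action value, the max operator selects that overestimate, and repeated updates compound the error rather than averaging it out.

```python
import numpy as np

def td_target(reward, q_next, gamma=0.99, done=False):
    """One-step bootstrapped target: r + gamma * max_a' Q(s', a')."""
    if done:
        return reward
    # The max picks out the largest (possibly overestimated) value.
    return reward + gamma * np.max(q_next)

# Example: a single inflated action value dominates the target, so the
# overestimate is propagated into the next round of value updates.
q_next = np.array([0.1, 0.2, 5.0])  # 5.0 is an overestimate
print(td_target(reward=0.0, q_next=q_next))  # 4.95, inherits the error
```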
1 code implementation • NeurIPS 2020 • Caglar Gulcehre, Ziyu Wang, Alexander Novikov, Thomas Paine, Sergio Gómez, Konrad Zolna, Rishabh Agarwal, Josh S. Merel, Daniel J. Mankowitz, Cosmin Paduraru, Gabriel Dulac-Arnold, Jerry Li, Mohammad Norouzi, Matthew Hoffman, Nicolas Heess, Nando de Freitas
We hope that our suite of benchmarks will increase the reproducibility of experiments and make it possible to study challenging tasks with a limited computational budget, thus making RL research both more systematic and more accessible across the community.
1 code implementation • 9 Mar 2019 • Matthew Hoffman, Pavel Sountsov, Joshua V. Dillon, Ian Langmore, Dustin Tran, Srinivas Vasudevan
Hamiltonian Monte Carlo is a powerful algorithm for sampling from difficult-to-normalize posterior distributions.
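For context, below is a minimal sketch of plain HMC with a leapfrog integrator and Metropolis correction; it is illustrative only and is not the method this paper proposes.

```python
import numpy as np

def hmc_step(log_prob, grad_log_prob, x, step_size=0.1, n_leapfrog=10):
    """One HMC transition targeting exp(log_prob(x))."""
    p = np.random.randn(*x.shape)                      # resample momentum
    x_new, p_new = x.copy(), p.copy()
    p_new += 0.5 * step_size * grad_log_prob(x_new)    # half momentum step
    for _ in range(n_leapfrog - 1):
        x_new += step_size * p_new                     # full position step
        p_new += step_size * grad_log_prob(x_new)      # full momentum step
    x_new += step_size * p_new
    p_new += 0.5 * step_size * grad_log_prob(x_new)    # final half step
    # Metropolis correction keeps the chain asymptotically unbiased.
    log_accept = (log_prob(x_new) - 0.5 * p_new @ p_new
                  - log_prob(x) + 0.5 * p @ p)
    return x_new if np.log(np.random.rand()) < log_accept else x

# Example: sample from a standard 2-D Gaussian.
log_prob = lambda x: -0.5 * x @ x
grad_log_prob = lambda x: -x
x, chain = np.zeros(2), []
for _ in range(1000):
    x = hmc_step(log_prob, grad_log_prob, x)
    chain.append(x)
```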
1 code implementation • NeurIPS 2018 • Dustin Tran, Matthew Hoffman, Dave Moore, Christopher Suter, Srinivas Vasudevan, Alexey Radul, Matthew Johnson, Rif A. Saurous
For both a state-of-the-art VAE on 64x64 ImageNet and Image Transformer on 256x256 CelebA-HQ, our approach achieves an optimal linear speedup from 1 to 256 TPUv2 chips.
no code implementations • 23 Dec 2017 • Aneesh Rangnekar, Nilay Mokashi, Emmett Ientilucci, Christopher Kanan, Matthew Hoffman
In contrast to the spectra of ground-based images, aerial spectral images have low spatial resolution and suffer from higher noise interference.
no code implementations • ICLR 2018 • Jesse Engel, Matthew Hoffman, Adam Roberts
Deep generative neural networks have proven effective at both conditional and unconditional modeling of complex data distributions.
1 code implementation • 17 Oct 2017 • Rahul G. Krishnan, Dawen Liang, Matthew Hoffman
We study parameter estimation in Nonlinear Factor Analysis (NFA) where the generative model is parameterized by a deep neural network.
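As a rough illustration of the NFA generative model described here, the sketch below draws a latent code and maps it through a neural network to parameterize the data distribution; the two-layer architecture and Gaussian noise model are hypothetical stand-ins, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim, hidden_dim, data_dim = 5, 32, 100
W1 = rng.normal(size=(latent_dim, hidden_dim))  # illustrative weights,
W2 = rng.normal(size=(hidden_dim, data_dim))    # not trained

def generate(n_samples, noise_scale=0.1):
    """Sample from a toy nonlinear factor analysis model."""
    z = rng.normal(size=(n_samples, latent_dim))          # z ~ N(0, I)
    mean = np.tanh(z @ W1) @ W2                           # deep nonlinear f(z)
    x = mean + noise_scale * rng.normal(size=mean.shape)  # x ~ N(f(z), s^2 I)
    return z, x

z, x = generate(1000)
```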
no code implementations • 20 Feb 2016 • Ardavan Saeedi, Matthew Hoffman, Matthew Johnson, Ryan Adams
We propose the segmented iHMM (siHMM), a hierarchical infinite hidden Markov model (iHMM) that supports a simple, efficient inference scheme.
3 code implementations • 21 Dec 2014 • Forest Agostinelli, Matthew Hoffman, Peter Sadowski, Pierre Baldi
Artificial neural networks typically have a fixed, non-linear activation function at each neuron.
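A minimal sketch of the alternative this line motivates, in the spirit of adaptive piecewise linear activations: a ReLU plus a sum of hinges whose slopes and locations are treated as parameters learned alongside the network weights (the values below are illustrative and untrained).

```python
import numpy as np

def apl(x, a, b):
    """h(x) = max(0, x) + sum_s a[s] * max(0, -x + b[s])."""
    hinges = sum(a_s * np.maximum(0.0, -x + b_s) for a_s, b_s in zip(a, b))
    return np.maximum(0.0, x) + hinges

# Example: two learnable hinges reshape the standard ReLU.
x = np.linspace(-3, 3, 7)
print(apl(x, a=[0.2, -0.5], b=[0.0, 1.0]))
```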
Ranked #168 on Image Classification on CIFAR-100 (using extra training data)
no code implementations • NeurIPS 2010 • Matthew Hoffman, Francis R. Bach, David M. Blei
We develop an online variational Bayes (VB) algorithm for Latent Dirichlet Allocation (LDA).
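This online VB algorithm is the basis of scikit-learn's LatentDirichletAllocation with learning_method="online", which processes the corpus in mini-batches via stochastic updates. A minimal usage sketch (the toy corpus is purely illustrative):

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = ["the cat sat on the mat", "dogs and cats are pets",
        "stock markets fell sharply", "investors sold shares today"]
X = CountVectorizer().fit_transform(docs)  # bag-of-words counts

lda = LatentDirichletAllocation(n_components=2, learning_method="online",
                                batch_size=2, random_state=0)
lda.partial_fit(X)                # one pass of mini-batch online VB updates
print(lda.transform(X).round(2))  # per-document topic proportions
```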