no code implementations • 4 Sep 2024 • Vighnesh Birodkar, Gabriel Barcik, James Lyon, Sergey Ioffe, David Minnen, Joshua V. Dillon
Our work combines autoencoder representation learning with diffusion and is, to our knowledge, the first to demonstrate the efficacy of jointly learning a continuous encoder and decoder under a diffusion-based loss.
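A minimal NumPy sketch of the jointly-trained setup described above: a continuous encoder produces a latent that conditions a denoiser trained with a standard noise-prediction diffusion loss. The toy linear "networks", the noising schedule, and all names here are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the encoder and the latent-conditioned denoiser
# (linear maps; shapes and parameterization are assumptions).
D, Z = 8, 2
W_enc = 0.1 * rng.normal(size=(Z, D))
W_dec = 0.1 * rng.normal(size=(D, D + Z))

def encode(x):
    return W_enc @ x                               # continuous latent z = E(x)

def denoise(x_t, z):
    return W_dec @ np.concatenate([x_t, z])        # predict the added noise, given z

def diffusion_ae_loss(x):
    z = encode(x)
    t = rng.uniform()                              # random noise level in (0, 1)
    eps = rng.normal(size=x.shape)
    x_t = np.sqrt(1.0 - t) * x + np.sqrt(t) * eps  # noised input
    return np.mean((denoise(x_t, z) - eps) ** 2)   # diffusion (noise-prediction) loss

x = rng.normal(size=D)
print("joint encoder/denoiser loss on one example:", diffusion_ae_loss(x))
```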
no code implementations • 23 May 2023 • Elahe Vedadi, Joshua V. Dillon, Philip Andrew Mansfield, Karan Singhal, Arash Afkanpour, Warren Richard Morningstar
We then approximate this process using Variational Inference to train our model efficiently.
1 code implementation • 22 Dec 2022 • Matthew Streeter, Joshua V. Dillon
We then recursively combine the bounds for the elementary functions using an interval arithmetic variant of Taylor-mode automatic differentiation.
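As a rough, zeroth-order illustration of combining per-elementary-function bounds with interval arithmetic, the sketch below propagates guaranteed enclosures through a small expression. The paper's method additionally carries Taylor-series coefficients with interval remainders (Taylor-mode automatic differentiation), which this sketch omits; the function names and example expression are assumptions.

```python
import math

# Minimal interval arithmetic: each value is a (lo, hi) enclosure. Composing
# the per-elementary-function rules below yields a guaranteed enclosure of the
# whole expression over the input interval.

def i_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def i_exp(a):
    return (math.exp(a[0]), math.exp(a[1]))   # exp is monotone increasing

def i_sqr(a):
    lo, hi = a
    if lo <= 0.0 <= hi:
        return (0.0, max(lo * lo, hi * hi))
    return (min(lo * lo, hi * hi), max(lo * lo, hi * hi))

# Enclose f(x) = exp(x) + x**2 over x in [-1, 0.5].
x = (-1.0, 0.5)
print(i_add(i_exp(x), i_sqr(x)))   # bounds guaranteed to contain f(x) on the interval
```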
no code implementations • 18 Nov 2022 • Yangjun Ruan, Saurabh Singh, Warren Morningstar, Alexander A. Alemi, Sergey Ioffe, Ian Fischer, Joshua V. Dillon
Ensembling has proven to be a powerful technique for boosting model performance, uncertainty estimation, and robustness in supervised learning.
no code implementations • 19 Oct 2020 • Warren R. Morningstar, Alexander A. Alemi, Joshua V. Dillon
The Bayesian posterior minimizes the "inferential risk" which itself bounds the "predictive risk".
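In notation of ours (not necessarily the paper's), one standard way to write the two risks for a distribution q over parameters makes the claimed bound a direct consequence of Jensen's inequality applied to the convex function -log:

```latex
\mathcal{R}_{\mathrm{pred}}(q)
  = \mathbb{E}_{(x,y)}\!\Big[-\log \mathbb{E}_{\theta \sim q}\, p(y \mid x, \theta)\Big]
  \;\le\;
  \mathbb{E}_{\theta \sim q}\, \mathbb{E}_{(x,y)}\!\big[-\log p(y \mid x, \theta)\big]
  = \mathcal{R}_{\mathrm{inf}}(q)
```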
no code implementations • 16 Jun 2020 • Warren R. Morningstar, Cusuh Ham, Andrew G. Gallagher, Balaji Lakshminarayanan, Alexander A. Alemi, Joshua V. Dillon
Drawing on the statistical physics notion of "density of states," the DoSE decision rule avoids direct comparison of model probabilities, and instead utilizes the "probability of the model probability," or indeed the frequency of any reasonable statistic.
Out-of-Distribution Detection
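A hedged sketch of the density-of-states idea in the entry above: estimate the distribution of a model statistic (here the per-example log-likelihood) on in-distribution data, then score new inputs by how typical their statistic is. The choice of statistic, the KDE estimator, and the toy Gaussian "model" are all assumptions for illustration.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

def log_likelihood(x):
    # Toy generative model: standard normal log-density.
    return -0.5 * (x ** 2 + np.log(2 * np.pi))

train_x = rng.normal(size=5000)                # in-distribution data
dos = gaussian_kde(log_likelihood(train_x))    # density of the statistic ("density of states")

def dose_score(x):
    # Higher = the statistic looks more typical of in-distribution data.
    return dos(log_likelihood(np.atleast_1d(x)))

print("in-distribution point:", dose_score(0.3))
print("out-of-distribution point:", dose_score(6.0))
```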
no code implementations • 3 Mar 2020 • Warren R. Morningstar, Sharad M. Vikram, Cusuh Ham, Andrew Gallagher, Joshua V. Dillon
Automatic Differentiation Variational Inference (ADVI) is a useful tool for efficiently learning probabilistic models in machine learning.
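A minimal NumPy sketch of the core ADVI recipe referenced above: fit a mean-field Gaussian in unconstrained space by ascending reparameterized Monte Carlo estimates of the ELBO gradient. The toy target, learning rate, and hand-derived gradients are assumptions, not the implementation the paper builds on.

```python
import numpy as np

rng = np.random.default_rng(0)

def dlogp_dz(z):
    # Gradient of the log of an unnormalized target, here N(2, 0.5^2).
    return -(z - 2.0) / 0.25

mu, log_sigma = 0.0, 0.0       # variational parameters of q(z) = N(mu, sigma^2)
lr = 0.01

for step in range(2000):
    eps = rng.normal()
    sigma = np.exp(log_sigma)
    z = mu + sigma * eps       # reparameterization trick
    g = dlogp_dz(z)
    grad_mu = g                            # pathwise ELBO gradient w.r.t. mu
    grad_log_sigma = g * sigma * eps + 1.0 # + d(entropy)/d(log sigma) = 1
    mu += lr * grad_mu                     # stochastic gradient ascent on the ELBO
    log_sigma += lr * grad_log_sigma

print("fitted q: mean %.2f, std %.2f" % (mu, np.exp(log_sigma)))  # approximately (2.0, 0.5)
```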
no code implementations • ICML 2020 • Jakub Swiatkowski, Kevin Roth, Bastiaan S. Veeling, Linh Tran, Joshua V. Dillon, Jasper Snoek, Stephan Mandt, Tim Salimans, Rodolphe Jenatton, Sebastian Nowozin
Variational Bayesian Inference is a popular methodology for approximating posterior distributions over Bayesian neural network weights.
no code implementations • 4 Feb 2020 • Junpeng Lao, Christopher Suter, Ian Langmore, Cyril Chimisov, Ashish Saxena, Pavel Sountsov, Dave Moore, Rif A. Saurous, Matthew D. Hoffman, Joshua V. Dillon
Markov chain Monte Carlo (MCMC) is widely regarded as one of the most important algorithms of the 20th century.
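The paper above describes the tfp.mcmc toolkit; below is a minimal usage sketch with Hamiltonian Monte Carlo. The target distribution, step size, and chain lengths are arbitrary choices for illustration.

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

# Sample from a toy target (a standard normal) with HMC.
target = tfd.Normal(loc=0., scale=1.)

kernel = tfp.mcmc.HamiltonianMonteCarlo(
    target_log_prob_fn=target.log_prob,
    step_size=0.5,
    num_leapfrog_steps=3)

samples = tfp.mcmc.sample_chain(
    num_results=1000,
    num_burnin_steps=200,
    current_state=tf.zeros([]),
    kernel=kernel,
    trace_fn=None)

print("sample mean ~", tf.reduce_mean(samples).numpy())
```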
1 code implementation • 22 Jan 2020 • Dan Piponi, Dave Moore, Joshua V. Dillon
A central tenet of probabilistic programming is that a model is specified exactly once in a canonical representation which is usable by inference algorithms.
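A small TensorFlow Probability sketch of that tenet: the model below is specified once as a joint distribution and then drives both forward sampling and log-density evaluation, which is what downstream inference algorithms consume. The toy model itself is an assumption.

```python
import tensorflow_probability as tfp

tfd = tfp.distributions

@tfd.JointDistributionCoroutineAutoBatched
def model():
    # Prior over a location, then five conditionally iid observations.
    loc = yield tfd.Normal(0., 1., name='loc')
    yield tfd.Sample(tfd.Normal(loc, 0.5), sample_shape=5, name='obs')

draw = model.sample()          # ancestral sampling from the same specification
print(model.log_prob(draw))    # joint log-density of that draw
```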
1 code implementation • 14 Jan 2020 • Linh Tran, Bastiaan S. Veeling, Kevin Roth, Jakub Swiatkowski, Joshua V. Dillon, Jasper Snoek, Stephan Mandt, Tim Salimans, Sebastian Nowozin, Rodolphe Jenatton
As a result, the diversity of the predictions stemming from each ensemble member is lost.
no code implementations • 25 Sep 2019 • Jakub Świątkowski, Kevin Roth, Bastiaan S. Veeling, Linh Tran, Joshua V. Dillon, Jasper Snoek, Stephan Mandt, Tim Salimans, Rodolphe Jenatton, Sebastian Nowozin
Variational Bayesian Inference is a popular methodology for approximating posterior distributions in Bayesian neural networks.
4 code implementations • NeurIPS 2019 • Jie Ren, Peter J. Liu, Emily Fertig, Jasper Snoek, Ryan Poplin, Mark A. DePristo, Joshua V. Dillon, Balaji Lakshminarayanan
We propose a likelihood ratio method for deep generative models which effectively corrects for these confounding background statistics.
Out-of-Distribution Detection
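A hedged toy sketch of the likelihood-ratio score from the entry above: compare an input's log-density under a model of the data against a "background" model intended to absorb confounding background statistics. In the paper both are deep generative models (the background one trained on perturbed inputs); the one-dimensional Gaussians and their parameters below are stand-in assumptions.

```python
from scipy.stats import norm

foreground = norm(loc=2.0, scale=0.5)   # stand-in for a model fit to in-distribution data
background = norm(loc=0.0, scale=3.0)   # stand-in for a model fit to perturbed/background data

def llr_score(x):
    # Likelihood ratio: large when the data model explains x much better
    # than the background model does.
    return foreground.logpdf(x) - background.logpdf(x)

print("in-distribution:", llr_score(2.1))       # high ratio
print("out-of-distribution:", llr_score(-4.0))  # low ratio
```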
2 code implementations • NeurIPS 2019 • Yaniv Ovadia, Emily Fertig, Jie Ren, Zachary Nado, D. Sculley, Sebastian Nowozin, Joshua V. Dillon, Balaji Lakshminarayanan, Jasper Snoek
Modern machine learning methods including deep learning have achieved great success in predictive accuracy for supervised learning tasks, but may still fall short in giving useful estimates of their predictive uncertainty.
1 code implementation • 9 Mar 2019 • Matthew Hoffman, Pavel Sountsov, Joshua V. Dillon, Ian Langmore, Dustin Tran, Srinivas Vasudevan
Hamiltonian Monte Carlo is a powerful algorithm for sampling from difficult-to-normalize posterior distributions.
no code implementations • 2 Jul 2018 • Alexander A. Alemi, Ian Fischer, Joshua V. Dillon
We present a simple case study, demonstrating that Variational Information Bottleneck (VIB) can improve a network's classification calibration as well as its ability to detect out-of-distribution data.
9 code implementations • 28 Nov 2017 • Joshua V. Dillon, Ian Langmore, Dustin Tran, Eugene Brevdo, Srinivas Vasudevan, Dave Moore, Brian Patton, Alex Alemi, Matt Hoffman, Rif A. Saurous
The TensorFlow Distributions library implements a vision of probability theory adapted to the modern deep-learning paradigm of end-to-end differentiable computation.
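A minimal usage sketch of the library described above: batched distributions whose samples and log-densities remain inside TensorFlow's differentiable computation. The particular distribution, shapes, and data are arbitrary.

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

# A batch of two Normals with a trainable location parameter.
dist = tfd.Normal(loc=tf.Variable([0., 1.]), scale=[1., 0.5])

x = dist.sample(3)   # shape [3, 2]; Normal sampling is reparameterized

with tf.GradientTape() as tape:
    # Negative log-likelihood of some data under the batched distribution.
    nll = -tf.reduce_sum(dist.log_prob(tf.constant([[0.2, 0.9]])))
grads = tape.gradient(nll, dist.trainable_variables)

print(x.shape, nll.numpy(), [g.numpy() for g in grads])
```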
1 code implementation • ICML 2018 • Alexander A. Alemi, Ben Poole, Ian Fischer, Joshua V. Dillon, Rif A. Saurous, Kevin Murphy
Recent work in unsupervised representation learning has focused on learning deep directed latent-variable models.
9 code implementations • 1 Dec 2016 • Alexander A. Alemi, Ian Fischer, Joshua V. Dillon, Kevin Murphy
We present a variational approximation to the information bottleneck of Tishby et al. (1999).
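For reference, the resulting per-example variational objective (notation lightly adapted) maximizes a lower bound on I(Z; Y) - beta * I(Z; X):

```latex
\mathcal{L}_{\mathrm{VIB}}
  = \mathbb{E}_{z \sim p(z \mid x)}\!\big[\log q(y \mid z)\big]
  \;-\; \beta \, \mathrm{KL}\!\big(p(z \mid x) \,\|\, r(z)\big)
```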