Search Results for author: Joshua V. Dillon

Found 19 papers, 9 papers with code

Sample what you can't compress

no code implementations • 4 Sep 2024 • Vighnesh Birodkar, Gabriel Barcik, James Lyon, Sergey Ioffe, David Minnen, Joshua V. Dillon

Our work combines autoencoder representation learning with diffusion and is, to our knowledge, the first to demonstrate the efficacy of jointly learning a continuous encoder and decoder under a diffusion-based loss.

Decoder • Representation Learning +1
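
The joint objective described here can be sketched compactly. Everything below (networks, noise schedule, dimensions, names) is illustrative, assuming a DDPM-style epsilon-prediction loss conditioned on the latent; it is not the paper's architecture, only the pattern of an encoder receiving gradients through a diffusion loss.

```python
import math
import tensorflow as tf

D, Z = 64, 8  # data / latent dimensionality (hypothetical)
encoder = tf.keras.Sequential([tf.keras.layers.Dense(Z)])
denoiser = tf.keras.Sequential([tf.keras.layers.Dense(128, activation="relu"),
                                tf.keras.layers.Dense(D)])

def joint_diffusion_loss(x):
    z = encoder(x)                               # continuous latent, trained jointly
    t = tf.random.uniform([tf.shape(x)[0], 1])   # diffusion time in [0, 1)
    alpha_bar = tf.cos(0.5 * math.pi * t) ** 2   # toy cosine noise schedule
    eps = tf.random.normal(tf.shape(x))
    x_t = tf.sqrt(alpha_bar) * x + tf.sqrt(1. - alpha_bar) * eps
    eps_hat = denoiser(tf.concat([x_t, z, t], -1))  # decode conditioned on z and t
    return tf.reduce_mean((eps - eps_hat) ** 2)  # gradient also flows into the encoder

loss = joint_diffusion_loss(tf.random.normal([32, D]))
```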

Automatically Bounding the Taylor Remainder Series: Tighter Bounds and New Applications

1 code implementation • 22 Dec 2022 • Matthew Streeter, Joshua V. Dillon

We then recursively combine the bounds for the elementary functions using an interval arithmetic variant of Taylor-mode automatic differentiation.

Numerical Integration
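
The elementary building block is ordinary interval arithmetic. The toy class below shows the textbook addition and multiplication rules that such a scheme composes recursively; the paper's actual library and its Taylor-mode machinery are not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        # [a, b] + [c, d] = [a + c, b + d]
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # [a, b] * [c, d]: take the min/max over all endpoint products.
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

x = Interval(-1.0, 2.0)
print(x * x + x)  # Interval(lo=-3.0, hi=6.0): a valid, if loose, enclosure of x^2 + x
```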

Weighted Ensemble Self-Supervised Learning

no code implementations • 18 Nov 2022 • Yangjun Ruan, Saurabh Singh, Warren Morningstar, Alexander A. Alemi, Sergey Ioffe, Ian Fischer, Joshua V. Dillon

Ensembling has proven to be a powerful technique for boosting model performance, uncertainty estimation, and robustness in supervised learning.

Diversity • Self-Supervised Learning

PAC^m-Bayes: Narrowing the Empirical Risk Gap in the Misspecified Bayesian Regime

no code implementations • 19 Oct 2020 • Warren R. Morningstar, Alexander A. Alemi, Joshua V. Dillon

The Bayesian posterior minimizes the "inferential risk" which itself bounds the "predictive risk".
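
For the log loss, the bound the snippet refers to is Jensen's inequality: for any posterior (or pseudo-posterior) $q$ over parameters $\theta$,

$$-\log \mathbb{E}_{\theta\sim q}\big[p(y \mid x, \theta)\big] \;\le\; \mathbb{E}_{\theta\sim q}\big[-\log p(y \mid x, \theta)\big],$$

so the predictive risk (left, the risk of the posterior predictive) is bounded by the inferential risk (right). The paper's PAC^m construction averages over m posterior samples inside the logarithm to narrow this gap, which matters most when the model is misspecified.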

Density of States Estimation for Out-of-Distribution Detection

no code implementations • 16 Jun 2020 • Warren R. Morningstar, Cusuh Ham, Andrew G. Gallagher, Balaji Lakshminarayanan, Alexander A. Alemi, Joshua V. Dillon

Drawing on the statistical physics notion of "density of states," the DoSE decision rule avoids direct comparison of model probabilities, and instead utilizes the "probability of the model probability," or indeed the frequency of any reasonable statistic.

Out-of-Distribution Detection • Out of Distribution (OOD) Detection +1
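
A minimal version of such a rule, assuming a single scalar statistic (e.g. a per-example model log-likelihood) and a kernel density estimate; the numbers are synthetic, and the paper combines several statistics rather than one:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
train_stats = rng.normal(loc=-100.0, scale=5.0, size=5000)  # e.g. train log-likelihoods

# Density of the statistic itself: the "probability of the model probability."
dose = gaussian_kde(train_stats)

def ood_score(stat):
    # Inputs whose statistic is atypical under the training density score high.
    return -float(dose.logpdf(stat))

print(ood_score(-100.0) < ood_score(-150.0))  # True: -150 is far less typical
```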

Automatic Differentiation Variational Inference with Mixtures

no code implementations • 3 Mar 2020 • Warren R. Morningstar, Sharad M. Vikram, Cusuh Ham, Andrew Gallagher, Joshua V. Dillon

Automatic Differentiation Variational Inference (ADVI) is a useful tool for efficiently learning probabilistic models in machine learning.

Variational Inference
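
In TensorFlow Probability this setup can be sketched with a trainable mixture surrogate posterior. The target below is a toy bimodal density, and the sketch relies on TFP's stock variational loss (mixtures are not reparameterizable, so a score-function gradient fallback is assumed) rather than whatever estimator the paper proposes:

```python
import tensorflow as tf
import tensorflow_probability as tfp
tfd, tfb = tfp.distributions, tfp.bijectors

# Toy 1-D bimodal target (unnormalized log density).
def target_log_prob(z):
    return tf.reduce_logsumexp(
        tf.stack([tfd.Normal(-2., .5).log_prob(z),
                  tfd.Normal(2., .5).log_prob(z)], axis=0), axis=0)

# Trainable mixture-of-Gaussians surrogate posterior.
locs = tf.Variable([-1., 1.])
scales = tfp.util.TransformedVariable([1., 1.], tfb.Softplus())
logits = tf.Variable([0., 0.])
surrogate = tfd.MixtureSameFamily(
    tfd.Categorical(logits=logits),
    tfd.Normal(loc=locs, scale=scales))

losses = tfp.vi.fit_surrogate_posterior(
    target_log_prob, surrogate,
    optimizer=tf.optimizers.Adam(0.05), num_steps=500)
```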

Joint Distributions for TensorFlow Probability

1 code implementation • 22 Jan 2020 • Dan Piponi, Dave Moore, Joshua V. Dillon

A central tenet of probabilistic programming is that a model is specified exactly once in a canonical representation which is usable by inference algorithms.

Probabilistic Programming
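
TFP's JointDistribution family implements exactly this specify-once idea; a minimal example:

```python
import tensorflow_probability as tfp
tfd = tfp.distributions

# One canonical specification: z ~ Normal(0, 1), then x | z ~ Normal(z, 0.5).
model = tfd.JointDistributionSequential([
    tfd.Normal(loc=0., scale=1.),            # z
    lambda z: tfd.Normal(loc=z, scale=0.5),  # x given z
])

z, x = model.sample()        # ancestral sampling from the same specification...
lp = model.log_prob([z, x])  # ...and the joint density, consumable by any inference algorithm
```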

Can You Trust Your Model's Uncertainty? Evaluating Predictive Uncertainty Under Dataset Shift

2 code implementations • NeurIPS 2019 • Yaniv Ovadia, Emily Fertig, Jie Ren, Zachary Nado, D. Sculley, Sebastian Nowozin, Joshua V. Dillon, Balaji Lakshminarayanan, Jasper Snoek

Modern machine learning methods including deep learning have achieved great success in predictive accuracy for supervised learning tasks, but may still fall short in giving useful estimates of their predictive uncertainty.

Probabilistic Deep Learning

NeuTra-lizing Bad Geometry in Hamiltonian Monte Carlo Using Neural Transport

1 code implementation • 9 Mar 2019 • Matthew Hoffman, Pavel Sountsov, Joshua V. Dillon, Ian Langmore, Dustin Tran, Srinivas Vasudevan

Hamiltonian Monte Carlo is a powerful algorithm for sampling from difficult-to-normalize posterior distributions.

Variational Inference
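
TFP exposes the underlying pattern, running a kernel through a change of variables, via TransformedTransitionKernel. In NeuTra the map is an inverse autoregressive flow fit with variational inference; the sketch below substitutes a fixed diagonal affine bijector for the learned transport, on a deliberately ill-conditioned toy target:

```python
import tensorflow as tf
import tensorflow_probability as tfp
tfd, tfb = tfp.distributions, tfp.bijectors

# Narrow 2-D Gaussian: bad geometry for a single HMC step size.
target = tfd.MultivariateNormalDiag(loc=[0., 0.], scale_diag=[1., 0.01])

# Stand-in for the learned neural transport (assumption: fixed affine map).
transport = tfb.ScaleMatvecDiag([1., 0.01])

kernel = tfp.mcmc.TransformedTransitionKernel(
    inner_kernel=tfp.mcmc.HamiltonianMonteCarlo(
        target_log_prob_fn=target.log_prob,
        step_size=0.1, num_leapfrog_steps=3),
    bijector=transport)  # HMC runs in the well-conditioned pre-image space

samples = tfp.mcmc.sample_chain(
    num_results=500, current_state=tf.zeros([2]),
    kernel=kernel, trace_fn=None)
```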

Uncertainty in the Variational Information Bottleneck

no code implementations • 2 Jul 2018 • Alexander A. Alemi, Ian Fischer, Joshua V. Dillon

We present a simple case study, demonstrating that Variational Information Bottleneck (VIB) can improve a network's classification calibration as well as its ability to detect out-of-distribution data.

General Classification

TensorFlow Distributions

9 code implementations • 28 Nov 2017 • Joshua V. Dillon, Ian Langmore, Dustin Tran, Eugene Brevdo, Srinivas Vasudevan, Dave Moore, Brian Patton, Alex Alemi, Matt Hoffman, Rif A. Saurous

The TensorFlow Distributions library implements a vision of probability theory adapted to the modern deep-learning paradigm of end-to-end differentiable computation.

Deep Learning • Probabilistic Programming
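
The library's two core abstractions, Distributions and the invertible Bijectors that transform them, in a minimal example:

```python
import tensorflow_probability as tfp
tfd, tfb = tfp.distributions, tfp.bijectors

# Distributions: batched, differentiable sampling and log-densities.
normal = tfd.Normal(loc=[0., 1.], scale=[1., 2.])  # a batch of two Normals
x = normal.sample(5)     # shape [5, 2]
normal.log_prob(x)       # shape [5, 2]

# Bijectors: invertible transforms that carry densities along with them.
log_normal = tfd.TransformedDistribution(tfd.Normal(0., 1.), tfb.Exp())
log_normal.log_prob(log_normal.sample(3))
```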

Fixing a Broken ELBO

1 code implementation • ICML 2018 • Alexander A. Alemi, Ben Poole, Ian Fischer, Joshua V. Dillon, Rif A. Saurous, Kevin Murphy

Recent work in unsupervised representation learning has focused on learning deep directed latent-variable models.

Representation Learning
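
The object under study, for an encoder $q(z \mid x)$, decoder $p(x \mid z)$, and prior $p(z)$, is

$$\mathrm{ELBO}(x) \;=\; \underbrace{\mathbb{E}_{q(z\mid x)}\big[\log p(x\mid z)\big]}_{-\,\text{distortion } D} \;-\; \underbrace{\mathrm{KL}\big(q(z\mid x)\,\|\,p(z)\big)}_{\text{rate } R},$$

i.e. $\mathrm{ELBO} = -(D + R)$. The paper's point is that many different rate-distortion trade-offs share the same scalar ELBO, so models should be compared along the full $(R, D)$ curve rather than by that single number.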

Deep Variational Information Bottleneck

9 code implementations • 1 Dec 2016 • Alexander A. Alemi, Ian Fischer, Joshua V. Dillon, Kevin Murphy

We present a variational approximation to the information bottleneck of Tishby et al. (1999).

Adversarial Attack
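
The bottleneck objective and the variational bound the paper optimizes, with encoder $p(z \mid x)$, variational classifier $q(y \mid z)$, and approximate marginal $r(z)$:

$$\max\; I(Z;Y) - \beta\, I(Z;X) \quad\Longrightarrow\quad \mathcal{L}_{\mathrm{VIB}} \;=\; \mathbb{E}_{p(z\mid x)}\big[-\log q(y\mid z)\big] \;+\; \beta\,\mathrm{KL}\big(p(z\mid x)\,\|\,r(z)\big),$$

minimized over training pairs $(x, y)$, with $\beta$ trading prediction against compression.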
