Search Results for author: Jasper Snoek

Found 39 papers, 19 papers with code

A Simple Approach to Improve Single-Model Deep Uncertainty via Distance-Awareness

1 code implementation • 1 May 2022 • Jeremiah Zhe Liu, Shreyas Padhy, Jie Ren, Zi Lin, Yeming Wen, Ghassen Jerfel, Zack Nado, Jasper Snoek, Dustin Tran, Balaji Lakshminarayanan

The most popular approaches to estimate predictive uncertainty in deep learning are methods that combine predictions from multiple neural networks, such as Bayesian neural networks (BNNs) and deep ensembles.

Data Augmentation • Probabilistic Deep Learning
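For context, the deep ensembles this abstract contrasts against simply average the predictive distributions of several independently trained networks. A minimal sketch, assuming `models` is any iterable of callables returning class probabilities (a hypothetical interface, not this paper's code):

```python
import numpy as np

def ensemble_predict(models, x):
    """Average the predictive distributions of independently trained
    ensemble members. Each model maps inputs to class probabilities,
    e.g. of shape (batch, n_classes)."""
    probs = np.stack([m(x) for m in models])  # (n_members, batch, n_classes)
    return probs.mean(axis=0)
```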

Predicting the utility of search spaces for black-box optimization: a simple, budget-aware approach

no code implementations • 15 Dec 2021 • Setareh Ariafar, Justin Gilmer, Zack Nado, Jasper Snoek, Rodolphe Jenatton, George E. Dahl

For example, when tuning hyperparameters for machine learning pipelines on a new problem given a limited budget, one must strike a balance between excluding potentially promising regions and keeping the search space small enough to be tractable.

Pre-training helps Bayesian optimization too

2 code implementations • 16 Sep 2021 • Zi Wang, George E. Dahl, Kevin Swersky, Chansoo Lee, Zelda Mariet, Zachary Nado, Justin Gilmer, Jasper Snoek, Zoubin Ghahramani

Contrary to the common belief that BO is well suited to optimizing black-box functions, deploying BO successfully actually requires domain knowledge about the characteristics of those functions.

Deep Learning for Bayesian Optimization of Scientific Problems with High-Dimensional Structure

1 code implementation • 23 Apr 2021 • Samuel Kim, Peter Y. Lu, Charlotte Loh, Jamie Smith, Jasper Snoek, Marin Soljačić

Bayesian optimization (BO) is a popular paradigm for global optimization of expensive black-box functions, but there are many domains where the function is not completely black-box.

Gaussian Processes

Exploring the Uncertainty Properties of Neural Networks’ Implicit Priors in the Infinite-Width Limit

no code implementations • ICLR 2021 • Ben Adlam, Jaehoon Lee, Lechao Xiao, Jeffrey Pennington, Jasper Snoek

This gives us a better understanding of the implicit prior NNs place on function space and allows a direct comparison of the calibration of the NNGP and its finite-width analogue.

General Classification • Multi-class Classification +1
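For reference, the NNGP kernel this paper compares against is built layer by layer; for a network with nonlinearity $\phi$ the recursion has, schematically, the following form (a sketch of the standard construction, not the paper's exact notation):

```latex
K^{(\ell+1)}(x, x') = \sigma_b^2 + \sigma_w^2\,
  \mathbb{E}_{f \sim \mathcal{GP}\left(0,\, K^{(\ell)}\right)}
  \left[\phi\big(f(x)\big)\, \phi\big(f(x')\big)\right]
```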

Combining Ensembles and Data Augmentation can Harm your Calibration

no code implementations • ICLR 2021 • Yeming Wen, Ghassen Jerfel, Rafael Muller, Michael W. Dusenberry, Jasper Snoek, Balaji Lakshminarayanan, Dustin Tran

Ensemble methods which average over multiple neural network predictions are a simple approach to improve a model's calibration and robustness.

Data Augmentation

Exploring the Uncertainty Properties of Neural Networks' Implicit Priors in the Infinite-Width Limit

1 code implementation • 14 Oct 2020 • Ben Adlam, Jaehoon Lee, Lechao Xiao, Jeffrey Pennington, Jasper Snoek

This gives us a better understanding of the implicit prior NNs place on function space and allows a direct comparison of the calibration of the NNGP and its finite-width analogue.

General Classification • Multi-class Classification +1

Training independent subnetworks for robust prediction

1 code implementation • ICLR 2021 • Marton Havasi, Rodolphe Jenatton, Stanislav Fort, Jeremiah Zhe Liu, Jasper Snoek, Balaji Lakshminarayanan, Andrew M. Dai, Dustin Tran

Recent approaches to efficiently ensemble neural networks have shown that strong robustness and uncertainty performance can be achieved with a negligible gain in parameters over the original network.
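The idea behind this line of work is multi-input multi-output (MIMO) ensembling: several subnetworks share one backbone by concatenating several inputs and emitting several predictions. A rough PyTorch-flavored sketch, with illustrative (not the paper's) layer sizes:

```python
import torch
import torch.nn as nn

class MIMONet(nn.Module):
    """Sketch of multi-input multi-output ensembling: m subnetworks
    share one backbone by concatenating m inputs and producing m
    sets of logits, at roughly 1x parameter cost."""
    def __init__(self, in_dim, n_classes, m=3, width=256):
        super().__init__()
        self.m, self.n_classes = m, n_classes
        self.body = nn.Sequential(
            nn.Linear(in_dim * m, width), nn.ReLU(),
            nn.Linear(width, n_classes * m),
        )

    def forward(self, xs):  # xs: (batch, m, in_dim)
        logits = self.body(xs.flatten(1))
        return logits.view(-1, self.m, self.n_classes)

# During training each of the m slots sees an independent example;
# at test time the same input is tiled m times and the m softmax
# outputs are averaged to form the ensemble prediction.
```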

Empirical Frequentist Coverage of Deep Learning Uncertainty Quantification Procedures

no code implementations • 6 Oct 2020 • Benjamin Kompa, Jasper Snoek, Andrew Beam

Uncertainty quantification for complex deep learning models is increasingly important as these techniques see growing use in high-stakes, real-world settings.

A Spectral Energy Distance for Parallel Speech Synthesis

2 code implementations • NeurIPS 2020 • Alexey A. Gritsenko, Tim Salimans, Rianne van den Berg, Jasper Snoek, Nal Kalchbrenner

Speech synthesis is an important practical generative modeling problem that has seen great progress over the last few years, with likelihood-based autoregressive neural models now outperforming traditional concatenative systems.

Speech Synthesis

Cold Posteriors and Aleatoric Uncertainty

no code implementations • 31 Jul 2020 • Ben Adlam, Jasper Snoek, Samuel L. Smith

Recent work has observed that one can outperform exact inference in Bayesian neural networks by tuning the "temperature" of the posterior on a validation set (the "cold posterior" effect).
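Concretely, the tempered posterior in question scales the log-posterior by an inverse temperature; in the notation common to the cold-posterior literature (a sketch, not necessarily this paper's exact notation):

```latex
p_T(\theta \mid \mathcal{D}) \;\propto\; \exp\!\big(-U(\theta)/T\big),
\qquad
U(\theta) = -\sum_{i=1}^{n} \log p(y_i \mid x_i, \theta) - \log p(\theta)
```

Here $T = 1$ recovers the exact Bayes posterior, while $T < 1$ gives the "cold" posterior that empirically performs better.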

Revisiting One-vs-All Classifiers for Predictive Uncertainty and Out-of-Distribution Detection in Neural Networks

no code implementations • 10 Jul 2020 • Shreyas Padhy, Zachary Nado, Jie Ren, Jeremiah Liu, Jasper Snoek, Balaji Lakshminarayanan

Accurate estimation of predictive uncertainty in modern neural networks is critical to achieve well calibrated predictions and detect out-of-distribution (OOD) inputs.

OOD Detection • Out-of-Distribution Detection

Hyperparameter Ensembles for Robustness and Uncertainty Quantification

2 code implementations • NeurIPS 2020 • Florian Wenzel, Jasper Snoek, Dustin Tran, Rodolphe Jenatton

Ensembles over neural network weights trained from different random initialization, known as deep ensembles, achieve state-of-the-art accuracy and calibration.

Image Classification

Evaluating Prediction-Time Batch Normalization for Robustness under Covariate Shift

no code implementations • 19 Jun 2020 • Zachary Nado, Shreyas Padhy, D. Sculley, Alexander D'Amour, Balaji Lakshminarayanan, Jasper Snoek

Using this one-line code change, we achieve state-of-the-art results on recent covariate shift benchmarks and an mCE of 60.28% on the challenging ImageNet-C dataset; to our knowledge, this is the best result for any model that does not incorporate additional data augmentation or modification of the training pipeline.

Data Augmentation
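The one-line change described here amounts to normalizing with the test batch's statistics rather than the running averages accumulated during training. A PyTorch-flavored sketch, with a hypothetical helper name (not this paper's code):

```python
import torch
from torch import nn

@torch.no_grad()
def predict_with_batch_stats(model, x):
    """Run inference with BatchNorm layers left in training mode, so
    normalization uses the current (test) batch's statistics instead
    of the stored running averages."""
    model.eval()
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            m.train()  # caveat: this also updates the running stats
    return model(x)
```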

Efficient and Scalable Bayesian Neural Nets with Rank-1 Factors

1 code implementation • ICML 2020 • Michael W. Dusenberry, Ghassen Jerfel, Yeming Wen, Yi-An Ma, Jasper Snoek, Katherine Heller, Balaji Lakshminarayanan, Dustin Tran

Bayesian neural networks (BNNs) demonstrate promising success in improving the robustness and uncertainty quantification of modern deep learning.
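For context, the rank-1 parameterization these BNNs use perturbs a shared weight matrix $W$ elementwise by the outer product of two vectors per ensemble member $k$, so only those vectors carry a distribution (schematic form, as I understand the construction):

```latex
W_k = W \circ \left( r_k s_k^\top \right)
```

where $\circ$ denotes elementwise multiplication and $r_k$, $s_k$ are the learned rank-1 factors.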

Weighting Is Worth the Wait: Bayesian Optimization with Importance Sampling

no code implementations • 23 Feb 2020 • Setareh Ariafar, Zelda Mariet, Ehsan Elhamifar, Dana Brooks, Jennifer Dy, Jasper Snoek

Casting hyperparameter search as a multi-task Bayesian optimization problem over both hyperparameters and importance sampling design achieves the best of both worlds: by learning a parameterization of IS that trades off evaluation complexity and quality, we improve upon the state-of-the-art runtime and final validation error of Bayesian optimization across a variety of datasets and complex neural architectures.

How Good is the Bayes Posterior in Deep Neural Networks Really?

1 code implementation • ICML 2020 • Florian Wenzel, Kevin Roth, Bastiaan S. Veeling, Jakub Świątkowski, Linh Tran, Stephan Mandt, Jasper Snoek, Tim Salimans, Rodolphe Jenatton, Sebastian Nowozin

In this work we cast doubt on the current understanding of Bayes posteriors in popular deep neural networks: we demonstrate through careful MCMC sampling that the posterior predictive induced by the Bayes posterior yields systematically worse predictions compared to simpler methods including point estimates obtained from SGD.

Bayesian Inference

Refining the variational posterior through iterative optimization

no code implementations • 25 Sep 2019 • Marton Havasi, Jasper Snoek, Dustin Tran, Jonathan Gordon, José Miguel Hernández-Lobato

Variational inference (VI) is a popular approach for approximate Bayesian inference that is particularly promising for highly parameterized models such as deep neural networks.

Bayesian Inference • Variational Inference
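As a reminder, VI chooses an approximate posterior $q$ by maximizing the evidence lower bound (ELBO) on the marginal likelihood; this is the objective the iterative refinement builds on:

```latex
\mathcal{L}(q) = \mathbb{E}_{q(\theta)}\big[\log p(\mathcal{D} \mid \theta)\big]
  - \mathrm{KL}\big(q(\theta) \,\|\, p(\theta)\big) \;\le\; \log p(\mathcal{D})
```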

Can You Trust Your Model's Uncertainty? Evaluating Predictive Uncertainty Under Dataset Shift

2 code implementations • NeurIPS 2019 • Yaniv Ovadia, Emily Fertig, Jie Ren, Zachary Nado, D. Sculley, Sebastian Nowozin, Joshua V. Dillon, Balaji Lakshminarayanan, Jasper Snoek

Modern machine learning methods including deep learning have achieved great success in predictive accuracy for supervised learning tasks, but may still fall short in giving useful estimates of their predictive uncertainty.

Probabilistic Deep Learning

DPPNet: Approximating Determinantal Point Processes with Deep Networks

no code implementations • ICLR 2019 • Zelda Mariet, Yaniv Ovadia, Jasper Snoek

Determinantal Point Processes (DPPs) provide an elegant and versatile way to sample sets of items that balance the point-wise quality with the set-wise diversity of selected items.

Point Processes
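For background, an L-ensemble DPP scores each subset $S$ of the ground set by a determinant of the kernel submatrix $L_S$, which is exactly what couples per-item quality with set-wise diversity:

```latex
P(Y = S) = \frac{\det(L_S)}{\det(L + I)}
```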

Avoiding a Tragedy of the Commons in the Peer Review Process

no code implementations • 18 Dec 2018 • D. Sculley, Jasper Snoek, Alex Wiltschko

In this position paper, we argue that a tragedy of the commons outcome may be avoided by emphasizing the professional aspects of this service.

Learning Latent Permutations with Gumbel-Sinkhorn Networks

2 code implementations • ICLR 2018 • Gonzalo Mena, David Belanger, Scott Linderman, Jasper Snoek

Permutations and matchings are core building blocks in a variety of latent variable models, as they allow us to align, canonicalize, and sort data.
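The Sinkhorn operator at the heart of this construction repeatedly normalizes the rows and columns of $\exp(X/\tau)$, yielding a doubly stochastic relaxation of a permutation matrix. A minimal sketch (iteration count and temperature are illustrative):

```python
import numpy as np

def sinkhorn(x, tau=1.0, n_iters=20):
    """Sketch of the Sinkhorn operator: alternately normalize rows
    and columns of exp(x / tau), working in log space for numerical
    stability. As tau -> 0 the output approaches a hard permutation."""
    log_p = x / tau
    for _ in range(n_iters):
        log_p -= np.logaddexp.reduce(log_p, axis=1, keepdims=True)  # rows
        log_p -= np.logaddexp.reduce(log_p, axis=0, keepdims=True)  # cols
    return np.exp(log_p)
```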

Spectral Representations for Convolutional Neural Networks

no code implementations • NeurIPS 2015 • Oren Rippel, Jasper Snoek, Ryan P. Adams

In this work, we demonstrate that, beyond its advantages for efficient computation, the spectral domain also provides a powerful representation in which to model and train convolutional neural networks (CNNs).

Dimensionality Reduction
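One concrete use of the spectral domain in this line of work is spectral pooling: downsampling by truncating high frequencies rather than by max-pooling. A rough numpy sketch (single-channel, centered crop; edge cases for odd/even sizes glossed over):

```python
import numpy as np

def spectral_pool(x, out_h, out_w):
    """Pool a 2D array by keeping only the central (low-frequency)
    block of its Fourier transform, then inverting back."""
    f = np.fft.fftshift(np.fft.fft2(x))
    h, w = f.shape
    top, left = (h - out_h) // 2, (w - out_w) // 2
    f_crop = f[top:top + out_h, left:left + out_w]
    return np.real(np.fft.ifft2(np.fft.ifftshift(f_crop)))
```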

Raiders of the Lost Architecture: Kernels for Bayesian Optimization in Conditional Parameter Spaces

no code implementations • 14 Sep 2014 • Kevin Swersky, David Duvenaud, Jasper Snoek, Frank Hutter, Michael A. Osborne

In practical Bayesian optimization, we must often search over structures with differing numbers of parameters.

Freeze-Thaw Bayesian Optimization

no code implementations • 16 Jun 2014 • Kevin Swersky, Jasper Snoek, Ryan Prescott Adams

In this paper we develop a dynamic form of Bayesian optimization for machine learning models with the goal of rapidly finding good hyperparameter settings.

Bayesian Optimization with Unknown Constraints

1 code implementation • 22 Mar 2014 • Michael A. Gelbart, Jasper Snoek, Ryan P. Adams

Recent work on Bayesian optimization has shown its effectiveness in global optimization of difficult black-box objective functions.

Input Warping for Bayesian Optimization of Non-stationary Functions

1 code implementation • 5 Feb 2014 • Jasper Snoek, Kevin Swersky, Richard S. Zemel, Ryan P. Adams

Bayesian optimization has proven to be a highly effective methodology for the global optimization of unknown, expensive and multimodal functions.

Gaussian Processes
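The warping in question maps each normalized input dimension through a Beta CDF before the stationary kernel is applied, letting the model stretch or compress different regions of the space. Schematically (a sketch of the construction):

```latex
w_d(x_d) = \mathrm{BetaCDF}(x_d;\, \alpha_d, \beta_d),
\qquad
k_{\mathrm{warped}}(x, x') = k\big(w(x), w(x')\big)
```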

A Determinantal Point Process Latent Variable Model for Inhibition in Neural Spiking Data

no code implementations • NeurIPS 2013 • Jasper Snoek, Richard Zemel, Ryan P. Adams

Point processes are popular models of neural spiking behavior as they provide a statistical distribution over temporal sequences of spikes and help to reveal the complexities underlying a series of recorded action potentials.

Hippocampus • Point Processes

Multi-Task Bayesian Optimization

1 code implementation • NeurIPS 2013 • Kevin Swersky, Jasper Snoek, Ryan P. Adams

We demonstrate the utility of this new acquisition function by utilizing a small dataset in order to explore hyperparameter settings for a large dataset.

Gaussian Processes • Image Classification
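The transfer between a small and a large dataset works through a kernel over (input, task) pairs, roughly the product (intrinsic-coregionalization) form common in multi-task GPs (a sketch, not necessarily the paper's exact kernel):

```latex
K\big((x, t), (x', t')\big) = K_{\mathrm{task}}(t, t') \, K_{x}(x, x')
```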

Practical Bayesian Optimization of Machine Learning Algorithms

4 code implementations • NeurIPS 2012 • Jasper Snoek, Hugo Larochelle, Ryan P. Adams

In this work, we consider the automatic tuning problem within the framework of Bayesian optimization, in which a learning algorithm's generalization performance is modeled as a sample from a Gaussian process (GP).

Hyperparameter Optimization
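The GP surrogate is paired with an acquisition function such as expected improvement, which scores candidate hyperparameters from the GP's posterior mean and standard deviation. A minimal sketch for minimization (function name and interface are illustrative):

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best_f):
    """Expected improvement (for minimization) under a Gaussian
    posterior with mean `mu` and std `sigma` at candidate points;
    `best_f` is the best objective value observed so far."""
    sigma = np.maximum(sigma, 1e-12)  # guard against zero variance
    z = (best_f - mu) / sigma
    return (best_f - mu) * norm.cdf(z) + sigma * norm.pdf(z)
```

The next point to evaluate is the candidate maximizing this score, balancing exploitation (low posterior mean) against exploration (high posterior uncertainty).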
