Search Results for author: Jasper Snoek

Found 43 papers, 25 papers with code

Practical Bayesian Optimization of Machine Learning Algorithms

4 code implementations NeurIPS 2012 Jasper Snoek, Hugo Larochelle, Ryan P. Adams

In this work, we consider the automatic tuning problem within the framework of Bayesian optimization, in which a learning algorithm's generalization performance is modeled as a sample from a Gaussian process (GP).

Bayesian Optimization BIG-bench Machine Learning +1
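
The setup described here lends itself to a compact illustration: fit a GP to observed (hyperparameter, validation score) pairs, then pick the next point by maximizing an acquisition function such as expected improvement. Below is a minimal sketch of that loop, not the paper's implementation; the toy objective `val_error` and the one-dimensional candidate grid are hypothetical stand-ins.

```python
# Minimal GP-based Bayesian optimization loop with expected improvement.
# `val_error` is a hypothetical stand-in for an expensive validation metric.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def val_error(log_lr):
    return (log_lr + 3.0) ** 2 + 0.1 * np.random.randn()  # toy objective

def expected_improvement(mu, sigma, best):
    z = (best - mu) / np.maximum(sigma, 1e-9)
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

X = np.random.uniform(-6, 0, size=(3, 1))                 # initial design
y = np.array([val_error(x[0]) for x in X])

for _ in range(20):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    cand = np.linspace(-6, 0, 200).reshape(-1, 1)
    mu, sigma = gp.predict(cand, return_std=True)
    x_next = cand[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X, y = np.vstack([X, x_next]), np.append(y, val_error(x_next[0]))

print("best setting found:", X[np.argmin(y)])
```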

Multi-Task Bayesian Optimization

1 code implementation NeurIPS 2013 Kevin Swersky, Jasper Snoek, Ryan P. Adams

We demonstrate the utility of this new acquisition function by utilizing a small dataset in order to explore hyperparameter settings for a large dataset.

Bayesian Optimization Gaussian Processes +1

A Determinantal Point Process Latent Variable Model for Inhibition in Neural Spiking Data

no code implementations NeurIPS 2013 Jasper Snoek, Richard Zemel, Ryan P. Adams

Point processes are popular models of neural spiking behavior as they provide a statistical distribution over temporal sequences of spikes and help to reveal the complexities underlying a series of recorded action potentials.

Hippocampus Point Processes +1

Input Warping for Bayesian Optimization of Non-stationary Functions

1 code implementation 5 Feb 2014 Jasper Snoek, Kevin Swersky, Richard S. Zemel, Ryan P. Adams

Bayesian optimization has proven to be a highly effective methodology for the global optimization of unknown, expensive and multimodal functions.

Bayesian Optimization Gaussian Processes
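
The warping idea admits a one-function sketch: each input dimension, rescaled to [0, 1], is passed through the CDF of a Kumaraswamy distribution before reaching a stationary GP kernel, so the kernel can behave non-stationarily in the original coordinates. The fixed shape parameters below are purely illustrative; in the paper they are inferred along with the other GP hyperparameters.

```python
# Kumaraswamy-CDF input warping: a stationary kernel applied to w(x) models
# a non-stationary function of x. Shape parameters are illustrative only.
import numpy as np

def kumaraswamy_cdf(x, a, b):
    return 1.0 - (1.0 - x ** a) ** b

x = np.linspace(0.0, 1.0, 6)
print(kumaraswamy_cdf(x, a=2.0, b=0.5))  # warped inputs fed to the GP kernel
```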

Bayesian Optimization with Unknown Constraints

1 code implementation 22 Mar 2014 Michael A. Gelbart, Jasper Snoek, Ryan P. Adams

Recent work on Bayesian optimization has shown its effectiveness in global optimization of difficult black-box objective functions.

Bayesian Optimization

Freeze-Thaw Bayesian Optimization

1 code implementation 16 Jun 2014 Kevin Swersky, Jasper Snoek, Ryan Prescott Adams

In this paper we develop a dynamic form of Bayesian optimization for machine learning models with the goal of rapidly finding good hyperparameter settings.

Bayesian Optimization BIG-bench Machine Learning

Spectral Representations for Convolutional Neural Networks

no code implementations NeurIPS 2015 Oren Rippel, Jasper Snoek, Ryan P. Adams

In this work, we demonstrate that, beyond its advantages for efficient computation, the spectral domain also provides a powerful representation in which to model and train convolutional neural networks (CNNs).

Dimensionality Reduction
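
One concrete construction from this line of work is spectral pooling: instead of striding in the spatial domain, a feature map is pooled by truncating its 2D Fourier representation, which preserves low-frequency structure. A rough numpy sketch (sizes illustrative, normalization details omitted):

```python
# Spectral pooling sketch: downsample by keeping only the low-frequency
# block of the 2D DFT and transforming back.
import numpy as np

def spectral_pool(x, out_h, out_w):
    f = np.fft.fftshift(np.fft.fft2(x))             # low frequencies centred
    h, w = x.shape
    top, left = (h - out_h) // 2, (w - out_w) // 2
    f_crop = f[top:top + out_h, left:left + out_w]  # truncate high frequencies
    return np.real(np.fft.ifft2(np.fft.ifftshift(f_crop)))

x = np.random.randn(32, 32)
print(spectral_pool(x, 16, 16).shape)  # (16, 16)
```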

Learning Latent Permutations with Gumbel-Sinkhorn Networks

2 code implementations ICLR 2018 Gonzalo Mena, David Belanger, Scott Linderman, Jasper Snoek

Permutations and matchings are core building blocks in a variety of latent variable models, as they allow us to align, canonicalize, and sort data.
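
The differentiable relaxation at the core of this method is the Sinkhorn operator: repeated row and column normalization of exp(X/τ) converges to a doubly stochastic matrix and approaches a hard permutation as the temperature τ shrinks (adding Gumbel noise to X gives approximate samples). A minimal sketch:

```python
# Sinkhorn operator: alternate row/column normalization of exp(X / tau).
import numpy as np

def sinkhorn(X, tau=0.1, n_iters=50):
    P = np.exp(X / tau)
    for _ in range(n_iters):
        P /= P.sum(axis=1, keepdims=True)  # normalize rows
        P /= P.sum(axis=0, keepdims=True)  # normalize columns
    return P

X = np.random.randn(4, 4)
print(sinkhorn(X).round(2))  # near-permutation matrix for small tau
```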

Avoiding a Tragedy of the Commons in the Peer Review Process

no code implementations18 Dec 2018 D. Sculley, Jasper Snoek, Alex Wiltschko

In this position paper, we argue that a tragedy of the commons outcome may be avoided by emphasizing the professional aspects of this service.

Position

DPPNet: Approximating Determinantal Point Processes with Deep Networks

no code implementations ICLR 2019 Zelda Mariet, Yaniv Ovadia, Jasper Snoek

Determinantal Point Processes (DPPs) provide an elegant and versatile way to sample sets of items that balance the point-wise quality with the set-wise diversity of selected items.

Point Processes

Can You Trust Your Model's Uncertainty? Evaluating Predictive Uncertainty Under Dataset Shift

2 code implementations NeurIPS 2019 Yaniv Ovadia, Emily Fertig, Jie Ren, Zachary Nado, D. Sculley, Sebastian Nowozin, Joshua V. Dillon, Balaji Lakshminarayanan, Jasper Snoek

Modern machine learning methods including deep learning have achieved great success in predictive accuracy for supervised learning tasks, but may still fall short in giving useful estimates of their predictive uncertainty.

Probabilistic Deep Learning
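
A metric central to this kind of benchmark is expected calibration error (ECE), evaluated on increasingly shifted test sets to see whether a model's confidence degrades gracefully. A small sketch of the standard binned estimator (bin count and inputs are illustrative):

```python
# Binned expected calibration error: average |accuracy - confidence| per bin,
# weighted by the fraction of examples falling in that bin.
import numpy as np

def ece(probs, labels, n_bins=10):
    conf = probs.max(axis=1)
    pred = probs.argmax(axis=1)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            acc = (pred[mask] == labels[mask]).mean()
            total += mask.mean() * abs(acc - conf[mask].mean())
    return total

# e.g. report ece(model_probs, labels) at each corruption severity
```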

Refining the variational posterior through iterative optimization

no code implementations 25 Sep 2019 Marton Havasi, Jasper Snoek, Dustin Tran, Jonathan Gordon, José Miguel Hernández-Lobato

Variational inference (VI) is a popular approach for approximate Bayesian inference that is particularly promising for highly parameterized models such as deep neural networks.

Bayesian Inference Variational Inference

How Good is the Bayes Posterior in Deep Neural Networks Really?

1 code implementation ICML 2020 Florian Wenzel, Kevin Roth, Bastiaan S. Veeling, Jakub Świątkowski, Linh Tran, Stephan Mandt, Jasper Snoek, Tim Salimans, Rodolphe Jenatton, Sebastian Nowozin

In this work we cast doubt on the current understanding of Bayes posteriors in popular deep neural networks: we demonstrate through careful MCMC sampling that the posterior predictive induced by the Bayes posterior yields systematically worse predictions compared to simpler methods including point estimates obtained from SGD.

Bayesian Inference Uncertainty Quantification

Weighting Is Worth the Wait: Bayesian Optimization with Importance Sampling

no code implementations 23 Feb 2020 Setareh Ariafar, Zelda Mariet, Ehsan Elhamifar, Dana Brooks, Jennifer Dy, Jasper Snoek

Casting hyperparameter search as a multi-task Bayesian optimization problem over both hyperparameters and importance sampling design achieves the best of both worlds: by learning a parameterization of IS that trades off evaluation complexity and quality, we improve upon state-of-the-art Bayesian optimization runtime and final validation error across a variety of datasets and complex neural architectures.

Bayesian Optimization

Efficient and Scalable Bayesian Neural Nets with Rank-1 Factors

1 code implementation ICML 2020 Michael W. Dusenberry, Ghassen Jerfel, Yeming Wen, Yi-An Ma, Jasper Snoek, Katherine Heller, Balaji Lakshminarayanan, Dustin Tran

Bayesian neural networks (BNNs) demonstrate promising success in improving the robustness and uncertainty quantification of modern deep learning.

Uncertainty Quantification

Evaluating Prediction-Time Batch Normalization for Robustness under Covariate Shift

no code implementations 19 Jun 2020 Zachary Nado, Shreyas Padhy, D. Sculley, Alexander D'Amour, Balaji Lakshminarayanan, Jasper Snoek

Using this one-line code change, we achieve state-of-the-art results on recent covariate shift benchmarks and an mCE of 60.28% on the challenging ImageNet-C dataset; to our knowledge, this is the best result for any model that does not incorporate additional data augmentation or modification of the training pipeline.

Data Augmentation
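
The "one-line code change" referred to is simply using the test batch's own normalization statistics at prediction time instead of the running averages accumulated during training. A rough PyTorch rendering of the idea, not the authors' code:

```python
# Prediction-time batch norm: put only the BN layers in training mode so they
# normalize with statistics computed from the current (shifted) test batch.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.ReLU())
model.eval()
for m in model.modules():
    if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
        m.train()  # use batch statistics rather than running averages

with torch.no_grad():
    preds = model(torch.randn(32, 3, 32, 32))  # one test batch at a time
```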

Hyperparameter Ensembles for Robustness and Uncertainty Quantification

3 code implementations NeurIPS 2020 Florian Wenzel, Jasper Snoek, Dustin Tran, Rodolphe Jenatton

Ensembles over neural network weights trained from different random initialization, known as deep ensembles, achieve state-of-the-art accuracy and calibration.

Image Classification Uncertainty Quantification
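
Both deep ensembles and the hyperparameter ensembles proposed here share the same prediction-time pattern: average the predictive distributions of members that differ in random initialization and, additionally here, in hyperparameters. A schematic sketch; `members` is a hypothetical list of trained models:

```python
# Ensemble prediction: average member predictive distributions, which tends
# to improve both accuracy and calibration over any single member.
import torch

def ensemble_predict(members, x):
    with torch.no_grad():
        probs = torch.stack([torch.softmax(m(x), dim=-1) for m in members])
    return probs.mean(dim=0)
```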

Cold Posteriors and Aleatoric Uncertainty

no code implementations 31 Jul 2020 Ben Adlam, Jasper Snoek, Samuel L. Smith

Recent work has observed that one can outperform exact inference in Bayesian neural networks by tuning the "temperature" of the posterior on a validation set (the "cold posterior" effect).

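The "temperature" being tuned can be written directly: cold posteriors sample from p_T(θ | D) ∝ exp((log p(D | θ) + log p(θ)) / T) with T < 1, i.e. a sharpened version of the Bayes posterior. A hypothetical helper showing the tempered target an MCMC sampler would use:

```python
# Tempered log-posterior: T = 1 recovers the Bayes posterior, T < 1 the
# "cold" posterior that recent work found to predict better in deep nets.
# `log_lik` and `log_prior` are hypothetical callables.
def tempered_log_posterior(theta, data, log_lik, log_prior, T=0.5):
    return (log_lik(theta, data) + log_prior(theta)) / T
```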

A Spectral Energy Distance for Parallel Speech Synthesis

2 code implementations NeurIPS 2020 Alexey A. Gritsenko, Tim Salimans, Rianne van den Berg, Jasper Snoek, Nal Kalchbrenner

Speech synthesis is an important practical generative modeling problem that has seen great progress over the last few years, with likelihood-based autoregressive neural models now outperforming traditional concatenative systems.

Speech Synthesis

Empirical Frequentist Coverage of Deep Learning Uncertainty Quantification Procedures

1 code implementation 6 Oct 2020 Benjamin Kompa, Jasper Snoek, Andrew Beam

Uncertainty quantification for complex deep learning models is increasingly important as these techniques see growing use in high-stakes, real-world settings.

Uncertainty Quantification

Training independent subnetworks for robust prediction

2 code implementations ICLR 2021 Marton Havasi, Rodolphe Jenatton, Stanislav Fort, Jeremiah Zhe Liu, Jasper Snoek, Balaji Lakshminarayanan, Andrew M. Dai, Dustin Tran

Recent approaches to efficiently ensemble neural networks have shown that strong robustness and uncertainty performance can be achieved with a negligible gain in parameters over the original network.
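
The mechanism behind this paper (MIMO) fits in a short sketch: a single network takes M inputs concatenated together and emits M predictions, so M nearly independent subnetworks are trained at once; at test time the one input is repeated M times and the M heads are averaged, giving ensemble-like behavior at roughly single-model cost. Sizes below are illustrative:

```python
# MIMO sketch: M inputs in, M predictions out of one network; at test time
# the same input fills all M slots and the heads are averaged.
import torch
import torch.nn as nn

M, n_classes = 3, 10
net = nn.Sequential(
    nn.Flatten(),
    nn.Linear(M * 3 * 32 * 32, 256), nn.ReLU(),
    nn.Linear(256, M * n_classes),        # M heads in a single output layer
)

x = torch.randn(8, 3, 32, 32)             # a test batch
logits = net(x.repeat(1, M, 1, 1)).view(8, M, n_classes)
probs = torch.softmax(logits, dim=-1).mean(dim=1)  # average over the M heads
print(probs.shape)  # (8, 10)
```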

Exploring the Uncertainty Properties of Neural Networks' Implicit Priors in the Infinite-Width Limit

1 code implementation 14 Oct 2020 Ben Adlam, Jaehoon Lee, Lechao Xiao, Jeffrey Pennington, Jasper Snoek

This gives us a better understanding of the implicit prior NNs place on function space and allows a direct comparison of the calibration of the NNGP and its finite-width analogue.

General Classification Multi-class Classification +1

Combining Ensembles and Data Augmentation can Harm your Calibration

no code implementations ICLR 2021 Yeming Wen, Ghassen Jerfel, Rafael Muller, Michael W. Dusenberry, Jasper Snoek, Balaji Lakshminarayanan, Dustin Tran

Ensemble methods which average over multiple neural network predictions are a simple approach to improve a model's calibration and robustness.

Data Augmentation

Deep Learning for Bayesian Optimization of Scientific Problems with High-Dimensional Structure

2 code implementations 23 Apr 2021 Samuel Kim, Peter Y. Lu, Charlotte Loh, Jamie Smith, Jasper Snoek, Marin Soljačić

Bayesian optimization (BO) is a popular paradigm for global optimization of expensive black-box functions, but there are many domains where the function is not completely a black-box.

Bayesian Optimization Gaussian Processes

Pre-trained Gaussian processes for Bayesian optimization

4 code implementations 16 Sep 2021 Zi Wang, George E. Dahl, Kevin Swersky, Chansoo Lee, Zachary Nado, Justin Gilmer, Jasper Snoek, Zoubin Ghahramani

Contrary to a common expectation that BO is suited to optimizing black-box functions, it actually requires domain knowledge about those functions to deploy BO successfully.

Bayesian Optimization Gaussian Processes

Sparse MoEs meet Efficient Ensembles

1 code implementation 7 Oct 2021 James Urquhart Allingham, Florian Wenzel, Zelda E Mariet, Basil Mustafa, Joan Puigcerver, Neil Houlsby, Ghassen Jerfel, Vincent Fortuin, Balaji Lakshminarayanan, Jasper Snoek, Dustin Tran, Carlos Riquelme Ruiz, Rodolphe Jenatton

Machine learning models based on the aggregated outputs of submodels, either at the activation or prediction levels, often exhibit strong performance compared to individual models.

Few-Shot Learning

Predicting the utility of search spaces for black-box optimization: a simple, budget-aware approach

no code implementations 15 Dec 2021 Setareh Ariafar, Justin Gilmer, Zack Nado, Jasper Snoek, Rodolphe Jenatton, George E. Dahl

For example, when tuning hyperparameters for machine learning pipelines on a new problem given a limited budget, one must strike a balance between excluding potentially promising regions and keeping the search space small enough to be tractable.

Bayesian Optimization

A Simple Approach to Improve Single-Model Deep Uncertainty via Distance-Awareness

2 code implementations 1 May 2022 Jeremiah Zhe Liu, Shreyas Padhy, Jie Ren, Zi Lin, Yeming Wen, Ghassen Jerfel, Zack Nado, Jasper Snoek, Dustin Tran, Balaji Lakshminarayanan

The most popular approaches to estimate predictive uncertainty in deep learning are methods that combine predictions from multiple neural networks, such as Bayesian neural networks (BNNs) and deep ensembles.

Data Augmentation Probabilistic Deep Learning +1

Pre-training helps Bayesian optimization too

1 code implementation 7 Jul 2022 Zi Wang, George E. Dahl, Kevin Swersky, Chansoo Lee, Zelda Mariet, Zachary Nado, Justin Gilmer, Jasper Snoek, Zoubin Ghahramani

Contrary to a common belief that BO is suited to optimizing black-box functions, it actually requires domain knowledge on characteristics of those functions to deploy BO successfully.

Bayesian Optimization

Kernel Regression with Infinite-Width Neural Networks on Millions of Examples

no code implementations 9 Mar 2023 Ben Adlam, Jaehoon Lee, Shreyas Padhy, Zachary Nado, Jasper Snoek

Using this approach, we study scaling laws of several neural kernels across many orders of magnitude for the CIFAR-5m dataset.

Data Augmentation regression
