Search Results for author: Andrew Gordon Wilson

Found 75 papers, 56 papers with code

Last Layer Re-Training is Sufficient for Robustness to Spurious Correlations

1 code implementation 6 Apr 2022 Polina Kirichenko, Pavel Izmailov, Andrew Gordon Wilson

Neural network classifiers can largely rely on simple spurious features, such as backgrounds, to make predictions.
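
The snippet above only states the problem; as a rough illustration of the title's last-layer re-training idea (a simplified sketch, not the paper's exact procedure), one can freeze a pretrained feature extractor and refit only the final linear layer on a small, group-balanced held-out set. All names, shapes, and hyperparameters below are placeholders.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Stand-ins for a pretrained backbone and a small group-balanced held-out set.
    feature_extractor = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
    heldout_x = torch.randn(256, 32)
    heldout_y = torch.randint(0, 2, (256,))

    for p in feature_extractor.parameters():
        p.requires_grad_(False)                # keep the learned features fixed

    classifier = nn.Linear(64, 2)              # re-initialised last layer
    opt = torch.optim.SGD(classifier.parameters(), lr=1e-2, weight_decay=1e-3)

    for _ in range(100):
        feats = feature_extractor(heldout_x)   # frozen features, no gradient flows here
        loss = F.cross_entropy(classifier(feats), heldout_y)
        opt.zero_grad(); loss.backward(); opt.step()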

On Uncertainty, Tempering, and Data Augmentation in Bayesian Classification

1 code implementation 30 Mar 2022 Sanyam Kapoor, Wesley J. Maddox, Pavel Izmailov, Andrew Gordon Wilson

In Bayesian regression, we often use a Gaussian observation model, where we control the level of aleatoric uncertainty with a noise variance parameter.

Classification Data Augmentation

Bayesian Model Selection, the Marginal Likelihood, and Generalization

1 code implementation 23 Feb 2022 Sanae Lotfi, Pavel Izmailov, Gregory Benton, Micah Goldblum, Andrew Gordon Wilson

We provide a partial remedy through a conditional marginal likelihood, which we show is more aligned with generalization, and practically valuable for large-scale hyperparameter learning, such as in deep kernel learning.
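
For intuition (my notation, not a quote from the paper): the log marginal likelihood decomposes sequentially over the data, and conditioning it on an initial chunk of $m$ points gives

$\log p(\mathcal{D}_{m+1:n} \mid \mathcal{D}_{1:m}, \mathcal{M}) \;=\; \log p(\mathcal{D}_{1:n} \mid \mathcal{M}) - \log p(\mathcal{D}_{1:m} \mid \mathcal{M}) \;=\; \sum_{i=m+1}^{n} \log p(\mathcal{D}_i \mid \mathcal{D}_{1:i-1}, \mathcal{M}),$

which, if I read the abstract correctly, is roughly the conditional marginal likelihood: it discounts how well the prior alone explained the first points and so tracks generalization more closely.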

Model Selection Neural Architecture Search

Deconstructing the Inductive Biases of Hamiltonian Neural Networks

1 code implementation ICLR 2022 Nate Gruver, Marc Finzi, Samuel Stanton, Andrew Gordon Wilson

Physics-inspired neural networks (NNs), such as Hamiltonian or Lagrangian NNs, dramatically outperform other learned dynamics models by leveraging strong inductive biases.

When are Iterative Gaussian Processes Reliably Accurate?

1 code implementation 31 Dec 2021 Wesley J. Maddox, Sanyam Kapoor, Andrew Gordon Wilson

While recent work on conjugate gradient methods and Lanczos decompositions has achieved scalable Gaussian process inference with highly accurate point predictions, in several implementations these iterative methods appear to struggle with numerical instabilities in learning kernel hyperparameters, and with poor test likelihoods.
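
The point of these iterative methods is that solves like $K^{-1}\mathbf{y}$ only require matrix-vector products $K\mathbf{v}$. A bare-bones conjugate gradient solver in that style (a generic illustration, not the implementation studied in the paper):

    import numpy as np

    def conjugate_gradient(mv, b, tol=1e-8, max_iter=1000):
        """Solve K x = b given only a matrix-vector product mv(v) = K v."""
        x = np.zeros_like(b)
        r = b - mv(x)
        p = r.copy()
        rs = r @ r
        for _ in range(max_iter):
            Kp = mv(p)
            alpha = rs / (p @ Kp)
            x += alpha * p
            r -= alpha * Kp
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x

    # toy check on a small RBF kernel matrix with jitter
    X = np.random.randn(50, 2)
    K = np.exp(-0.5 * ((X[:, None] - X[None]) ** 2).sum(-1)) + 1e-3 * np.eye(50)
    y = np.random.randn(50)
    alpha = conjugate_gradient(lambda v: K @ v, y)   # K^{-1} y, as used for the GP predictive mean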

Gaussian Processes

Residual Pathway Priors for Soft Equivariance Constraints

1 code implementation NeurIPS 2021 Marc Finzi, Gregory Benton, Andrew Gordon Wilson

There is often a trade-off between building deep learning systems that are expressive enough to capture the nuances of reality, and having the right inductive biases for efficient learning.

Conditioning Sparse Variational Gaussian Processes for Online Decision-making

1 code implementation NeurIPS 2021 Wesley J. Maddox, Samuel Stanton, Andrew Gordon Wilson

With a principled representation of uncertainty and closed form posterior updates, Gaussian processes (GPs) are a natural choice for online decision making.

Active Learning Decision Making +2

Low-Precision Stochastic Gradient Langevin Dynamics

no code implementations 29 Sep 2021 Ruqi Zhang, Andrew Gordon Wilson, Christopher De Sa

Low-precision optimization is widely used to accelerate large-scale deep learning.

Quantization

Task-agnostic Continual Learning with Hybrid Probabilistic Models

no code implementations ICML Workshop INNF 2021 Polina Kirichenko, Mehrdad Farajtabar, Dushyant Rao, Balaji Lakshminarayanan, Nir Levine, Ang Li, Huiyi Hu, Andrew Gordon Wilson, Razvan Pascanu

Learning new tasks continuously without forgetting on a constantly changing data distribution is essential for real-world problems but extremely challenging for modern deep learning.

Anomaly Detection Continual Learning +1

Bayesian Optimization with High-Dimensional Outputs

2 code implementations NeurIPS 2021 Wesley J. Maddox, Maximilian Balandat, Andrew Gordon Wilson, Eytan Bakshy

However, the Gaussian Process (GP) models typically used as probabilistic surrogates for multi-task Bayesian Optimization scale poorly with the number of outcomes, greatly limiting applicability.

Dangers of Bayesian Model Averaging under Covariate Shift

1 code implementation NeurIPS 2021 Pavel Izmailov, Patrick Nicholson, Sanae Lotfi, Andrew Gordon Wilson

Approximate Bayesian inference for neural networks is considered a robust alternative to standard training, often providing good performance on out-of-distribution data.

Bayesian Inference

SKIing on Simplices: Kernel Interpolation on the Permutohedral Lattice for Scalable Gaussian Processes

1 code implementation 12 Jun 2021 Sanyam Kapoor, Marc Finzi, Ke Alexander Wang, Andrew Gordon Wilson

State-of-the-art methods for scalable Gaussian processes use iterative algorithms, requiring fast matrix vector multiplies (MVMs) with the covariance kernel.

Gaussian Processes

Does Knowledge Distillation Really Work?

1 code implementation NeurIPS 2021 Samuel Stanton, Pavel Izmailov, Polina Kirichenko, Alexander A. Alemi, Andrew Gordon Wilson

Knowledge distillation is a popular technique for training a small student network to emulate a larger teacher model, such as an ensemble of networks.
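
For reference, the standard distillation objective (the generic recipe, not this paper's specific experimental setup) matches temperature-softened teacher and student distributions with a KL term, blended with the usual hard-label loss:

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
        """Soft KL term against the teacher plus the usual cross-entropy on labels."""
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=-1),
            F.softmax(teacher_logits / T, dim=-1),
            reduction="batchmean",
        ) * (T * T)                       # T^2 keeps soft-term gradients comparable to the hard loss
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1 - alpha) * hard

    # toy usage
    s = torch.randn(8, 10, requires_grad=True)
    t = torch.randn(8, 10)
    y = torch.randint(0, 10, (8,))
    distillation_loss(s, t, y).backward()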

Knowledge Distillation

Scalable Variational Gaussian Processes via Harmonic Kernel Decomposition

2 code implementations 10 Jun 2021 Shengyang Sun, Jiaxin Shi, Andrew Gordon Wilson, Roger Grosse

We introduce a new scalable variational Gaussian process approximation which provides a high fidelity approximation while retaining general applicability.

Gaussian Processes

What Are Bayesian Neural Network Posteriors Really Like?

3 code implementations 29 Apr 2021 Pavel Izmailov, Sharad Vikram, Matthew D. Hoffman, Andrew Gordon Wilson

The posterior over Bayesian neural network (BNN) parameters is extremely high-dimensional and non-convex.

Data Augmentation Variational Inference

A Practical Method for Constructing Equivariant Multilayer Perceptrons for Arbitrary Matrix Groups

4 code implementations 19 Apr 2021 Marc Finzi, Max Welling, Andrew Gordon Wilson

Symmetries and equivariance are fundamental to the generalization of neural networks on domains such as images, graphs, and point clouds.

Translation

Fast Adaptation with Linearized Neural Networks

1 code implementation 2 Mar 2021 Wesley J. Maddox, Shuai Tang, Pablo Garcia Moreno, Andrew Gordon Wilson, Andreas Damianou

The inductive biases of trained neural networks are difficult to understand and, consequently, to adapt to new settings.
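
The central object in this line of work, as I understand it, is the first-order Taylor expansion of the network in its parameters around the pretrained solution $\theta_0$:

$f_{\mathrm{lin}}(x; \theta) = f(x; \theta_0) + J_{\theta_0}(x)\,(\theta - \theta_0), \qquad J_{\theta_0}(x) = \nabla_\theta f(x; \theta)\big|_{\theta = \theta_0},$

which is linear in $\theta$, so adapting to a new task with a Gaussian likelihood reduces to inference in a kernel model defined by the Jacobian features.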

Domain Adaptation Gaussian Processes +2

Kernel Interpolation for Scalable Online Gaussian Processes

1 code implementation 2 Mar 2021 Samuel Stanton, Wesley J. Maddox, Ian Delbridge, Andrew Gordon Wilson

Gaussian processes (GPs) provide a gold standard for performance in online settings, such as sample-efficient control and black box optimization, where we need to update a posterior distribution as we acquire data in a sequential fashion.

Gaussian Processes

Loss Surface Simplexes for Mode Connecting Volumes and Fast Ensembling

1 code implementation 25 Feb 2021 Gregory W. Benton, Wesley J. Maddox, Sanae Lotfi, Andrew Gordon Wilson

In this paper, we show that there are mode-connecting simplicial complexes that form multi-dimensional manifolds of low loss, connecting many independently trained models.

Rethinking Parameter Counting: Effective Dimensionality Revisited

no code implementations 1 Jan 2021 Gregory Benton, Wesley Maddox, Andrew Gordon Wilson

Neural networks appear to have mysterious generalization properties when using parameter counting as a proxy for complexity.

Model Selection

Simplifying Hamiltonian and Lagrangian Neural Networks via Explicit Constraints

1 code implementation NeurIPS 2020 Marc Finzi, Ke Alexander Wang, Andrew Gordon Wilson

Reasoning about the physical world requires models that are endowed with the right inductive biases to learn the underlying dynamics.

Learning Invariances in Neural Networks

1 code implementation 22 Oct 2020 Gregory Benton, Marc Finzi, Pavel Izmailov, Andrew Gordon Wilson

Invariances to translations have imbued convolutional neural networks with powerful generalization properties.
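
A stripped-down version of the underlying mechanism (the paper additionally learns the augmentation distribution end to end; this sketch only shows averaging predictions over sampled transformations, with all names below illustrative):

    import torch
    import torch.nn as nn

    def averaged_forward(net, x, sample_transform, n_samples=8):
        """Approximately invariant prediction: average the network over sampled transformations."""
        outs = [net(sample_transform(x)) for _ in range(n_samples)]
        return torch.stack(outs).mean(dim=0)

    # toy usage: approximate invariance to small random shifts of a 1-D signal
    net = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 3))
    shift = lambda x: torch.roll(x, shifts=int(torch.randint(-2, 3, (1,))), dims=-1)
    x = torch.randn(4, 16)
    y = averaged_forward(net, x, shift)   # shape (4, 3)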

Image Classification Molecular Property Prediction

On the model-based stochastic value gradient for continuous reinforcement learning

1 code implementation 28 Aug 2020 Brandon Amos, Samuel Stanton, Denis Yarats, Andrew Gordon Wilson

For over a decade, model-based reinforcement learning has been seen as a way to leverage control-based domain knowledge to improve the sample-efficiency of reinforcement learning agents.

Continuous Control Model-based Reinforcement Learning +2

Improving GAN Training with Probability Ratio Clipping and Sample Reweighting

1 code implementation NeurIPS 2020 Yue Wu, Pan Zhou, Andrew Gordon Wilson, Eric P. Xing, Zhiting Hu

Despite success on a wide range of problems related to vision, generative adversarial networks (GANs) often suffer from inferior performance due to unstable training, especially for text generation.

Image Generation Style Transfer +1

Rethinking Parameter Counting in Deep Models: Effective Dimensionality Revisited

1 code implementation 4 Mar 2020 Wesley J. Maddox, Gregory Benton, Andrew Gordon Wilson

Neural networks appear to have mysterious generalization properties when using parameter counting as a proxy for complexity.
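
The replacement measure of complexity used in this line of work is, to my understanding, the effective dimensionality of the Hessian (or a related curvature matrix): with eigenvalues $\lambda_i$ and a regularization constant $\alpha > 0$,

$N_{\mathrm{eff}}(H, \alpha) = \sum_{i} \frac{\lambda_i}{\lambda_i + \alpha},$

which counts directions in parameter space that are well determined by the data, rather than counting raw parameters.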

Model Selection

Generalizing Convolutional Neural Networks for Equivariance to Lie Groups on Arbitrary Continuous Data

1 code implementation ICML 2020 Marc Finzi, Samuel Stanton, Pavel Izmailov, Andrew Gordon Wilson

The translation equivariance of convolutional layers enables convolutional neural networks to generalize well on image problems.

Translation

Bayesian Deep Learning and a Probabilistic Perspective of Generalization

1 code implementation NeurIPS 2020 Andrew Gordon Wilson, Pavel Izmailov

The key distinguishing property of a Bayesian approach is marginalization, rather than using a single setting of weights.
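
Concretely, marginalization here means the Bayesian model average over weights, which in practice is approximated by sampling:

$p(y \mid x, \mathcal{D}) = \int p(y \mid x, w)\, p(w \mid \mathcal{D})\, dw \;\approx\; \frac{1}{J} \sum_{j=1}^{J} p(y \mid x, w_j), \qquad w_j \sim q(w \mid \mathcal{D}),$

so deep ensembles and other multi-basin procedures can be read as coarse approximations to this integral rather than competitors to it.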

Gaussian Processes

The Case for Bayesian Deep Learning

no code implementations 29 Jan 2020 Andrew Gordon Wilson

(3) The structure of neural networks gives rise to a structured prior in function space, which reflects the inductive biases of neural networks that help them generalize.

Bayesian Inference

Towards understanding the true loss surface of deep neural networks using random matrix theory and iterative spectral methods

no code implementations ICLR 2020 Diego Granziol, Timur Garipov, Dmitry Vetrov, Stefan Zohren, Stephen Roberts, Andrew Gordon Wilson

This approach is an order of magnitude faster than state-of-the-art methods for spectral visualization, and can be generically used to investigate the spectral properties of matrices in deep learning.

Randomly Projected Additive Gaussian Processes for Regression

1 code implementation ICML 2020 Ian A. Delbridge, David S. Bindel, Andrew Gordon Wilson

Surprisingly, we find that as the number of random projections increases, the predictive performance of this approach quickly converges to the performance of a kernel operating on the original full dimensional inputs, over a wide range of data sets, even if we are projecting into a single dimension.

Gaussian Processes Small Data Image Classification

Semi-Supervised Learning with Normalizing Flows

2 code implementations ICML 2020 Pavel Izmailov, Polina Kirichenko, Marc Finzi, Andrew Gordon Wilson

Normalizing flows transform a latent distribution through an invertible neural network for a flexible and pleasingly simple approach to generative modelling, while preserving an exact likelihood.
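
The exact likelihood comes from the change-of-variables formula: for an invertible map $f$ sending data $x$ to latent $z = f(x)$,

$\log p_X(x) = \log p_Z\big(f(x)\big) + \log \left| \det \frac{\partial f(x)}{\partial x} \right|,$

and in the semi-supervised setting the latent density $p_Z$ can be taken to be a mixture with one component per class, so labelled and unlabelled points share the same exact-likelihood objective (my paraphrase of the construction).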

Semi-Supervised Image Classification Semi Supervised Text Classification +1

Function-Space Distributions over Kernels

1 code implementation NeurIPS 2019 Gregory W. Benton, Wesley J. Maddox, Jayson P. Salkey, Julio Albinati, Andrew Gordon Wilson

The resulting approach enables learning of rich representations, with support for any stationary kernel, uncertainty over the values of the kernel, and an interpretable specification of a prior directly over kernels, without requiring sophisticated initialization or manual intervention.

Gaussian Processes Representation Learning

BoTorch: A Framework for Efficient Monte-Carlo Bayesian Optimization

1 code implementation NeurIPS 2020 Maximilian Balandat, Brian Karrer, Daniel R. Jiang, Samuel Daulton, Benjamin Letham, Andrew Gordon Wilson, Eytan Bakshy

Bayesian optimization provides sample-efficient global optimization for a broad range of applications, including automatic machine learning, engineering, physics, and experimental design.

Bayesian Optimisation Experimental Design

Subspace Inference for Bayesian Deep Learning

1 code implementation 17 Jul 2019 Pavel Izmailov, Wesley J. Maddox, Polina Kirichenko, Timur Garipov, Dmitry Vetrov, Andrew Gordon Wilson

Bayesian inference was once a gold standard for learning with neural networks, providing accurate full predictive distributions and well calibrated uncertainty.

Bayesian Inference Image Classification +1

Simple Black-box Adversarial Attacks

3 code implementations ICLR 2019 Chuan Guo, Jacob R. Gardner, Yurong You, Andrew Gordon Wilson, Kilian Q. Weinberger

We propose an intriguingly simple method for the construction of adversarial images in the black-box setting.
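
My loose reading of the method, as a sketch: repeatedly pick a direction from a fixed orthonormal basis and step the image by $\pm\epsilon$ along it whenever doing so lowers the model's probability on the true class. The classifier and inputs below are stand-ins, not the paper's setup.

    import torch
    import torch.nn as nn

    def simple_blackbox_attack(prob_fn, x, label, eps=0.2, n_steps=500):
        """Greedy coordinate-wise attack: keep a +/- eps step only if it lowers p(true label)."""
        x = x.clone()
        best = prob_fn(x)[label]
        dims = torch.randperm(x.numel())[:n_steps]
        for d in dims:
            for sign in (eps, -eps):
                cand = x.clone()
                cand.view(-1)[d] += sign
                p = prob_fn(cand)[label]
                if p < best:
                    x, best = cand, p
                    break
        return x

    # toy usage with a stand-in "black box" classifier returning class probabilities
    net = nn.Sequential(nn.Linear(64, 10), nn.Softmax(dim=-1))
    prob_fn = lambda inp: net(inp.view(-1)).detach()
    x0 = torch.randn(8, 8)
    x_adv = simple_blackbox_attack(prob_fn, x0, label=3)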

SWALP: Stochastic Weight Averaging in Low-Precision Training

2 code implementations 26 Apr 2019 Guandao Yang, Tianyi Zhang, Polina Kirichenko, Junwen Bai, Andrew Gordon Wilson, Christopher De Sa

Low precision operations can provide scalability, memory savings, portability, and energy efficiency.

Practical Multi-fidelity Bayesian Optimization for Hyperparameter Tuning

no code implementations 12 Mar 2019 Jian Wu, Saul Toscano-Palmerin, Peter I. Frazier, Andrew Gordon Wilson

Nonetheless, for hyperparameter tuning in deep neural networks, the time required to evaluate the validation error for even a few hyperparameter settings remains a bottleneck.

A Simple Baseline for Bayesian Uncertainty in Deep Learning

7 code implementations NeurIPS 2019 Wesley Maddox, Timur Garipov, Pavel Izmailov, Dmitry Vetrov, Andrew Gordon Wilson

We propose SWA-Gaussian (SWAG), a simple, scalable, and general purpose approach for uncertainty representation and calibration in deep learning.
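
A heavily simplified, diagonal-only rendering of the idea (the full method also keeps a low-rank covariance term): track running first and second moments of the weights along the SGD trajectory, then sample Gaussian weight vectors at test time and average their predictions.

    import torch

    class DiagonalSWAG:
        """Collect running moments of a flattened weight vector; sample approximate posterior weights."""
        def __init__(self, n_params):
            self.n = 0
            self.mean = torch.zeros(n_params)
            self.sq_mean = torch.zeros(n_params)

        def collect(self, flat_weights):
            self.n += 1
            self.mean += (flat_weights - self.mean) / self.n
            self.sq_mean += (flat_weights ** 2 - self.sq_mean) / self.n

        def sample(self):
            var = torch.clamp(self.sq_mean - self.mean ** 2, min=1e-30)
            return self.mean + var.sqrt() * torch.randn_like(self.mean)

    # usage sketch: collect once per epoch late in training, then ensemble over sampled weights
    swag = DiagonalSWAG(n_params=1000)
    for _ in range(30):
        swag.collect(torch.randn(1000))   # stand-in for the current flattened model weights
    w = swag.sample()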

Bayesian Inference Transfer Learning

Scaling Gaussian Process Regression with Derivatives

1 code implementation NeurIPS 2018 David Eriksson, Kun Dong, Eric Hans Lee, David Bindel, Andrew Gordon Wilson

Gaussian processes (GPs) with derivatives are useful in many applications, including Bayesian optimization, implicit surface reconstruction, and terrain reconstruction.

Dimensionality Reduction Gaussian Processes +1

Change Surfaces for Expressive Multidimensional Changepoints and Counterfactual Prediction

no code implementations 28 Oct 2018 William Herlands, Daniel B. Neill, Hannes Nickisch, Andrew Gordon Wilson

We provide a model-agnostic formalization of change surfaces, illustrating how they can provide variable, heterogeneous, and non-monotonic rates of change across multiple dimensions.

GPyTorch: Blackbox Matrix-Matrix Gaussian Process Inference with GPU Acceleration

2 code implementations NeurIPS 2018 Jacob R. Gardner, Geoff Pleiss, David Bindel, Kilian Q. Weinberger, Andrew Gordon Wilson

Despite advances in scalable models, the inference tools used for Gaussian processes (GPs) have yet to fully capitalize on developments in computing hardware.
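
A minimal exact-GP regression sketch in the style of the GPyTorch introductory tutorial (class and method names as I recall them from the library's documentation; details may differ across versions):

    import torch
    import gpytorch

    class ExactGPModel(gpytorch.models.ExactGP):
        def __init__(self, train_x, train_y, likelihood):
            super().__init__(train_x, train_y, likelihood)
            self.mean_module = gpytorch.means.ConstantMean()
            self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

        def forward(self, x):
            return gpytorch.distributions.MultivariateNormal(
                self.mean_module(x), self.covar_module(x)
            )

    train_x = torch.linspace(0, 1, 100)
    train_y = torch.sin(train_x * 6.28) + 0.1 * torch.randn(100)

    likelihood = gpytorch.likelihoods.GaussianLikelihood()
    model = ExactGPModel(train_x, train_y, likelihood)
    mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)

    model.train(); likelihood.train()
    opt = torch.optim.Adam(model.parameters(), lr=0.1)
    for _ in range(50):
        opt.zero_grad()
        loss = -mll(model(train_x), train_y)   # maximize the marginal likelihood
        loss.backward(); opt.step()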

Gaussian Processes

There Are Many Consistent Explanations of Unlabeled Data: Why You Should Average

2 code implementations ICLR 2019 Ben Athiwaratkun, Marc Finzi, Pavel Izmailov, Andrew Gordon Wilson

Presently the most successful approaches to semi-supervised learning are based on consistency regularization, whereby a model is trained to be robust to small perturbations of its inputs and parameters.

Domain Adaptation Semi-Supervised Image Classification

Probabilistic FastText for Multi-Sense Word Embeddings

1 code implementation ACL 2018 Ben Athiwaratkun, Andrew Gordon Wilson, Anima Anandkumar

We introduce Probabilistic FastText, a new model for word embeddings that can capture multiple word senses, sub-word structure, and uncertainty information.

Word Embeddings Word Similarity

Hierarchical Density Order Embeddings

2 code implementations ICLR 2018 Ben Athiwaratkun, Andrew Gordon Wilson

By representing words with probability densities rather than point vectors, probabilistic word embeddings can capture rich and interpretable semantic information and uncertainty.

Lexical Entailment Word Embeddings

Gaussian Process Subset Scanning for Anomalous Pattern Detection in Non-iid Data

no code implementations 4 Apr 2018 William Herlands, Edward McFowland III, Andrew Gordon Wilson, Daniel B. Neill

We introduce methods for identifying anomalous patterns in non-iid data by combining Gaussian processes with novel log-likelihood ratio statistics and subset scanning techniques.

Gaussian Processes

Constant-Time Predictive Distributions for Gaussian Processes

1 code implementation ICML 2018 Geoff Pleiss, Jacob R. Gardner, Kilian Q. Weinberger, Andrew Gordon Wilson

One of the most compelling features of Gaussian process (GP) regression is its ability to provide well-calibrated posterior distributions.

Gaussian Processes

Averaging Weights Leads to Wider Optima and Better Generalization

14 code implementations 14 Mar 2018 Pavel Izmailov, Dmitrii Podoprikhin, Timur Garipov, Dmitry Vetrov, Andrew Gordon Wilson

Deep neural networks are typically trained by optimizing a loss function with an SGD variant, in conjunction with a decaying learning rate, until convergence.
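
The mechanism itself is easy to state (simplified here; the paper pairs it with a constant or cyclical learning rate schedule): keep a running average of the weights visited late in training and use that average at test time. The toy "training epoch" below is a placeholder.

    import copy
    import torch
    import torch.nn as nn

    def update_swa(swa_model, model, n_averaged):
        """Fold the current weights into the running average (incremental mean)."""
        for p_swa, p in zip(swa_model.parameters(), model.parameters()):
            p_swa.data += (p.data - p_swa.data) / (n_averaged + 1)
        return n_averaged + 1

    model = nn.Linear(4, 2)
    swa_model = copy.deepcopy(model)
    n = 0
    for epoch in range(5):
        with torch.no_grad():
            for p in model.parameters():
                p.add_(0.01 * torch.randn_like(p))   # stand-in for one epoch of SGD
        n = update_swa(swa_model, model, n)          # swa_model now holds the averaged weights

PyTorch also ships a similar utility in torch.optim.swa_utils (AveragedModel), if one prefers the built-in route.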

Image Classification Stochastic Optimization

Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs

10 code implementations NeurIPS 2018 Timur Garipov, Pavel Izmailov, Dmitrii Podoprikhin, Dmitry Vetrov, Andrew Gordon Wilson

The loss functions of deep neural networks are complex and their geometric properties are not well understood.

Product Kernel Interpolation for Scalable Gaussian Processes

1 code implementation 24 Feb 2018 Jacob R. Gardner, Geoff Pleiss, Ruihan Wu, Kilian Q. Weinberger, Andrew Gordon Wilson

Recent work shows that inference for Gaussian processes can be performed efficiently using iterative methods that rely only on matrix-vector multiplications (MVMs).

Gaussian Processes

Scalable Lévy Process Priors for Spectral Kernel Learning

1 code implementation 2 Feb 2018 Phillip A. Jang, Andrew E. Loeb, Matthew B. Davidow, Andrew Gordon Wilson

We propose a distribution over kernels formed by modelling a spectral mixture density with a Lévy process.

Gaussian Processes

Proceedings of NIPS 2017 Symposium on Interpretable Machine Learning

no code implementations 27 Nov 2017 Andrew Gordon Wilson, Jason Yosinski, Patrice Simard, Rich Caruana, William Herlands

This is the Proceedings of the NIPS 2017 Symposium on Interpretable Machine Learning, held in Long Beach, California, USA on December 7, 2017.

Interpretable Machine Learning

Scalable Log Determinants for Gaussian Process Kernel Learning

3 code implementations NeurIPS 2017 Kun Dong, David Eriksson, Hannes Nickisch, David Bindel, Andrew Gordon Wilson

For applications as varied as Bayesian neural networks, determinantal point processes, elliptical graphical models, and kernel learning for Gaussian processes (GPs), one must compute a log determinant of an $n \times n$ positive definite matrix, and its derivatives - leading to prohibitive $\mathcal{O}(n^3)$ computations.
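
The trick such methods build on, stated generically: the log determinant is a trace, and traces can be estimated from a handful of random probe vectors, with each quadratic form evaluated by Lanczos or Chebyshev approximations using only matrix-vector products,

$\log \det K = \operatorname{tr}(\log K) \;\approx\; \frac{1}{s} \sum_{i=1}^{s} z_i^{\top} \log(K)\, z_i, \qquad \mathbb{E}\big[z_i z_i^{\top}\big] = I,$

which brings the cost down to a modest number of MVMs instead of a full Cholesky factorization.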

Gaussian Processes Point Processes

Bayesian GAN

4 code implementations NeurIPS 2017 Yunus Saatchi, Andrew Gordon Wilson

Generative adversarial networks (GANs) can implicitly learn rich distributions over images, audio, and data which are hard to model with an explicit likelihood.

Multimodal Word Distributions

2 code implementations ACL 2017 Ben Athiwaratkun, Andrew Gordon Wilson

Word embeddings provide point representations of words containing useful semantic information.

Word Embeddings Word Similarity

Bayesian Optimization with Gradients

1 code implementation NeurIPS 2017 Jian Wu, Matthias Poloczek, Andrew Gordon Wilson, Peter I. Frazier

Bayesian optimization has been successful at global optimization of expensive-to-evaluate multimodal objective functions.

Proceedings of NIPS 2016 Workshop on Interpretable Machine Learning for Complex Systems

no code implementations 28 Nov 2016 Andrew Gordon Wilson, Been Kim, William Herlands

This is the Proceedings of the NIPS 2016 Workshop on Interpretable Machine Learning for Complex Systems, held in Barcelona, Spain on December 9, 2016.

Interpretable Machine Learning

Stochastic Variational Deep Kernel Learning

no code implementations NeurIPS 2016 Andrew Gordon Wilson, Zhiting Hu, Ruslan Salakhutdinov, Eric P. Xing

We propose a novel deep kernel learning model and stochastic variational inference procedure which generalizes deep kernel learning approaches to enable classification, multi-task learning, additive covariance structures, and stochastic gradient training.

Gaussian Processes General Classification +2

Deep Kernel Learning

4 code implementations 6 Nov 2015 Andrew Gordon Wilson, Zhiting Hu, Ruslan Salakhutdinov, Eric P. Xing

We introduce scalable deep kernels, which combine the structural properties of deep learning architectures with the non-parametric flexibility of kernel methods.
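
The construction, in one line: transform inputs with a deep network $g_w$ and place a standard base kernel on top, learning the network weights $w$ and kernel hyperparameters $\theta$ jointly through the GP marginal likelihood,

$k_{\mathrm{deep}}(x, x' \mid w, \theta) = k_{\mathrm{base}}\big(g_w(x),\, g_w(x') \mid \theta\big).$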

Gaussian Processes

Thoughts on Massively Scalable Gaussian Processes

3 code implementations 5 Nov 2015 Andrew Gordon Wilson, Christoph Dann, Hannes Nickisch

This multi-level circulant approximation allows one to unify the orthogonal computational benefits of fast Kronecker and Toeplitz approaches, and is significantly faster than either approach in isolation; 2) local kernel interpolation and inducing points to allow for arbitrarily located data inputs, and $O(1)$ test time predictions; 3) exploiting block-Toeplitz Toeplitz-block structure (BTTB), which enables fast inference and learning when multidimensional Kronecker structure is not present; and 4) projections of the input space to flexibly model correlated inputs and high dimensional data.

Gaussian Processes

The Human Kernel

no code implementations NeurIPS 2015 Andrew Gordon Wilson, Christoph Dann, Christopher G. Lucas, Eric P. Xing

Bayesian nonparametric models, such as Gaussian processes, provide a compelling framework for automatic statistical modelling: these models have a high degree of flexibility, and automatically calibrated complexity.

Gaussian Processes

Kernel Interpolation for Scalable Structured Gaussian Processes (KISS-GP)

1 code implementation 3 Mar 2015 Andrew Gordon Wilson, Hannes Nickisch

We introduce a new structured kernel interpolation (SKI) framework, which generalises and unifies inducing point methods for scalable Gaussian processes (GPs).
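
The SKI approximation, as I understand it: interpolate the kernel from a grid of $m$ inducing points $U$ using a sparse interpolation matrix $W$,

$K_{XX} \approx W K_{UU} W^{\top},$

so matrix-vector products inherit Kronecker/Toeplitz structure in $K_{UU}$ and sparsity in $W$, which is what makes the resulting GP inference (KISS-GP) close to linear in $n$.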

Gaussian Processes

A la Carte - Learning Fast Kernels

no code implementations 19 Dec 2014 Zichao Yang, Alexander J. Smola, Le Song, Andrew Gordon Wilson

Kernel methods have great promise for learning rich statistical representations of large modern datasets.

Student-t Processes as Alternatives to Gaussian Processes

no code implementations 18 Feb 2014 Amar Shah, Andrew Gordon Wilson, Zoubin Ghahramani

We investigate the Student-t process as an alternative to the Gaussian process as a nonparametric prior over functions.

Gaussian Processes Model Selection

Bayesian Inference for NMR Spectroscopy with Applications to Chemical Quantification

no code implementations 14 Feb 2014 Andrew Gordon Wilson, Yuting Wu, Daniel J. Holland, Sebastian Nowozin, Mick D. Mantle, Lynn F. Gladden, Andrew Blake

Nuclear magnetic resonance (NMR) spectroscopy exploits the magnetic properties of atomic nuclei to discover the structure, reaction state and chemical environment of molecules.

Bayesian Inference

GPatt: Fast Multidimensional Pattern Extrapolation with Gaussian Processes

no code implementations 20 Oct 2013 Andrew Gordon Wilson, Elad Gilboa, Arye Nehorai, John P. Cunningham

We introduce a new Bayesian nonparametric framework -- GPatt -- enabling automatic pattern extrapolation with Gaussian processes on large multidimensional datasets.

Gaussian Processes

Gaussian Process Kernels for Pattern Discovery and Extrapolation

1 code implementation 18 Feb 2013 Andrew Gordon Wilson, Ryan Prescott Adams

Gaussian processes are rich distributions over functions, which provide a Bayesian nonparametric approach to smoothing and interpolation.
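
The kernel introduced here models the spectral density as a Gaussian mixture; in one input dimension the resulting stationary kernel takes the form (up to notational conventions)

$k_{\mathrm{SM}}(\tau) = \sum_{q=1}^{Q} w_q \exp\!\big(-2\pi^2 \tau^2 v_q\big) \cos\!\big(2\pi \tau \mu_q\big),$

with mixture weights $w_q$, spectral means $\mu_q$, and spectral variances $v_q$, which lets a single kernel recover smooth, periodic, and quasi-periodic structure for pattern discovery and extrapolation.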

Gaussian Processes

Gaussian Process Regression Networks

1 code implementation 19 Oct 2011 Andrew Gordon Wilson, David A. Knowles, Zoubin Ghahramani

We introduce a new regression framework, Gaussian process regression networks (GPRN), which combines the structural properties of Bayesian neural networks with the non-parametric flexibility of Gaussian processes.

Gaussian Processes

Generalised Wishart Processes

no code implementations 31 Dec 2010 Andrew Gordon Wilson, Zoubin Ghahramani

We introduce a stochastic process with Wishart marginals: the generalised Wishart process (GWP).
