2 code implementations • 15 Dec 2022 • Lucas Beyer, Pavel Izmailov, Alexander Kolesnikov, Mathilde Caron, Simon Kornblith, Xiaohua Zhai, Matthias Minderer, Michael Tschannen, Ibrahim Alabdulmohsin, Filip Pavetic
Vision Transformers convert images to sequences by slicing them into patches.
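As an illustration of that patchify step, here is a minimal NumPy sketch (the function name, patch size, and shapes are illustrative, not taken from the paper's code):

```python
import numpy as np

def patchify(image, patch_size=16):
    """Slice an (H, W, C) image into a sequence of flattened patches.

    Assumes H and W are divisible by patch_size; names are illustrative.
    """
    h, w, c = image.shape
    patches = image.reshape(h // patch_size, patch_size,
                            w // patch_size, patch_size, c)
    patches = patches.transpose(0, 2, 1, 3, 4)               # (H/P, W/P, P, P, C)
    return patches.reshape(-1, patch_size * patch_size * c)  # (num_patches, P*P*C)

# Example: a 224x224 RGB image becomes a sequence of 196 tokens of length 768.
tokens = patchify(np.zeros((224, 224, 3)), patch_size=16)
assert tokens.shape == (196, 768)
```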
1 code implementation • 20 Oct 2022 • Pavel Izmailov, Polina Kirichenko, Nate Gruver, Andrew Gordon Wilson
Deep classifiers are known to rely on spurious features: patterns which are correlated with the target on the training data but not inherently relevant to the learning problem, such as the image backgrounds when classifying the foregrounds.
2 code implementations • 6 Apr 2022 • Polina Kirichenko, Pavel Izmailov, Andrew Gordon Wilson
Neural network classifiers can largely rely on simple spurious features, such as backgrounds, to make predictions.
Ranked #1 on Out-of-Distribution Generalization on UrbanCars
1 code implementation • 30 Mar 2022 • Sanyam Kapoor, Wesley J. Maddox, Pavel Izmailov, Andrew Gordon Wilson
In Bayesian regression, we often use a Gaussian observation model, where we control the level of aleatoric uncertainty with a noise variance parameter.
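Concretely, that observation model is

$$ y_i = f(x_i) + \varepsilon_i, \qquad \varepsilon_i \sim \mathcal{N}(0, \sigma^2), $$

where the single noise-variance parameter $\sigma^2$ sets the level of aleatoric uncertainty.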
1 code implementation • 23 Feb 2022 • Sanae Lotfi, Pavel Izmailov, Gregory Benton, Micah Goldblum, Andrew Gordon Wilson
We provide a partial remedy through a conditional marginal likelihood, which we show is more aligned with generalization, and practically valuable for large-scale hyperparameter learning, such as in deep kernel learning.
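One standard way to write such a conditional marginal likelihood (the notation here is illustrative): for data split into $\mathcal{D}_{1:m}$ and $\mathcal{D}_{m+1:n}$ and a model $\mathcal{M}$,

$$ p(\mathcal{D}_{m+1:n} \mid \mathcal{D}_{1:m}, \mathcal{M}) \;=\; \frac{p(\mathcal{D}_{1:n} \mid \mathcal{M})}{p(\mathcal{D}_{1:m} \mid \mathcal{M})}, $$

i.e. the marginal likelihood of the held-back points after conditioning on the first $m$.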
1 code implementation • NeurIPS 2021 • Pavel Izmailov, Patrick Nicholson, Sanae Lotfi, Andrew Gordon Wilson
Approximate Bayesian inference for neural networks is considered a robust alternative to standard training, often providing good performance on out-of-distribution data.
1 code implementation • NeurIPS 2021 • Samuel Stanton, Pavel Izmailov, Polina Kirichenko, Alexander A. Alemi, Andrew Gordon Wilson
Knowledge distillation is a popular technique for training a small student network to emulate a larger teacher model, such as an ensemble of networks.
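A common soft-target formulation of that distillation objective looks like the following sketch (a standard recipe, not necessarily the exact objective studied in the paper; the temperature and shapes are illustrative):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """KL divergence between temperature-softened teacher and student
    predictive distributions (standard soft-target distillation)."""
    t = temperature
    log_p_student = F.log_softmax(student_logits / t, dim=-1)
    p_teacher = F.softmax(teacher_logits / t, dim=-1)
    # F.kl_div expects log-probabilities as the first argument.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * t * t

# Example with random logits for a batch of 8 examples and 10 classes.
student, teacher = torch.randn(8, 10), torch.randn(8, 10)
loss = distillation_loss(student, teacher)
```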
3 code implementations • 29 Apr 2021 • Pavel Izmailov, Sharad Vikram, Matthew D. Hoffman, Andrew Gordon Wilson
The posterior over Bayesian neural network (BNN) parameters is extremely high-dimensional and non-convex.
1 code implementation • 22 Oct 2020 • Gregory Benton, Marc Finzi, Pavel Izmailov, Andrew Gordon Wilson
Invariances to translations have imbued convolutional neural networks with powerful generalization properties.
1 code implementation • NeurIPS 2020 • Polina Kirichenko, Pavel Izmailov, Andrew Gordon Wilson
Detecting out-of-distribution (OOD) data is crucial for robust machine learning systems.
2 code implementations • ICML 2020 • Marc Finzi, Samuel Stanton, Pavel Izmailov, Andrew Gordon Wilson
The translation equivariance of convolutional layers enables convolutional neural networks to generalize well on image problems.
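That equivariance can be stated in one line: if $T_v$ translates an input by $v$ and $k$ is a convolution kernel, then

$$ (T_v x) * k \;=\; T_v\,(x * k), $$

so shifting the input simply shifts the resulting feature map.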
1 code implementation • NeurIPS 2020 • Andrew Gordon Wilson, Pavel Izmailov
The key distinguishing property of a Bayesian approach is marginalization, rather than using a single setting of weights.
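In symbols, the Bayesian predictive distribution marginalizes over the posterior rather than committing to a single weight vector $\hat{w}$:

$$ p(y \mid x, \mathcal{D}) \;=\; \int p(y \mid x, w)\, p(w \mid \mathcal{D})\, dw . $$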
2 code implementations • ICML 2020 • Pavel Izmailov, Polina Kirichenko, Marc Finzi, Andrew Gordon Wilson
Normalizing flows transform a latent distribution through an invertible neural network for a flexible and pleasingly simple approach to generative modelling, while preserving an exact likelihood.
Semi-Supervised Image Classification • Semi-Supervised Text Classification
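The exact likelihood mentioned above is the change-of-variables formula: for an invertible network $f$ mapping a latent $z \sim p_Z$ to data $x = f(z)$,

$$ \log p_X(x) \;=\; \log p_Z\!\big(f^{-1}(x)\big) \;+\; \log \left| \det \frac{\partial f^{-1}(x)}{\partial x} \right| . $$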
1 code implementation • 17 Jul 2019 • Pavel Izmailov, Wesley J. Maddox, Polina Kirichenko, Timur Garipov, Dmitry Vetrov, Andrew Gordon Wilson
Bayesian inference was once a gold standard for learning with neural networks, providing accurate full predictive distributions and well calibrated uncertainty.
7 code implementations • NeurIPS 2019 • Wesley Maddox, Timur Garipov, Pavel Izmailov, Dmitry Vetrov, Andrew Gordon Wilson
We propose SWA-Gaussian (SWAG), a simple, scalable, and general purpose approach for uncertainty representation and calibration in deep learning.
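As a rough illustration of the idea, the sketch below keeps running first and second moments of the weights to form a diagonal Gaussian over parameters (illustrative only; the full SWAG method also maintains a low-rank covariance component, and this is not the authors' released code):

```python
import torch

class DiagonalSWAG:
    """Running first and second moments of the weights, used to form a
    diagonal Gaussian posterior approximation over parameters."""

    def __init__(self, model):
        self.n = 0
        self.mean = [p.detach().clone().zero_() for p in model.parameters()]
        self.sq_mean = [p.detach().clone().zero_() for p in model.parameters()]

    def collect(self, model):
        """Update the running moments with the current weights."""
        self.n += 1
        for m, s, p in zip(self.mean, self.sq_mean, model.parameters()):
            m += (p.detach() - m) / self.n
            s += (p.detach() ** 2 - s) / self.n

    def sample(self, model, scale=1.0):
        """Load a sampled weight vector from the fitted Gaussian into the model."""
        for m, s, p in zip(self.mean, self.sq_mean, model.parameters()):
            var = torch.clamp(s - m ** 2, min=1e-30)
            p.data.copy_(m + scale * var.sqrt() * torch.randn_like(m))
```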
2 code implementations • ICLR 2019 • Ben Athiwaratkun, Marc Finzi, Pavel Izmailov, Andrew Gordon Wilson
Presently the most successful approaches to semi-supervised learning are based on consistency regularization, whereby a model is trained to be robust to small perturbations of its inputs and parameters.
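A minimal instance of such a consistency penalty, assuming a generic classifier and a simple Gaussian input perturbation (the specific perturbation schemes studied in the paper differ):

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, x, noise_std=0.1):
    """Penalize the change in predictions under a small input perturbation,
    one simple instance of consistency regularization."""
    with torch.no_grad():
        p_clean = F.softmax(model(x), dim=-1)          # reference predictions
    p_noisy = F.softmax(model(x + noise_std * torch.randn_like(x)), dim=-1)
    return F.mse_loss(p_noisy, p_clean)
```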
14 code implementations • 14 Mar 2018 • Pavel Izmailov, Dmitrii Podoprikhin, Timur Garipov, Dmitry Vetrov, Andrew Gordon Wilson
Deep neural networks are typically trained by optimizing a loss function with an SGD variant, in conjunction with a decaying learning rate, until convergence.
Ranked #73 on Image Classification on CIFAR-100
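This entry's paper (stochastic weight averaging) departs from that convention by averaging the weights visited along the SGD trajectory; the sketch below shows the idea with a step-decayed learning rate and a late-start running average (hyperparameters and names are illustrative, not the paper's settings or code):

```python
import torch

def train_with_averaging(model, loader, loss_fn, epochs=100, swa_start=75):
    """SGD with a step-decayed learning rate, plus a running average of the
    weights visited late in training (a sketch of stochastic weight averaging)."""
    opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
    sched = torch.optim.lr_scheduler.StepLR(opt, step_size=30, gamma=0.1)
    avg, n_avg = None, 0
    for epoch in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
        sched.step()
        if epoch >= swa_start:                     # start averaging late in training
            n_avg += 1
            params = [p.detach().clone() for p in model.parameters()]
            avg = params if avg is None else [a + (p - a) / n_avg
                                              for a, p in zip(avg, params)]
    return avg
```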
10 code implementations • NeurIPS 2018 • Timur Garipov, Pavel Izmailov, Dmitrii Podoprikhin, Dmitry Vetrov, Andrew Gordon Wilson
The loss functions of deep neural networks are complex and their geometric properties are not well understood.
2 code implementations • 5 Jan 2018 • Alexander Novikov, Pavel Izmailov, Valentin Khrulkov, Michael Figurnov, Ivan Oseledets
Tensor Train decomposition is used across many branches of machine learning.
Mathematical Software • Numerical Analysis
1 code implementation • 19 Oct 2017 • Pavel Izmailov, Alexander Novikov, Dmitry Kropotov
We propose a method (TT-GP) for approximate inference in Gaussian Process (GP) models.
no code implementations • 18 Nov 2016 • Pavel Izmailov, Dmitry Kropotov
However, the new lower bound depends on $O(m^2)$ variational parameters, which makes optimization challenging for large $m$. In this work we develop a new approach for training inducing input GP models for classification problems.
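For context on that count: with $m$ inducing points, a full Gaussian variational posterior over the inducing values is

$$ q(\mathbf{u}) = \mathcal{N}(\boldsymbol{\mu}, \boldsymbol{\Sigma}), \qquad \boldsymbol{\mu} \in \mathbb{R}^{m}, \ \ \boldsymbol{\Sigma} \in \mathbb{R}^{m \times m}, $$

so the number of free variational parameters grows as $m + m(m+1)/2 = O(m^2)$.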