Search Results for author: Timur Garipov

Found 10 papers, 8 papers with code

Adversarial Support Alignment

1 code implementation ICLR 2022 Shangyuan Tong, Timur Garipov, Yang Zhang, Shiyu Chang, Tommi S. Jaakkola

Furthermore, we show that our approach can be viewed as a limit of existing notions of alignment by increasing transportation assignment tolerance.

Domain Adaptation

The Benefits of Pairwise Discriminators for Adversarial Training

1 code implementation 20 Feb 2020 Shangyuan Tong, Timur Garipov, Tommi Jaakkola

We provide sufficient conditions for local convergence; characterize the capacity balance that should guide the discriminator and generator choices; and construct examples of minimally sufficient discriminators.
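The snippet focuses on the theory, but the central object is a discriminator that acts on pairs of samples rather than single samples. Below is one plausible minimal reading of that idea, not the paper's exact objective; `pair_D`, the 2-D toy samples, and the (real, real) vs. (real, fake) labeling are illustrative assumptions.

```python
# Illustrative sketch only: a discriminator that scores pairs of samples,
# trained to separate same-distribution pairs from mixed pairs.
import torch
import torch.nn as nn

pair_D = nn.Sequential(nn.Linear(2 * 2, 64), nn.ReLU(), nn.Linear(64, 1))
bce = nn.BCEWithLogitsLoss()

def d_loss(real_a, real_b, fake):
    same = torch.cat([real_a, real_b], dim=1)    # (real, real) pair
    mixed = torch.cat([real_a, fake], dim=1)     # (real, fake) pair
    return (bce(pair_D(same), torch.ones(len(same), 1))
            + bce(pair_D(mixed), torch.zeros(len(mixed), 1)))

real_a, real_b, fake = torch.randn(8, 2), torch.randn(8, 2), torch.randn(8, 2)
loss = d_loss(real_a, real_b, fake)  # generator would minimize the mixed term
```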

Towards understanding the true loss surface of deep neural networks using random matrix theory and iterative spectral methods

no code implementations ICLR 2020 Diego Granziol, Timur Garipov, Dmitry Vetrov, Stefan Zohren, Stephen Roberts, Andrew Gordon Wilson

This approach is an order of magnitude faster than state-of-the-art methods for spectral visualization, and can be generically used to investigate the spectral properties of matrices in deep learning.
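The approach the snippet describes combines Hessian-vector products with an iterative (Lanczos-type) eigensolver, so the Hessian is never materialized. A minimal sketch of that idea, assuming a CPU PyTorch model; the `model`, `loss_fn`, and data arguments are placeholders, not the paper's code:

```python
# Estimate leading Hessian eigenvalues without forming the Hessian:
# autograd supplies Hessian-vector products, ARPACK's Lanczos does the rest.
import numpy as np
import torch
from scipy.sparse.linalg import LinearOperator, eigsh

def top_hessian_eigenvalues(model, loss_fn, data, target, k=10):
    params = [p for p in model.parameters() if p.requires_grad]
    n = sum(p.numel() for p in params)

    loss = loss_fn(model(data), target)
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat_grad = torch.cat([g.reshape(-1) for g in grads])

    def hvp(v):
        # Hessian-vector product: differentiate (grad . v) w.r.t. params.
        v_t = torch.as_tensor(v, dtype=flat_grad.dtype)
        hv = torch.autograd.grad(torch.dot(flat_grad, v_t), params,
                                 retain_graph=True)
        return torch.cat([h.reshape(-1)
                          for h in hv]).detach().numpy().astype(np.float64)

    op = LinearOperator((n, n), matvec=hvp, dtype=np.float64)
    return np.sort(eigsh(op, k=k, return_eigenvectors=False))
```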

Deep Curvature Suite

1 code implementation 20 Dec 2019 Diego Granziol, Xingchen Wan, Timur Garipov

We present the MLRG Deep Curvature suite, a PyTorch-based, open-source package for the analysis and visualisation of neural network curvature and loss landscapes.

Misconceptions

Subspace Inference for Bayesian Deep Learning

1 code implementation 17 Jul 2019 Pavel Izmailov, Wesley J. Maddox, Polina Kirichenko, Timur Garipov, Dmitry Vetrov, Andrew Gordon Wilson

Bayesian inference was once a gold standard for learning with neural networks, providing accurate full predictive distributions and well calibrated uncertainty.

Bayesian Inference · Image Classification · +2
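The paper's core idea is to perform Bayesian inference not over all weights but over the coordinates of a low-dimensional subspace of weight space, for instance one spanned by principal directions of the SGD trajectory. A minimal NumPy sketch of that construction; the function names and the SVD-based subspace choice are illustrative assumptions:

```python
# Build a low-dimensional subspace from SGD weight snapshots; Bayesian
# inference (MCMC, variational, etc.) then runs over the coordinates z.
import numpy as np

def build_subspace(snapshots, rank=5):
    """snapshots: (T, n) array of flattened weight vectors from SGD."""
    w_mean = snapshots.mean(axis=0)
    deviations = snapshots - w_mean
    # Top `rank` right singular vectors span the subspace.
    _, _, vt = np.linalg.svd(deviations, full_matrices=False)
    P = vt[:rank].T                      # (n, rank) projection basis
    return w_mean, P

def weights_from_z(w_mean, P, z):
    # Map subspace coordinates back to the full weight space.
    return w_mean + P @ z
```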

A Simple Baseline for Bayesian Uncertainty in Deep Learning

8 code implementations NeurIPS 2019 Wesley Maddox, Timur Garipov, Pavel Izmailov, Dmitry Vetrov, Andrew Gordon Wilson

We propose SWA-Gaussian (SWAG), a simple, scalable, and general purpose approach for uncertainty representation and calibration in deep learning.

Bayesian Inference · Transfer Learning
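As the abstract describes, SWAG fits a Gaussian to SGD iterates: an SWA-style running mean with a diagonal-plus-low-rank covariance estimated from deviations of the iterates. A simplified NumPy sketch of that recipe, not the released implementation; `collect` would be called once per snapshot of the flattened weights:

```python
# Simplified SWAG: running first/second moments plus a buffer of
# deviations for the low-rank covariance term.
import numpy as np

class SWAG:
    def __init__(self, n, max_rank=20):
        self.mean = np.zeros(n)
        self.sq_mean = np.zeros(n)
        self.deviations = []          # columns of the low-rank matrix D
        self.max_rank = max_rank
        self.count = 0

    def collect(self, w):
        self.count += 1
        self.mean += (w - self.mean) / self.count
        self.sq_mean += (w**2 - self.sq_mean) / self.count
        self.deviations.append(w - self.mean)
        self.deviations = self.deviations[-self.max_rank:]

    def sample(self):
        # Requires at least two collected snapshots.
        var = np.clip(self.sq_mean - self.mean**2, 1e-30, None)
        D = np.stack(self.deviations, axis=1)       # (n, K)
        z1 = np.random.randn(len(self.mean))
        z2 = np.random.randn(D.shape[1])
        # Diagonal plus low-rank Gaussian, as in the SWAG paper.
        return (self.mean
                + np.sqrt(var) * z1 / np.sqrt(2)
                + D @ z2 / np.sqrt(2 * (D.shape[1] - 1)))
```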

Averaging Weights Leads to Wider Optima and Better Generalization

15 code implementations 14 Mar 2018 Pavel Izmailov, Dmitrii Podoprikhin, Timur Garipov, Dmitry Vetrov, Andrew Gordon Wilson

Deep neural networks are typically trained by optimizing a loss function with an SGD variant, in conjunction with a decaying learning rate, until convergence.

Ranked #78 on Image Classification on CIFAR-100 (using extra training data)

Image Classification · Stochastic Optimization
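The averaging scheme this paper introduces (SWA) has since been adopted into PyTorch as `torch.optim.swa_utils`. A self-contained sketch on toy data; the model, learning rates, and the epoch at which averaging starts are placeholder choices:

```python
# Stochastic Weight Averaging with PyTorch's built-in utilities.
import torch
import torch.nn as nn
from torch.optim.swa_utils import AveragedModel, SWALR, update_bn

# Toy data and model so the sketch runs end to end.
X, Y = torch.randn(256, 10), torch.randn(256, 1)
loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(X, Y), batch_size=32)
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
criterion = nn.MSELoss()

optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
swa_model = AveragedModel(model)                 # keeps the running average
swa_scheduler = SWALR(optimizer, swa_lr=0.05)    # constant LR while averaging

for epoch in range(20):
    for x, y in loader:
        optimizer.zero_grad()
        criterion(model(x), y).backward()
        optimizer.step()
    if epoch >= 10:                  # begin averaging partway through training
        swa_model.update_parameters(model)
        swa_scheduler.step()

update_bn(loader, swa_model)         # refresh BatchNorm stats for the average
```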

Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs

10 code implementations NeurIPS 2018 Timur Garipov, Pavel Izmailov, Dmitrii Podoprikhin, Dmitry Vetrov, Andrew Gordon Wilson

The loss functions of deep neural networks are complex and their geometric properties are not well understood.
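The paper's mode-connectivity construction trains a parametric curve, e.g. a quadratic Bezier curve phi(t) = (1-t)^2 w1 + 2t(1-t) theta + t^2 w2, whose endpoints are two independently trained solutions and whose control point theta is optimized so the whole path stays in a low-loss region. A minimal sketch of evaluating such a curve; the function names are illustrative:

```python
# Quadratic Bezier curve between two weight vectors; theta is typically
# initialized at (w1 + w2) / 2 and trained by gradient descent on curve_loss.
import numpy as np

def bezier_point(w1, theta, w2, t):
    return (1 - t) ** 2 * w1 + 2 * t * (1 - t) * theta + t ** 2 * w2

def curve_loss(loss_fn, w1, theta, w2, n_points=10):
    # Grid estimate of the expected loss along the curve.
    ts = np.linspace(0.0, 1.0, n_points)
    return np.mean([loss_fn(bezier_point(w1, theta, w2, t)) for t in ts])
```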

Ultimate tensorization: compressing convolutional and FC layers alike

2 code implementations 10 Nov 2016 Timur Garipov, Dmitry Podoprikhin, Alexander Novikov, Dmitry Vetrov

Convolutional neural networks excel in image recognition tasks, but this comes at the cost of high computational and memory complexity.
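The compression here rests on the Tensor Train (TT) format: a layer's weight array is reshaped into a higher-order tensor and factored into a chain of small cores. A generic TT-SVD sketch, for illustration only; the paper's exact reshaping of convolutional and fully-connected weights differs:

```python
# Minimal TT-SVD: factor a d-way tensor into TT cores via repeated
# truncated SVDs of the unfolded tensor.
import numpy as np

def tt_svd(tensor, max_rank):
    shape = tensor.shape
    cores, rank = [], 1
    mat = tensor.reshape(rank * shape[0], -1)
    for k in range(len(shape) - 1):
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(s))
        cores.append(u[:, :r].reshape(rank, shape[k], r))
        mat = (np.diag(s[:r]) @ vt[:r]).reshape(r * shape[k + 1], -1)
        rank = r
    cores.append(mat.reshape(rank, shape[-1], 1))
    return cores

W = np.random.randn(16, 16, 16, 16)   # e.g. a 256x256 FC matrix reshaped 4-way
cores = tt_svd(W, max_rank=8)
print([c.shape for c in cores])        # chain of small (r_in, n_k, r_out) cores
```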
