Search Results for author: Arsenii Ashukha

Found 12 papers, 8 papers with code

Automating Control of Overestimation Bias for Continuous Reinforcement Learning

no code implementations · 26 Oct 2021 · Arsenii Kuznetsov, Alexander Grishin, Artem Tsypin, Arsenii Ashukha, Dmitry Vetrov

Bias correction techniques are used by most high-performing methods for off-policy reinforcement learning.

Continuous Control
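The bias correction the snippet refers to can be illustrated with the best-known fixed scheme, clipped double Q-learning (the TD3-style baseline; the paper itself is about *automating* the strength of such corrections, which this sketch does not do). Function and argument names are mine:

```python
import numpy as np

def clipped_double_q_target(q1_next, q2_next, reward, done, gamma=0.99):
    """Clipped double-Q target: the minimum of two independent critic
    estimates is biased low, which offsets the overestimation that plain
    Q-learning targets suffer from."""
    min_q = np.minimum(q1_next, q2_next)
    return reward + gamma * (1.0 - done) * min_q

# Two transitions: the second one is terminal, so it gets no bootstrap term.
target = clipped_double_q_target(q1_next=np.array([1.0, 2.0]),
                                 q2_next=np.array([1.5, 1.8]),
                                 reward=np.array([0.1, 0.1]),
                                 done=np.array([0.0, 1.0]))
```

Here `target` is `[0.1 + 0.99 * min(1.0, 1.5), 0.1]`, i.e. the pessimistic critic wins on every non-terminal transition.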

Resolution-robust Large Mask Inpainting with Fourier Convolutions

2 code implementations · 15 Sep 2021 · Roman Suvorov, Elizaveta Logacheva, Anton Mashikhin, Anastasia Remizova, Arsenii Ashukha, Aleksei Silvestrov, Naejin Kong, Harshith Goka, Kiwoong Park, Victor Lempitsky

We find that one of the main reasons inpainting struggles with large missing areas is the lack of an effective receptive field in both the inpainting network and the loss function.

Image Inpainting · LAMA
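The Fourier convolutions in the title address the receptive-field problem: a convolution applied in the frequency domain lets every output pixel see the whole image in one layer. A minimal numpy sketch of the spectral path only (the actual Fast Fourier Convolution block in LaMa combines this with ordinary local convolutions; the names here are mine):

```python
import numpy as np

def spectral_conv2d(x, w_real, w_imag):
    """FFT -> per-frequency complex multiply -> inverse FFT. Because the
    weights act on global frequency components, the effective receptive
    field is the entire image, regardless of resolution."""
    spectrum = np.fft.rfft2(x)                    # (H, W//2 + 1) complex
    spectrum = spectrum * (w_real + 1j * w_imag)  # learned frequency weights
    return np.fft.irfft2(spectrum, s=x.shape)     # back to pixel space

h, w = 8, 8
x = np.random.randn(h, w)
w_real = np.ones((h, w // 2 + 1))  # identity weights: output == input
w_imag = np.zeros((h, w // 2 + 1))
y = spectral_conv2d(x, w_real, w_imag)
```

With identity weights the round trip reproduces the input, which is a quick sanity check that the transform pair is consistent.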

Mean Embeddings with Test-Time Data Augmentation for Ensembling of Representations

no code implementations · 15 Jun 2021 · Arsenii Ashukha, Andrei Atanov, Dmitry Vetrov

Averaging predictions over a set of models -- an ensemble -- is widely used to improve predictive performance and uncertainty estimation of deep learning models.

Data Augmentation · Image Retrieval · +1
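The basic ensembling scheme the snippet describes is just averaging per-model class probabilities; this sketch shows that step (the paper's contribution, averaging *embeddings* under test-time augmentation, builds on it). The toy models and names are illustrative:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_predict(models, x):
    """Average the class probabilities of several models: the ensemble's
    prediction is the mean over members, not a single model's output."""
    probs = np.stack([softmax(m(x)) for m in models])
    return probs.mean(axis=0)

# Two toy linear 'models' that disagree maximally on this input;
# the ensemble splits the difference.
models = [lambda x: x @ np.array([[1.0, 0.0], [0.0, 1.0]]),
          lambda x: x @ np.array([[0.0, 1.0], [1.0, 0.0]])]
x = np.array([[2.0, 0.0]])
p = ensemble_predict(models, x)   # maximally uncertain: both classes at 0.5
```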

Greedy Policy Search: A Simple Baseline for Learnable Test-Time Augmentation

1 code implementation · 21 Feb 2020 · Dmitry Molchanov, Alexander Lyzhov, Yuliya Molchanova, Arsenii Ashukha, Dmitry Vetrov

Test-time data augmentation -- averaging the predictions of a machine learning model across multiple augmented samples of data -- is a widely used technique that improves predictive performance.

Data Augmentation · Image Classification
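The averaging the snippet defines can be sketched in a few lines. Note this is the plain, fixed-policy version; the paper's greedy policy search *learns* which augmentations to average over, which this sketch does not attempt. Names are mine:

```python
import numpy as np

def tta_predict(model, x, augmentations):
    """Test-time augmentation: run the model on several augmented copies
    of the input and average the resulting predictions."""
    preds = [model(aug(x)) for aug in augmentations]
    return np.mean(preds, axis=0)

# Toy example: a 'model' that only looks at the left column becomes
# flip-invariant once we average over identity and horizontal flip.
model = lambda img: img[:, 0].astype(float)
augs = [lambda img: img,            # identity
        lambda img: img[:, ::-1]]   # horizontal flip
img = np.array([[0.0, 1.0], [0.0, 1.0]])
p = tta_predict(model, img, augs)   # average of left and right columns
```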

The Deep Weight Prior

2 code implementations · ICLR 2019 · Andrei Atanov, Arsenii Ashukha, Kirill Struminsky, Dmitry Vetrov, Max Welling

Bayesian inference is known to provide a general framework for incorporating prior knowledge or specific properties into machine learning models by carefully choosing a prior distribution.

Bayesian Inference · Variational Inference

Variance Networks: When Expectation Does Not Meet Your Expectations

2 code implementations · ICLR 2019 · Kirill Neklyudov, Dmitry Molchanov, Arsenii Ashukha, Dmitry Vetrov

Ordinary stochastic neural networks mostly rely on the expected values of their weights to make predictions, whereas the induced noise is mostly used to capture uncertainty, prevent overfitting, and slightly boost performance through test-time averaging.

Efficient Exploration

Uncertainty Estimation via Stochastic Batch Normalization

no code implementations · 13 Feb 2018 · Andrei Atanov, Arsenii Ashukha, Dmitry Molchanov, Kirill Neklyudov, Dmitry Vetrov

In this work, we investigate the Batch Normalization technique and propose a probabilistic interpretation of it.
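For reference, the deterministic building block the paper reinterprets probabilistically is standard batch normalization: standardize each feature by the batch statistics, then apply a learned affine transform. The stochastic variant treats those batch statistics as random at test time; this sketch shows only the plain operation:

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Plain Batch Normalization over a batch (axis 0): subtract the batch
    mean, divide by the batch standard deviation, then scale and shift."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

x = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
y = batch_norm(x, gamma=np.ones(2), beta=np.zeros(2))
# With gamma=1, beta=0, each column ends up with ~zero mean and unit variance.
```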

Structured Bayesian Pruning via Log-Normal Multiplicative Noise

5 code implementations · NeurIPS 2017 · Kirill Neklyudov, Dmitry Molchanov, Arsenii Ashukha, Dmitry Vetrov

In this paper, we propose a new Bayesian model that takes into account the computational structure of neural networks and provides structured sparsity, e.g., removes neurons and/or convolutional channels in CNNs.
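"Structured" sparsity means removing whole rows or channels rather than zeroing individual weights, so the pruned network is genuinely smaller and faster without sparse kernels. A minimal sketch; the norm threshold here is a stand-in for the paper's criterion (the signal-to-noise ratio of learned log-normal multiplicative noise), and the names are mine:

```python
import numpy as np

def structured_prune(W, threshold):
    """Drop entire output neurons (rows of W) whose weight norm falls
    below a threshold. The result is a dense, smaller matrix, which is
    the practical payoff of structured (vs. unstructured) sparsity."""
    keep = np.linalg.norm(W, axis=1) > threshold
    return W[keep], keep

W = np.array([[0.9, -1.1],      # strong neuron: kept
              [1e-4, -2e-4],    # near-dead neuron: removed entirely
              [0.7, 0.8]])      # kept
W_small, keep = structured_prune(W, threshold=0.01)
```

`W_small` is a dense 2x2 matrix: the middle neuron is gone, not merely zeroed.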

Variational Dropout Sparsifies Deep Neural Networks

12 code implementations · ICML 2017 · Dmitry Molchanov, Arsenii Ashukha, Dmitry Vetrov

We explore a recently proposed Variational Dropout technique that provided an elegant Bayesian interpretation to Gaussian Dropout.

Sparse Learning
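The Gaussian Dropout mentioned in the snippet multiplies activations by mean-one Gaussian noise instead of Bernoulli masks; variational dropout gives this a Bayesian reading and learns a separate noise level per weight, and weights whose learned noise grows very large are effectively pruned — the sparsification in the title. A sketch of the noise injection alone (not the variational training procedure):

```python
import numpy as np

def gaussian_dropout(x, alpha, rng):
    """Multiply activations by N(1, alpha) noise. The noise has mean one,
    so the layer is unbiased in expectation; alpha controls how strongly
    each unit is perturbed."""
    noise = rng.normal(loc=1.0, scale=np.sqrt(alpha), size=x.shape)
    return x * noise

rng = np.random.default_rng(0)
x = np.ones(100_000)
y = gaussian_dropout(x, alpha=0.25, rng=rng)
# Empirically, y has mean ~1.0 and standard deviation ~sqrt(0.25) = 0.5.
```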
