Search Results for author: Dmitry Vetrov

Found 84 papers, 51 papers with code

Involutive MCMC: One Way to Derive Them All

no code implementations ICML 2020 Kirill Neklyudov, Max Welling, Evgenii Egorov, Dmitry Vetrov

Markov Chain Monte Carlo (MCMC) is a computational approach to fundamental problems such as inference, integration, optimization, and simulation.

Where Do Large Learning Rates Lead Us?

no code implementations 29 Oct 2024 Ildus Sadrtdinov, Maxim Kodryan, Eduard Pokonechny, Ekaterina Lobacheva, Dmitry Vetrov

It is generally accepted that starting neural network training with large learning rates (LRs) improves generalization.

Guide-and-Rescale: Self-Guidance Mechanism for Effective Tuning-Free Real Image Editing

1 code implementation 2 Sep 2024 Vadim Titov, Madina Khalmatova, Alexandra Ivanova, Dmitry Vetrov, Aibek Alanov

In this work, we explore the self-guidance technique to preserve the overall structure of the input image and the appearance of its local regions that should not be edited.

Regularized Distribution Matching Distillation for One-step Unpaired Image-to-Image Translation

no code implementations 20 Jun 2024 Denis Rakitin, Ivan Shchekotov, Dmitry Vetrov

Diffusion distillation methods aim to compress diffusion models into efficient one-step generators while trying to preserve quality.

Image-to-Image Translation, Translation

Improving GFlowNets with Monte Carlo Tree Search

no code implementations 19 Jun 2024 Nikita Morozov, Daniil Tiapkin, Sergey Samsonov, Alexey Naumov, Dmitry Vetrov

Generative Flow Networks (GFlowNets) treat sampling from distributions over compositional discrete spaces as a sequential decision-making problem, training a stochastic policy to construct objects step by step.

Decision Making
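
As a rough illustration of the step-by-step construction described above, the sketch below samples a compositional object (a binary string) action by action from a stochastic policy. The fixed logit table is a hypothetical stand-in for the neural policy a real GFlowNet would learn.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_object(policy_logits, length=4):
    """Build a compositional object (here, a binary string) step by
    step, sampling each action from a stochastic policy."""
    state = []
    for t in range(length):
        logits = policy_logits[t]
        p = np.exp(logits) / np.exp(logits).sum()  # softmax over actions
        state.append(int(rng.choice(2, p=p)))      # append a 0 or a 1
    return tuple(state)

# Hypothetical per-step logits; a trained GFlowNet would produce these
# with a network conditioned on the partial object built so far.
print(sample_object(np.zeros((4, 2))))
```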

The Devil is in the Details: StyleFeatureEditor for Detail-Rich StyleGAN Inversion and High Quality Image Editing

1 code implementation CVPR 2024 Denis Bobkov, Vadim Titov, Aibek Alanov, Dmitry Vetrov

We compare our method with state-of-the-art encoding approaches, demonstrating that our model excels in reconstruction quality and can edit even challenging out-of-domain examples.

Neural Flow Diffusion Models: Learnable Forward Process for Improved Diffusion Modelling

no code implementations 19 Apr 2024 Grigory Bartosh, Dmitry Vetrov, Christian A. Naesseth

Conventional diffusion models typically rely on a fixed forward process, which implicitly defines complex marginal distributions over latent variables.

 Ranked #1 on Image Generation on ImageNet 64x64 (Bits per dim metric)

Image Generation
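
For context, the fixed forward process that this work generalizes is, in the standard DDPM-style case, a Gaussian with closed-form marginals; a minimal sketch, assuming the common linear beta schedule (an illustrative choice, not necessarily the paper's baseline):

```python
import numpy as np

def forward_marginal(x0, t, alphas_cumprod, rng=np.random.default_rng(0)):
    """Fixed Gaussian forward process: q(x_t | x_0) has mean
    sqrt(abar_t) * x0 and variance (1 - abar_t) * I."""
    abar = alphas_cumprod[t]
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(abar) * x0 + np.sqrt(1.0 - abar) * eps

betas = np.linspace(1e-4, 0.02, 1000)     # assumed linear schedule
alphas_cumprod = np.cumprod(1.0 - betas)
xt = forward_marginal(np.ones(8), t=500, alphas_cumprod=alphas_cumprod)
```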

HairFastGAN: Realistic and Robust Hair Transfer with a Fast Encoder-Based Approach

1 code implementation 1 Apr 2024 Maxim Nikolaev, Mikhail Kuznetsov, Dmitry Vetrov, Aibek Alanov

Our paper addresses the complex task of transferring a hairstyle from a reference image to an input photo for virtual hair try-on.

Large Learning Rates Improve Generalization: But How Large Are We Talking About?

no code implementations 19 Nov 2023 Ekaterina Lobacheva, Eduard Pockonechnyy, Maxim Kodryan, Dmitry Vetrov

Inspired by recent research that recommends starting neural networks training with large learning rates (LRs) to achieve the best generalization, we explore this hypothesis in detail.

Gradual Optimization Learning for Conformational Energy Minimization

1 code implementation 5 Nov 2023 Artem Tsypin, Leonid Ugadiarov, Kuzma Khrabrov, Alexander Telepov, Egor Rumiantsev, Alexey Skrynnik, Aleksandr I. Panov, Dmitry Vetrov, Elena Tutubalina, Artur Kadurin

Our results demonstrate that the neural network trained with GOLF performs on par with the oracle on a benchmark of diverse drug-like molecules using 50x less additional data.

Drug Discovery

Generative Flow Networks as Entropy-Regularized RL

1 code implementation 19 Oct 2023 Daniil Tiapkin, Nikita Morozov, Alexey Naumov, Dmitry Vetrov

We demonstrate how the task of learning a generative flow network can be efficiently redefined as an entropy-regularized RL problem with a specific reward and regularizer structure.

Neural Diffusion Models

no code implementations 12 Oct 2023 Grigory Bartosh, Dmitry Vetrov, Christian A. Naesseth

In this paper, we present Neural Diffusion Models (NDMs), a generalization of conventional diffusion models that enables defining and learning time-dependent non-linear transformations of data.

Ranked #3 on Image Generation on ImageNet 64x64 (Bits per dim metric)

Image Generation

UnDiff: Unsupervised Voice Restoration with Unconditional Diffusion Model

1 code implementation 1 Jun 2023 Anastasiia Iashchenko, Pavel Andreev, Ivan Shchekotov, Nicholas Babaev, Dmitry Vetrov

Once trained for unconditional speech waveform generation, it can be adapted to different tasks, including degradation inversion, neural vocoding, and source separation.

Bandwidth Extension

Differentiable Rendering with Reparameterized Volume Sampling

1 code implementation 21 Feb 2023 Nikita Morozov, Denis Rakitin, Oleg Desheulin, Dmitry Vetrov, Kirill Struminsky

To generate a pixel of a novel view, it marches a ray through the pixel and computes a weighted sum of radiance emitted from a dense set of ray points.

Novel View Synthesis
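
The weighted sum over ray points is the standard volume-rendering quadrature; a minimal NumPy sketch with toy densities and radiances:

```python
import numpy as np

def render_ray(sigmas, radiances, deltas):
    """Each ray point contributes its radiance with weight
    w_i = T_i * (1 - exp(-sigma_i * delta_i)), where T_i is the
    transmittance accumulated before point i."""
    alphas = 1.0 - np.exp(-sigmas * deltas)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas
    return (weights[:, None] * radiances).sum(axis=0)

sigmas = np.array([0.1, 0.5, 2.0])   # toy densities along the ray
radiances = np.eye(3)                # toy RGB radiance per ray point
deltas = np.full(3, 0.5)             # distances between samples
pixel = render_ray(sigmas, radiances, deltas)
```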

Entropic Neural Optimal Transport via Diffusion Processes

1 code implementation NeurIPS 2023 Nikita Gushchin, Alexander Kolesov, Alexander Korotin, Dmitry Vetrov, Evgeny Burnaev

We propose a novel neural algorithm for the fundamental problem of computing the entropic optimal transport (EOT) plan between continuous probability distributions which are accessible by samples.

HyperDomainNet: Universal Domain Adaptation for Generative Adversarial Networks

1 code implementation 17 Oct 2022 Aibek Alanov, Vadim Titov, Dmitry Vetrov

We apply this parameterization to the state-of-the-art domain adaptation methods and show that it has almost the same expressiveness as the full parameter space.

Universal Domain Adaptation

Training Scale-Invariant Neural Networks on the Sphere Can Happen in Three Regimes

1 code implementation 8 Sep 2022 Maxim Kodryan, Ekaterina Lobacheva, Maksim Nakhodnov, Dmitry Vetrov

In this work, we investigate the properties of training scale-invariant neural networks directly on the sphere using a fixed effective learning rate (ELR).
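
A minimal sketch of one way such training can look, assuming plain SGD with a fixed ELR followed by projection back onto the unit sphere (the paper's exact procedure may differ):

```python
import numpy as np

def spherical_sgd_step(w, grad, elr):
    """One assumed training step on the unit sphere: take a gradient
    step with a fixed effective learning rate, then renormalize."""
    w = w - elr * grad
    return w / np.linalg.norm(w)

rng = np.random.default_rng(0)
w = rng.standard_normal(10)
w /= np.linalg.norm(w)                  # start on the sphere
w = spherical_sgd_step(w, grad=rng.standard_normal(10), elr=0.1)
```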

FFC-SE: Fast Fourier Convolution for Speech Enhancement

1 code implementation 6 Apr 2022 Ivan Shchekotov, Pavel Andreev, Oleg Ivanov, Aibek Alanov, Dmitry Vetrov

The FFC operator allows employing large receptive field operations within early layers of the neural network.

Speech Enhancement
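
The large receptive field comes from the spectral branch of the FFC: by the convolution theorem, a pointwise product in the Fourier domain equals a circular convolution with a kernel as large as the input. A toy sketch of just that spectral idea (a real FFC block also has a local convolutional branch and nonlinearities):

```python
import numpy as np

def spectral_conv(x, w_freq):
    """Pointwise multiplication in the Fourier domain: a circular
    convolution with a full-size kernel, i.e. a global receptive
    field in a single operation."""
    X = np.fft.rfft2(x)
    return np.fft.irfft2(X * w_freq, s=x.shape)

x = np.random.default_rng(0).standard_normal((64, 64))
w_freq = np.ones((64, 33))    # identity filter; learned in practice
y = spectral_conv(x, w_freq)  # here y == x up to floating-point error
```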

HiFi++: a Unified Framework for Bandwidth Extension and Speech Enhancement

4 code implementations 24 Mar 2022 Pavel Andreev, Aibek Alanov, Oleg Ivanov, Dmitry Vetrov

Generative adversarial networks have recently demonstrated outstanding performance in neural vocoding, outperforming the best autoregressive and flow-based models.

Audio Generation, Bandwidth Extension

Machine Learning Methods for Spectral Efficiency Prediction in Massive MIMO Systems

no code implementations 29 Dec 2021 Evgeny Bobrov, Sergey Troshin, Nadezhda Chirkova, Ekaterina Lobacheva, Sviatoslav Panchenko, Dmitry Vetrov, Dmitry Kropotov

Channel decoding, channel detection, channel assessment, and resource management for wireless multiple-input multiple-output (MIMO) systems are all examples of problems where machine learning (ML) can be successfully applied.

BIG-bench Machine Learning, Management

Variational Autoencoders for Precoding Matrices with High Spectral Efficiency

no code implementations 23 Nov 2021 Evgeny Bobrov, Alexander Markov, Sviatoslav Panchenko, Dmitry Vetrov

In this paper, we consider the problem of finding precoding matrices with high spectral efficiency (SE) using a variational autoencoder (VAE).

Management, Vocal Bursts Intensity Prediction

Automating Control of Overestimation Bias for Reinforcement Learning

no code implementations 26 Oct 2021 Arsenii Kuznetsov, Alexander Grishin, Artem Tsypin, Arsenii Ashukha, Artur Kadurin, Dmitry Vetrov

Overestimation bias control techniques are used by the majority of high-performing off-policy reinforcement learning algorithms.

Continuous Control, Q-Learning +3

Quantization of Generative Adversarial Networks for Efficient Inference: a Methodological Study

no code implementations 31 Aug 2021 Pavel Andreev, Alexander Fritzler, Dmitry Vetrov

While quantization is well established for discriminative models, the performance of modern quantization techniques in application to GANs remains unclear.

Neural Network Compression, Quantization

Mean Embeddings with Test-Time Data Augmentation for Ensembling of Representations

no code implementations 15 Jun 2021 Arsenii Ashukha, Andrei Atanov, Dmitry Vetrov

Averaging predictions over a set of models (an ensemble) is widely used to improve predictive performance and uncertainty estimation of deep learning models.

Data Augmentation, Image Retrieval +3

Towards Practical Credit Assignment for Deep Reinforcement Learning

no code implementations 8 Jun 2021 Vyacheslav Alipov, Riley Simmons-Edler, Nikita Putintsev, Pavel Kalinin, Dmitry Vetrov

Credit assignment, the problem of measuring an action's influence on future rewards, is a fundamental problem in reinforcement learning.

Atari Games, reinforcement-learning +2

On Power Laws in Deep Ensembles

1 code implementation NeurIPS 2020 Ekaterina Lobacheva, Nadezhda Chirkova, Maxim Kodryan, Dmitry Vetrov

Ensembles of deep neural networks are known to achieve state-of-the-art performance in uncertainty estimation and lead to accuracy improvement.

Involutive MCMC: a Unifying Framework

no code implementations 30 Jun 2020 Kirill Neklyudov, Max Welling, Evgenii Egorov, Dmitry Vetrov

Markov Chain Monte Carlo (MCMC) is a computational approach to fundamental problems such as inference, integration, optimization, and simulation.

MARS: Masked Automatic Ranks Selection in Tensor Decompositions

1 code implementation 18 Jun 2020 Maxim Kodryan, Dmitry Kropotov, Dmitry Vetrov

Tensor decomposition methods have proven effective in various applications, including compression and acceleration of neural networks.

Tensor Decomposition

Deep Ensembles on a Fixed Memory Budget: One Wide Network or Several Thinner Ones?

no code implementations 14 May 2020 Nadezhda Chirkova, Ekaterina Lobacheva, Dmitry Vetrov

In this work, we consider a fixed memory budget setting and investigate what is more effective: training a single wide network, or performing a memory split, i.e., training an ensemble of several thinner networks with the same total number of parameters.
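
A back-of-envelope sketch of this setting with fully connected networks and assumed layer sizes: given the parameter budget of one wide network, find the hidden width at which an ensemble of four thinner networks fits the same budget.

```python
def mlp_params(widths):
    """Parameter count of a fully connected network (weights + biases)."""
    return sum(a * b + b for a, b in zip(widths, widths[1:]))

budget = mlp_params([784, 512, 512, 10])   # one wide network
# Largest hidden width for an ensemble of 4 thinner networks
# that stays within the same total parameter budget.
w = max(w for w in range(1, 512)
        if 4 * mlp_params([784, w, w, 10]) <= budget)
print(budget, w, 4 * mlp_params([784, w, w, 10]))
```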

Deterministic Decoding for Discrete Data in Variational Autoencoders

1 code implementation 4 Mar 2020 Daniil Polykovskiy, Dmitry Vetrov

Variational autoencoders are prominent generative models for modeling discrete data.

Decoder

Stochasticity in Neural ODEs: An Empirical Study

1 code implementation ICLR Workshop DeepDiffEq 2019 Viktor Oganesyan, Alexandra Volokhova, Dmitry Vetrov

Stochastic regularization of neural networks (e.g., dropout) is a widespread technique in deep learning that allows for better generalization.

Data Augmentation, Image Classification

Greedy Policy Search: A Simple Baseline for Learnable Test-Time Augmentation

1 code implementation 21 Feb 2020 Dmitry Molchanov, Alexander Lyzhov, Yuliya Molchanova, Arsenii Ashukha, Dmitry Vetrov

Test-time data augmentation, i.e., averaging the predictions of a machine learning model across multiple augmented samples of data, is a widely used technique that improves predictive performance.

Data Augmentation, Image Classification
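
A minimal sketch of the averaging itself; the paper's contribution is a greedy policy search over which augmentations to apply, while the model and augmentation below are hypothetical stand-ins.

```python
import numpy as np

def tta_predict(model, x, augment, n_aug=8):
    """Average the model's class probabilities over several augmented
    copies of the same input."""
    return np.mean([model(augment(x)) for _ in range(n_aug)], axis=0)

# Hypothetical stand-ins for a probabilistic classifier and a noise
# augmentation; any real model/augmentation pair fits the same API.
rng = np.random.default_rng(0)
model = lambda x: np.array([0.7, 0.3]) if x.sum() > 0 else np.array([0.4, 0.6])
augment = lambda x: x + 0.1 * rng.standard_normal(x.shape)
print(tta_predict(model, np.ones(4), augment))
```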

Towards understanding the true loss surface of deep neural networks using random matrix theory and iterative spectral methods

no code implementations ICLR 2020 Diego Granziol, Timur Garipov, Dmitry Vetrov, Stefan Zohren, Stephen Roberts, Andrew Gordon Wilson

This approach is an order of magnitude faster than state-of-the-art methods for spectral visualization, and can be generically used to investigate the spectral properties of matrices in deep learning.

Low-variance Black-box Gradient Estimates for the Plackett-Luce Distribution

1 code implementation 22 Nov 2019 Artyom Gadetsky, Kirill Struminsky, Christopher Robinson, Novi Quadrianto, Dmitry Vetrov

Learning models with discrete latent variables using stochastic gradient descent remains a challenge due to the high variance of gradient estimates.
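
One relevant fact behind such estimators: the Plackett-Luce distribution over permutations can be sampled with the Gumbel trick, by perturbing each logit with Gumbel noise and sorting in decreasing order.

```python
import numpy as np

def sample_plackett_luce(logits, rng=np.random.default_rng(0)):
    """Sample a permutation from Plackett-Luce with scores exp(logits):
    perturb each logit with standard Gumbel noise and argsort."""
    gumbels = -np.log(-np.log(rng.uniform(size=logits.shape)))
    return np.argsort(-(logits + gumbels))

print(sample_plackett_luce(np.array([2.0, 0.5, 1.0, -1.0])))
```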

Structured Sparsification of Gated Recurrent Neural Networks

no code implementations 13 Nov 2019 Ekaterina Lobacheva, Nadezhda Chirkova, Alexander Markovich, Dmitry Vetrov

Recently, a lot of techniques were developed to sparsify the weights of neural networks and to remove networks' structural units, e.g., neurons.

Language Modelling, text-classification +1

A Prior of a Googol Gaussians: a Tensor Ring Induced Prior for Generative Models

1 code implementation NeurIPS 2019 Maksim Kuznetsov, Daniil Polykovskiy, Dmitry Vetrov, Alexander Zhebrak

Previous works show that a richer family of prior distributions may help to avoid the mode collapse problem in GANs and to improve the evidence lower bound in VAEs.

Audio Synthesis

Subspace Inference for Bayesian Deep Learning

1 code implementation 17 Jul 2019 Pavel Izmailov, Wesley J. Maddox, Polina Kirichenko, Timur Garipov, Dmitry Vetrov, Andrew Gordon Wilson

Bayesian inference was once a gold standard for learning with neural networks, providing accurate full predictive distributions and well calibrated uncertainty.

Bayesian Inference, Deep Learning +3

The Implicit Metropolis-Hastings Algorithm

1 code implementation NeurIPS 2019 Kirill Neklyudov, Evgenii Egorov, Dmitry Vetrov

For any implicit probabilistic model and a target distribution represented by a set of samples, implicit Metropolis-Hastings operates by learning a discriminator to estimate the density-ratio and then generating a chain of samples.

Image Generation
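
A toy sketch of the underlying mechanism, assuming an independence proposal and an exact density ratio; in the paper the ratio is estimated by a learned discriminator, and the acceptance test differs in detail.

```python
import numpy as np

def mh_chain(ratio, propose, x0, n_steps, rng):
    """Independence Metropolis-Hastings with a density ratio
    r(x) ~ p(x)/q(x): accept x' with probability min(1, r(x')/r(x))."""
    chain, x = [x0], x0
    for _ in range(n_steps):
        x_new = propose()
        if rng.uniform() < min(1.0, ratio(x_new) / ratio(x)):
            x = x_new
        chain.append(x)
    return np.array(chain)

# Toy check: target N(0, 1), proposal N(0, 2). Here the ratio is exact;
# the paper would replace it with a discriminator's estimate.
rng = np.random.default_rng(1)
ratio = lambda x: np.exp(-0.25 * x ** 2)   # exp(-x^2/2) / exp(-x^2/4)
samples = mh_chain(ratio, lambda: rng.normal(0.0, np.sqrt(2.0)), 0.0, 5000, rng)
```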

Importance Weighted Hierarchical Variational Inference

1 code implementation NeurIPS 2019 Artem Sobolev, Dmitry Vetrov

Variational inference is a powerful tool in the Bayesian modeling toolkit; however, its effectiveness is determined by the expressivity of the utilized variational distributions, i.e., their ability to match the true posterior distribution.

Variational Inference
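
The link between expressivity and posterior matching is the standard ELBO decomposition; for reference:

```latex
\log p(x) = \underbrace{\mathbb{E}_{q(z)}\big[\log p(x, z) - \log q(z)\big]}_{\text{ELBO}}
          + \mathrm{KL}\big(q(z) \,\|\, p(z \mid x)\big)
```

Maximizing the ELBO over q therefore shrinks the KL term, and a more expressive variational family can drive that gap closer to zero.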

User-Controllable Multi-Texture Synthesis with Generative Adversarial Networks

no code implementations 9 Apr 2019 Aibek Alanov, Max Kochurov, Denis Volkhonskiy, Daniil Yashkov, Evgeny Burnaev, Dmitry Vetrov

We propose a novel multi-texture synthesis model based on generative adversarial networks (GANs) with a user-controllable mechanism.

Descriptive Texture Synthesis

A Simple Baseline for Bayesian Uncertainty in Deep Learning

9 code implementations NeurIPS 2019 Wesley Maddox, Timur Garipov, Pavel Izmailov, Dmitry Vetrov, Andrew Gordon Wilson

We propose SWA-Gaussian (SWAG), a simple, scalable, and general purpose approach for uncertainty representation and calibration in deep learning.

Bayesian Inference, Deep Learning +2

Bayesian Sparsification of Gated Recurrent Neural Networks

1 code implementation NIPS Workshop CDNNRIA 2018 Ekaterina Lobacheva, Nadezhda Chirkova, Dmitry Vetrov

Bayesian methods have been successfully applied to sparsify weights of neural networks and to remove structural units from the networks, e.g., neurons.

ReSet: Learning Recurrent Dynamic Routing in ResNet-like Neural Networks

no code implementations 11 Nov 2018 Iurii Kemaev, Daniil Polykovskiy, Dmitry Vetrov

Neural networks are powerful machine learning tools that show outstanding performance in Computer Vision, Natural Language Processing, and Artificial Intelligence.

Image Classification

Variational Dropout via Empirical Bayes

1 code implementation 1 Nov 2018 Valery Kharitonov, Dmitry Molchanov, Dmitry Vetrov

We study the Automatic Relevance Determination procedure applied to deep neural networks.

Bayesian Compression for Natural Language Processing

3 code implementations EMNLP 2018 Nadezhda Chirkova, Ekaterina Lobacheva, Dmitry Vetrov

In natural language processing, many tasks are successfully solved with recurrent neural networks, but such models have a huge number of parameters.

Metropolis-Hastings view on variational inference and adversarial training

no code implementations ICLR 2019 Kirill Neklyudov, Evgenii Egorov, Pavel Shvechikov, Dmitry Vetrov

From this point of view, the problem of constructing a sampler can be reduced to the question: how to choose a proposal for the MH algorithm?

Bayesian Inference, Variational Inference

The Deep Weight Prior

2 code implementations ICLR 2019 Andrei Atanov, Arsenii Ashukha, Kirill Struminsky, Dmitry Vetrov, Max Welling

Bayesian inference is known to provide a general framework for incorporating prior knowledge or specific properties into machine learning models via carefully choosing a prior distribution.

Bayesian Inference, Variational Inference

Pairwise Augmented GANs with Adversarial Reconstruction Loss

no code implementations ICLR 2019 Aibek Alanov, Max Kochurov, Daniil Yashkov, Dmitry Vetrov

We experimentally demonstrate that our model generates samples and reconstructions of quality competitive with the state of the art on MNIST, CIFAR10, and CelebA, and achieves good quantitative results on CIFAR10.

Doubly Semi-Implicit Variational Inference

no code implementations 5 Oct 2018 Dmitry Molchanov, Valery Kharitonov, Artem Sobolev, Dmitry Vetrov

Unlike discriminator-based and kernel-based approaches to implicit variational inference, DSIVI optimizes a proper lower bound on the ELBO that is asymptotically exact.

Variational Inference

Conditional Generators of Words Definitions

1 code implementation ACL 2018 Artyom Gadetsky, Ilya Yakubovskiy, Dmitry Vetrov

We explore the recently introduced definition modeling technique, which provides a tool for evaluating different distributed vector representations of words through modeling their dictionary definitions.

Variational Autoencoder with Arbitrary Conditioning

3 code implementations ICLR 2019 Oleg Ivanov, Michael Figurnov, Dmitry Vetrov

We propose a single neural probabilistic model based on a variational autoencoder that can be conditioned on an arbitrary subset of observed features and then sample the remaining features in "one shot".

Diversity, Image Inpainting +1

Averaging Weights Leads to Wider Optima and Better Generalization

17 code implementations 14 Mar 2018 Pavel Izmailov, Dmitrii Podoprikhin, Timur Garipov, Dmitry Vetrov, Andrew Gordon Wilson

Deep neural networks are typically trained by optimizing a loss function with an SGD variant, in conjunction with a decaying learning rate, until convergence.

Ranked #76 on Image Classification on CIFAR-100 (using extra training data)

Image Classification, Stochastic Optimization
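
The core of the method is a running average of SGD iterates collected along the training trajectory; a minimal scalar sketch:

```python
def swa_update(swa_w, w, n_averaged):
    """Running average of collected iterates: swa <- (swa*n + w)/(n+1)."""
    return (swa_w * n_averaged + w) / (n_averaged + 1)

# Toy trajectory of a single scalar weight, collected once per epoch.
trajectory = [1.0, 0.6, 0.9, 0.7]
swa_w = trajectory[0]
for n, w in enumerate(trajectory[1:], start=1):
    swa_w = swa_update(swa_w, w, n)
print(swa_w)  # ~0.8, the mean of the collected iterates
```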

Variance Networks: When Expectation Does Not Meet Your Expectations

2 code implementations ICLR 2019 Kirill Neklyudov, Dmitry Molchanov, Arsenii Ashukha, Dmitry Vetrov

Ordinary stochastic neural networks mostly rely on the expected values of their weights to make predictions, whereas the induced noise is mostly used to capture the uncertainty, prevent overfitting and slightly boost the performance through test-time averaging.

Efficient Exploration, Reinforcement Learning +1

Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs

11 code implementations NeurIPS 2018 Timur Garipov, Pavel Izmailov, Dmitrii Podoprikhin, Dmitry Vetrov, Andrew Gordon Wilson

The loss functions of deep neural networks are complex and their geometric properties are not well understood.

Uncertainty Estimation via Stochastic Batch Normalization

1 code implementation 13 Feb 2018 Andrei Atanov, Arsenii Ashukha, Dmitry Molchanov, Kirill Neklyudov, Dmitry Vetrov

In this work, we investigate the Batch Normalization technique and propose a probabilistic interpretation of it.

Probabilistic Adaptive Computation Time

no code implementations 1 Dec 2017 Michael Figurnov, Artem Sobolev, Dmitry Vetrov

We present a probabilistic model with discrete latent variables that control the computation time in deep learning models such as ResNets and LSTMs.

Bayesian Sparsification of Recurrent Neural Networks

2 code implementations 31 Jul 2017 Ekaterina Lobacheva, Nadezhda Chirkova, Dmitry Vetrov

Recurrent neural networks show state-of-the-art results in many text analysis tasks but often require a lot of memory to store their weights.

Language Modelling, Sentiment Analysis

Structured Bayesian Pruning via Log-Normal Multiplicative Noise

5 code implementations NeurIPS 2017 Kirill Neklyudov, Dmitry Molchanov, Arsenii Ashukha, Dmitry Vetrov

In the paper, we propose a new Bayesian model that takes into account the computational structure of neural networks and provides structured sparsity, e.g., removes neurons and/or convolutional channels in CNNs.

Variational Dropout Sparsifies Deep Neural Networks

15 code implementations ICML 2017 Dmitry Molchanov, Arsenii Ashukha, Dmitry Vetrov

We explore a recently proposed Variational Dropout technique that provided an elegant Bayesian interpretation to Gaussian Dropout.

Sparse Learning
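
The Gaussian dropout being reinterpreted is multiplicative noise on the weights; a minimal sketch, with a per-weight alpha as in variational dropout (where alpha growing without bound effectively prunes the weight):

```python
import numpy as np

def gaussian_dropout(theta, alpha, rng=np.random.default_rng(0)):
    """Multiplicative Gaussian noise on weights:
    w = theta * (1 + sqrt(alpha) * eps), eps ~ N(0, 1),
    so w ~ N(theta, alpha * theta^2)."""
    eps = rng.standard_normal(np.shape(theta))
    return theta * (1.0 + np.sqrt(alpha) * eps)

theta = np.array([0.5, -1.2, 0.0, 2.0])
alpha = np.array([0.1, 0.5, 10.0, 0.01])  # per-weight noise levels
w = gaussian_dropout(theta, alpha)
```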

Spatially Adaptive Computation Time for Residual Networks

1 code implementation CVPR 2017 Michael Figurnov, Maxwell D. Collins, Yukun Zhu, Li Zhang, Jonathan Huang, Dmitry Vetrov, Ruslan Salakhutdinov

This paper proposes a deep learning architecture, based on Residual Networks, that dynamically adjusts the number of executed layers for different regions of the image.

Classification, Computational Efficiency +7
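
A sketch of the generic ACT-style halting rule that such architectures apply (in SACT, independently at each spatial position); the halting scores would come from a small learned branch.

```python
def act_depth(halting_scores, eps=0.01):
    """Execute layers until the cumulative halting score reaches
    1 - eps, then stop; returns the number of layers executed."""
    total = 0.0
    for depth, h in enumerate(halting_scores, start=1):
        total += h
        if total >= 1.0 - eps:
            return depth
    return len(halting_scores)

print(act_depth([0.1, 0.2, 0.4, 0.5, 0.9]))  # halts after 4 layers
```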

Robust Variational Inference

no code implementations 28 Nov 2016 Michael Figurnov, Kirill Struminsky, Dmitry Vetrov

Variational inference is a powerful tool for approximate inference.

Variational Inference

Ultimate tensorization: compressing convolutional and FC layers alike

2 code implementations 10 Nov 2016 Timur Garipov, Dmitry Podoprikhin, Alexander Novikov, Dmitry Vetrov

Convolutional neural networks excel in image recognition tasks, but this comes at the cost of high computational and memory complexity.

PerforatedCNNs: Acceleration through Elimination of Redundant Convolutions

2 code implementations NeurIPS 2016 Michael Figurnov, Aijan Ibraimova, Dmitry Vetrov, Pushmeet Kohli

We propose a novel approach to reduce the computational cost of evaluation of convolutional neural networks, a factor that has hindered their deployment in low-power devices such as mobile phones.

Breaking Sticks and Ambiguities with Adaptive Skip-gram

3 code implementations 25 Feb 2015 Sergey Bartunov, Dmitry Kondrashkin, Anton Osokin, Dmitry Vetrov

The recently proposed Skip-gram model is a powerful method for learning high-dimensional word representations that capture rich semantic relationships between words.

Word Sense Induction

Submodular relaxation for inference in Markov random fields

1 code implementation 15 Jan 2015 Anton Osokin, Dmitry Vetrov

In this paper we address the problem of finding the most probable state of a discrete Markov random field (MRF), also known as the MRF energy minimization problem.
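
For reference, the problem in its standard pairwise form (notation assumed): find the labeling minimizing the MRF energy

```latex
E(\mathbf{x}) = \sum_{i \in \mathcal{V}} \theta_i(x_i)
              + \sum_{(i,j) \in \mathcal{E}} \theta_{ij}(x_i, x_j),
\qquad
\mathbf{x}^{\star} = \operatorname*{arg\,min}_{\mathbf{x}} E(\mathbf{x}),
```

where unary terms score individual variables, pairwise terms score label agreement along graph edges, and minimizing the energy corresponds to finding the most probable state.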

Multi-utility Learning: Structured-output Learning with Multiple Annotation-specific Loss Functions

no code implementations 23 Jun 2014 Roman Shapovalov, Dmitry Vetrov, Anton Osokin, Pushmeet Kohli

Structured-output learning is a challenging problem, particularly because of the difficulty of obtaining large datasets of fully labelled instances for training.

Image Segmentation, Segmentation +2

Spatial Inference Machines

no code implementations CVPR 2013 Roman Shapovalov, Dmitry Vetrov, Pushmeet Kohli

Experimental results show that the spatial dependencies learned by our method significantly improve the accuracy of segmentation.

Segmentation, Semantic Segmentation
