Search Results for author: Kirill Neklyudov

Found 15 papers, 11 papers with code

Involutive MCMC: One Way to Derive Them All

no code implementations ICML 2020 Kirill Neklyudov, Max Welling, Evgenii Egorov, Dmitry Vetrov

Markov Chain Monte Carlo (MCMC) is a computational approach to fundamental problems such as inference, integration, optimization, and simulation.

Diffusion Models as Constrained Samplers for Optimization with Unknown Constraints

no code implementations 28 Feb 2024 Lingkai Kong, Yuanqi Du, Wenhao Mu, Kirill Neklyudov, Valentin De Bortoli, Haorui Wang, Dongxia Wu, Aaron Ferber, Yi-An Ma, Carla P. Gomes, Chao Zhang

To constrain the optimization process to the data manifold, we reformulate the original optimization problem as a sampling problem from the product of the Boltzmann distribution defined by the objective function and the data distribution learned by the diffusion model.
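
The snippet above casts constrained optimization as sampling from the product of a Boltzmann distribution and a learned data distribution. A minimal sketch of sampling from such a product with Langevin dynamics, assuming a hypothetical pretrained score function `data_score(x)` (an estimate of grad log p_data) and a differentiable objective `f`; this is an illustrative reading, not the paper's algorithm:

```python
import torch

def product_langevin(x0, f, data_score, temperature=1.0, step=1e-3, n_steps=1000):
    """Langevin sampling from the product exp(-f(x) / T) * p_data(x).

    f: differentiable objective defining the Boltzmann factor.
    data_score: hypothetical helper returning an estimate of grad log p_data(x),
    e.g. from a pretrained diffusion / score model.
    """
    x = x0.clone()
    for _ in range(n_steps):
        x = x.detach().requires_grad_(True)
        grad_f, = torch.autograd.grad(f(x).sum(), x)
        # score of the product distribution: -grad f / T + grad log p_data
        score = -grad_f / temperature + data_score(x)
        x = x + step * score + (2.0 * step) ** 0.5 * torch.randn_like(x)
    return x.detach()
```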

A Computational Framework for Solving Wasserstein Lagrangian Flows

1 code implementation 16 Oct 2023 Kirill Neklyudov, Rob Brekelmans, Alexander Tong, Lazar Atanackovic, Qiang Liu, Alireza Makhzani

The dynamical formulation of optimal transport can be extended through various choices of the underlying geometry ($\textit{kinetic energy}$) and the regularization of density paths ($\textit{potential energy}$).
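
A schematic form of this dynamical (Benamou-Brenier-style) objective, with a kinetic term and an optional potential term regularizing the density path; the notation and sign conventions below are my own paraphrase, not taken from the paper:

\[
\inf_{\rho_t,\, v_t} \int_0^1 \Big( \underbrace{\textstyle\int \tfrac{1}{2}\,\lVert v_t(x)\rVert^2\, \rho_t(x)\, dx}_{\text{kinetic energy}} \;-\; \underbrace{U(\rho_t)}_{\text{potential energy}} \Big)\, dt
\quad \text{s.t.} \quad \partial_t \rho_t + \nabla\!\cdot(\rho_t v_t) = 0,
\]

with the endpoint marginals $\rho_0$ and $\rho_1$ held fixed.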

Quantum HyperNetworks: Training Binary Neural Networks in Quantum Superposition

2 code implementations 19 Jan 2023 Juan Carrasquilla, Mohamed Hibat-Allah, Estelle Inack, Alireza Makhzani, Kirill Neklyudov, Graham W. Taylor, Giacomo Torlai

Binary neural networks, i.e., neural networks whose parameters and activations are constrained to only two possible values, offer a compelling avenue for the deployment of deep learning models on energy- and memory-limited devices.

Combinatorial Optimization
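
For context, a standard way to train networks with weights constrained to {-1, +1} is a straight-through estimator; the sketch below shows only that generic building block, not the paper's quantum-superposition training procedure:

```python
import torch
import torch.nn as nn

class BinarizedLinear(nn.Module):
    """Linear layer whose weights are binarized to {-1, +1} in the forward pass,
    with gradients passed straight through to the real-valued weights."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(0.01 * torch.randn(out_features, in_features))

    def forward(self, x):
        w_bin = torch.sign(self.weight)
        # straight-through estimator: forward uses sign(w), backward treats it as identity
        w = self.weight + (w_bin - self.weight).detach()
        return x @ w.t()
```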

Action Matching: Learning Stochastic Dynamics from Samples

1 code implementation 13 Oct 2022 Kirill Neklyudov, Rob Brekelmans, Daniel Severo, Alireza Makhzani

Learning the continuous dynamics of a system from snapshots of its temporal marginals is a problem which appears throughout natural sciences and machine learning, including in quantum systems, single-cell biological data, and generative modeling.

Colorization, Super-Resolution
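
One generic way to learn such dynamics is to fit a time-dependent scalar potential whose spatial gradient acts as a velocity field, using samples from the marginals at different times. The sketch below is my paraphrase of an Action-Matching-style objective; its exact functional form should be treated as an assumption, not the paper's definition:

```python
import torch

def snapshot_dynamics_loss(s, x0, x1, xt, t):
    """Variational-style loss for learning dynamics from marginal snapshots.

    s(x, t): scalar potential (per-sample output of shape (batch,)); its spatial
    gradient plays the role of the velocity field. x0 ~ q_0, x1 ~ q_1, and
    (xt, t) are samples from intermediate marginals, with t of shape (batch, 1).
    The form of this objective is an assumption (my paraphrase), not the paper's.
    """
    xt = xt.detach().requires_grad_(True)
    t = t.detach().requires_grad_(True)
    st = s(xt, t)
    grad_x, ds_dt = torch.autograd.grad(st.sum(), (xt, t), create_graph=True)
    interior = (0.5 * (grad_x ** 2).sum(dim=-1) + ds_dt.squeeze(-1)).mean()
    boundary = s(x0, torch.zeros(len(x0), 1)).mean() - s(x1, torch.ones(len(x1), 1)).mean()
    return interior + boundary
```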

Particle Dynamics for Learning EBMs

1 code implementation 26 Nov 2021 Kirill Neklyudov, Priyank Jaini, Max Welling

We accomplish this by viewing the evolution of the modeling distribution as (i) the evolution of the energy function, and (ii) the evolution of the samples from this distribution along some vector field.
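
A generic example of "samples evolving along a vector field" is Langevin dynamics on the energy; the sketch below is only that textbook update, shown for illustration, and the paper's particle dynamics may differ:

```python
import torch

def langevin_particles(energy, x, step=1e-2, n_steps=100):
    """Evolve particles x along the stochastic gradient flow of an energy-based
    model: a plain (unadjusted) Langevin update, for illustration only."""
    for _ in range(n_steps):
        x = x.detach().requires_grad_(True)
        grad, = torch.autograd.grad(energy(x).sum(), x)
        x = x - step * grad + (2.0 * step) ** 0.5 * torch.randn_like(x)
    return x.detach()
```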

Deterministic Gibbs Sampling via Ordinary Differential Equations

1 code implementation 18 Jun 2021 Kirill Neklyudov, Roberto Bondesan, Max Welling

Deterministic dynamics is an essential part of many MCMC algorithms, e.g.

Orbital MCMC

1 code implementation 15 Oct 2020 Kirill Neklyudov, Max Welling

Markov Chain Monte Carlo (MCMC) algorithms ubiquitously employ complex deterministic transformations to generate proposal points that are then filtered by the Metropolis-Hastings-Green (MHG) test.
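
A stripped-down sketch of such an MHG accept/reject step with a deterministic, invertible proposal x' = f(x), assuming a hypothetical helper `log_det_jac(x)` that returns log|det df/dx|; in practice the map is combined with auxiliary variables or involutions, and the paper's orbital construction is more elaborate:

```python
import torch

def mhg_step(log_prob, f, log_det_jac, x):
    """One Metropolis-Hastings-Green step with a deterministic proposal.

    Accept x' = f(x) with probability min(1, p(x') / p(x) * |det df/dx|);
    a simplified illustration, not the paper's full algorithm.
    """
    x_new = f(x)
    log_alpha = log_prob(x_new) - log_prob(x) + log_det_jac(x)
    if torch.log(torch.rand(())) < log_alpha:
        return x_new
    return x
```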

Involutive MCMC: a Unifying Framework

no code implementations 30 Jun 2020 Kirill Neklyudov, Max Welling, Evgenii Egorov, Dmitry Vetrov

Markov Chain Monte Carlo (MCMC) is a computational approach to fundamental problems such as inference, integration, optimization, and simulation.

The Implicit Metropolis-Hastings Algorithm

1 code implementation NeurIPS 2019 Kirill Neklyudov, Evgenii Egorov, Dmitry Vetrov

For any implicit probabilistic model and a target distribution represented by a set of samples, implicit Metropolis-Hastings operates by learning a discriminator to estimate the density-ratio and then generating a chain of samples.

Image Generation
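
A minimal sketch of the accept/reject step this snippet describes, assuming an independent proposal from the implicit model and a hypothetical `ratio(x)` that estimates the density ratio p_target(x) / q_model(x), e.g. d(x) / (1 - d(x)) for a trained discriminator d:

```python
import torch

def implicit_mh_step(ratio, sample_proposal, x):
    """Metropolis-Hastings step with an implicit (independent) proposal.

    ratio: learned estimate of p_target(x) / q_model(x);
    sample_proposal: draws a candidate from the implicit model.
    A simplified illustration of the general idea.
    """
    x_new = sample_proposal()
    log_alpha = torch.log(ratio(x_new)) - torch.log(ratio(x))
    if torch.log(torch.rand(())) < log_alpha:
        return x_new
    return x
```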

Metropolis-Hastings view on variational inference and adversarial training

no code implementations ICLR 2019 Kirill Neklyudov, Evgenii Egorov, Pavel Shvechikov, Dmitry Vetrov

From this point of view, the problem of constructing a sampler can be reduced to the question: how to choose a proposal for the MH algorithm?

Bayesian Inference, Variational Inference

Variance Networks: When Expectation Does Not Meet Your Expectations

2 code implementations ICLR 2019 Kirill Neklyudov, Dmitry Molchanov, Arsenii Ashukha, Dmitry Vetrov

Ordinary stochastic neural networks mostly rely on the expected values of their weights to make predictions, whereas the induced noise is mostly used to capture the uncertainty, prevent overfitting and slightly boost the performance through test-time averaging.

Efficient Exploration, Reinforcement Learning (RL)
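
A minimal sketch of the "variance layer" idea described above: weights are sampled with zero mean, so all information is carried by the learned variances (an illustrative reading, not the paper's exact parameterization):

```python
import torch
import torch.nn as nn

class VarianceLinear(nn.Module):
    """Linear layer with weights w ~ N(0, sigma^2): the mean is fixed at zero
    and only the per-weight log-variances are learned."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.log_sigma = nn.Parameter(torch.full((out_features, in_features), -3.0))

    def forward(self, x):
        # fresh zero-mean weights each forward pass; predictions rely on test-time averaging
        w = torch.exp(self.log_sigma) * torch.randn_like(self.log_sigma)
        return x @ w.t()
```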

Uncertainty Estimation via Stochastic Batch Normalization

1 code implementation 13 Feb 2018 Andrei Atanov, Arsenii Ashukha, Dmitry Molchanov, Kirill Neklyudov, Dmitry Vetrov

In this work, we investigate the Batch Normalization technique and propose its probabilistic interpretation.
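
A sketch of how such a probabilistic view can be used for uncertainty estimation: average several stochastic forward passes in which the normalization statistics are resampled. The helper `sample_stats(model)` below is hypothetical (e.g. drawing per-layer means/variances from statistics observed on training mini-batches); this illustrates the general idea, not the paper's exact procedure:

```python
import torch

def stochastic_bn_predict(model, x, sample_stats, n_samples=20):
    """Monte Carlo prediction with resampled batch-normalization statistics."""
    preds = []
    with torch.no_grad():
        for _ in range(n_samples):
            sample_stats(model)  # hypothetical: randomize BN means / variances in-place
            preds.append(torch.softmax(model(x), dim=-1))
    return torch.stack(preds).mean(dim=0)
```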

Structured Bayesian Pruning via Log-Normal Multiplicative Noise

5 code implementations NeurIPS 2017 Kirill Neklyudov, Dmitry Molchanov, Arsenii Ashukha, Dmitry Vetrov

In the paper, we propose a new Bayesian model that takes into account the computational structure of neural networks and provides structured sparsity, e.g. removes neurons and/or convolutional channels in CNNs.
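
A minimal sketch of structured sparsity via multiplicative log-normal noise on whole units (neurons or channels), with pruning guided by a signal-to-noise ratio; the threshold below is an arbitrary illustrative value, and this is a simplified reading of the idea rather than the paper's model:

```python
import torch
import torch.nn as nn

class LogNormalNoise(nn.Module):
    """Per-unit multiplicative noise theta ~ LogNormal(mu, sigma^2)."""
    def __init__(self, n_units):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(n_units))
        self.log_sigma = nn.Parameter(torch.full((n_units,), -2.0))

    def forward(self, x):
        eps = torch.randn_like(self.mu)
        theta = torch.exp(self.mu + torch.exp(self.log_sigma) * eps)  # log-normal sample
        return x * theta

    def keep_mask(self, snr_threshold=1.0):
        # SNR of a log-normal variable: E[theta] / std[theta] = 1 / sqrt(exp(sigma^2) - 1)
        sigma2 = torch.exp(2.0 * self.log_sigma)
        snr = 1.0 / torch.sqrt(torch.exp(sigma2) - 1.0)
        return snr > snr_threshold  # units below the threshold can be pruned
```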
