Search Results for author: Kirill Neklyudov

Found 20 papers, 14 papers with code

Involutive MCMC: One Way to Derive Them All

no code implementations • ICML 2020 • Kirill Neklyudov, Max Welling, Evgenii Egorov, Dmitry Vetrov

Markov Chain Monte Carlo (MCMC) is a computational approach to fundamental problems such as inference, integration, optimization, and simulation.


Feynman-Kac Correctors in Diffusion: Annealing, Guidance, and Product of Experts

1 code implementation • 4 Mar 2025 • Marta Skreta, Tara Akhound-Sadegh, Viktor Ohanesian, Roberto Bondesan, Alán Aspuru-Guzik, Arnaud Doucet, Rob Brekelmans, Alexander Tong, Kirill Neklyudov

While score-based generative models are the model of choice across diverse domains, there are limited tools available for controlling inference-time behavior in a principled manner, e.g., for composing multiple pretrained models.

Text-to-Image Generation
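
The composition theme in the abstract suggests a natural baseline worth spelling out: since grad log(p1 * p2) = grad log p1 + grad log p2, summing scores targets the (unnormalized) product of experts. The sketch below shows only this naive composition with hypothetical stand-in score functions; the paper's Feynman-Kac corrections address inference-time control that naive score addition alone does not provide.

```python
import numpy as np

# Hypothetical stand-ins for two pretrained score networks.
def score_a(x, t):
    return -x            # score of a standard Gaussian

def score_b(x, t):
    return -(x - 1.0)    # score of a Gaussian centered at 1

def naive_product_score(x, t, scores=(score_a, score_b)):
    """Since grad log(p1 * p2) = grad log p1 + grad log p2, summing the
    individual scores targets the (unnormalized) product of experts."""
    return sum(s(x, t) for s in scores)

x = np.zeros(4)
print(naive_product_score(x, t=0.5))  # -> [1. 1. 1. 1.]
```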

The Superposition of Diffusion Models Using the Itô Density Estimator

no code implementations • 23 Dec 2024 • Marta Skreta, Lazar Atanackovic, Avishek Joey Bose, Alexander Tong, Kirill Neklyudov

The Cambrian explosion of easily accessible pre-trained diffusion models suggests a demand for methods that combine multiple different pre-trained diffusion models without incurring the significant computational burden of re-training a larger combined model.

Doob's Lagrangian: A Sample-Efficient Variational Approach to Transition Path Sampling

1 code implementation • 10 Oct 2024 • Yuanqi Du, Michael Plainer, Rob Brekelmans, Chenru Duan, Frank Noé, Carla P. Gomes, Alán Aspuru-Guzik, Kirill Neklyudov

Rare event sampling in dynamical systems is a fundamental problem arising in the natural sciences, which poses significant computational challenges due to an exponentially large space of trajectories.

Protein Folding

Meta Flow Matching: Integrating Vector Fields on the Wasserstein Manifold

no code implementations • 26 Aug 2024 • Lazar Atanackovic, Xi Zhang, Brandon Amos, Mathieu Blanchette, Leo J. Lee, Yoshua Bengio, Alexander Tong, Kirill Neklyudov

Flow-based models allow for learning these dynamics at the population level - they model the evolution of the entire distribution of samples.

Graph Neural Network
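
For context, here is a minimal sketch of the vanilla conditional flow-matching objective this line of work builds on; `v_theta` is a placeholder network, and the straight-line interpolation path is one common choice, not necessarily the paper's.

```python
import torch

def flow_matching_loss(v_theta, x0, x1):
    """Vanilla conditional flow matching: interpolate along a straight
    line x_t = (1 - t) * x0 + t * x1 and regress the network's velocity
    onto the constant target x1 - x0."""
    t = torch.rand(x0.shape[0], 1)
    x_t = (1 - t) * x0 + t * x1
    return ((v_theta(x_t, t) - (x1 - x0)) ** 2).mean()
```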

Efficient Evolutionary Search Over Chemical Space with Large Language Models

1 code implementation • 23 Jun 2024 • Haorui Wang, Marta Skreta, Cher-Tian Ser, Wenhao Gao, Lingkai Kong, Felix Strieth-Kalthoff, Chenru Duan, Yuchen Zhuang, Yue Yu, Yanqiao Zhu, Yuanqi Du, Alán Aspuru-Guzik, Kirill Neklyudov, Chao Zhang

Molecular discovery, when formulated as an optimization problem, presents significant computational challenges because optimization objectives can be non-differentiable.

Drug Design • Evolutionary Algorithms
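
Because the objective is non-differentiable, black-box evolutionary search applies; the loop below is a generic sketch, where `fitness` and `llm_mutate` are hypothetical placeholders (e.g., an LLM call that rewrites a SMILES string), not the paper's actual operators.

```python
import random

def evolve(population, fitness, llm_mutate, generations=10, n_parents=5):
    """Generic evolutionary search over a non-differentiable objective:
    keep the fittest candidates, let an LLM-driven mutation operator
    propose offspring in place of the rest."""
    for _ in range(generations):
        parents = sorted(population, key=fitness, reverse=True)[:n_parents]
        offspring = [llm_mutate(random.choice(parents))
                     for _ in range(len(population) - n_parents)]
        population = parents + offspring
    return max(population, key=fitness)
```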

Diffusion Models as Constrained Samplers for Optimization with Unknown Constraints

no code implementations • 28 Feb 2024 • Lingkai Kong, Yuanqi Du, Wenhao Mu, Kirill Neklyudov, Valentin De Bortoli, Dongxia Wu, Haorui Wang, Aaron Ferber, Yi-An Ma, Carla P. Gomes, Chao Zhang

To constrain the optimization process to the data manifold, we reformulate the original optimization problem as a sampling problem from the product of the Boltzmann distribution defined by the objective function and the data distribution learned by the diffusion model.
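
Under the product formulation stated above, the score of the target is the sum of the data score and the tempered negative objective gradient. Here is a minimal unadjusted Langevin sketch under that reading, with `score_data` and `grad_f` as placeholders; the paper's actual sampling procedure may differ.

```python
import numpy as np

def langevin_on_product(score_data, grad_f, x, temp=1.0, step=1e-3, n_steps=1000):
    """Unadjusted Langevin dynamics targeting
    p(x) ∝ p_data(x) * exp(-f(x) / temp),
    whose score is score_data(x) - grad_f(x) / temp."""
    for _ in range(n_steps):
        score = score_data(x) - grad_f(x) / temp
        x = x + step * score + np.sqrt(2.0 * step) * np.random.randn(*x.shape)
    return x
```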

A Computational Framework for Solving Wasserstein Lagrangian Flows

1 code implementation • 16 Oct 2023 • Kirill Neklyudov, Rob Brekelmans, Alexander Tong, Lazar Atanackovic, Qiang Liu, Alireza Makhzani

The dynamical formulation of optimal transport can be extended through various choices of the underlying geometry (kinetic energy) and the regularization of density paths (potential energy).
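
For reference, the standard Benamou-Brenier dynamical formulation being generalized, followed by the Lagrangian form with kinetic energy K and potential energy U; the notation is mine, matching the abstract's kinetic/potential reading.

```latex
W_2^2(\mu_0,\mu_1) \;=\; \min_{\rho_t,\, v_t} \int_0^1 \!\!\int \tfrac{1}{2}\,\|v_t(x)\|^2 \,\rho_t(x)\,dx\,dt
\quad \text{s.t.} \quad \partial_t \rho_t + \nabla\!\cdot(\rho_t v_t) = 0,

\min_{\rho_t,\, v_t} \int_0^1 \Big[\, K(\rho_t, v_t) - U(\rho_t) \,\Big]\, dt
\quad \text{s.t.} \quad \partial_t \rho_t + \nabla\!\cdot(\rho_t v_t) = 0 .
```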

Quantum HyperNetworks: Training Binary Neural Networks in Quantum Superposition

2 code implementations • 19 Jan 2023 • Juan Carrasquilla, Mohamed Hibat-Allah, Estelle Inack, Alireza Makhzani, Kirill Neklyudov, Graham W. Taylor, Giacomo Torlai

Binary neural networks, i.e., neural networks whose parameters and activations are constrained to only two possible values, offer a compelling avenue for the deployment of deep learning models on energy- and memory-limited devices.

Combinatorial Optimization
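
A minimal sketch of the forward pass such binarization implies; thresholding at zero is my assumption for illustration, and training (here, via quantum superposition over weight configurations) is out of scope.

```python
import numpy as np

def binarize(v):
    """Constrain values to the two levels {-1, +1}."""
    return np.where(v >= 0, 1.0, -1.0)

def binary_linear(x, w_continuous, b):
    """Linear layer with binary weights and binary activations."""
    w = binarize(w_continuous)
    return binarize(x @ w + b)
```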

Action Matching: Learning Stochastic Dynamics from Samples

1 code implementation • 13 Oct 2022 • Kirill Neklyudov, Rob Brekelmans, Daniel Severo, Alireza Makhzani

Learning the continuous dynamics of a system from snapshots of its temporal marginals is a problem which appears throughout natural sciences and machine learning, including in quantum systems, single-cell biological data, and generative modeling.

Colorization • Super-Resolution
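
As a hedged sketch of the idea: Action Matching fits a scalar potential s(x, t) whose spatial gradient transports samples between the observed marginals. The objective below follows my reading of the method (boundary terms at t = 0, 1 plus a kinetic-plus-time-derivative term over intermediate snapshots), with `s` a placeholder network.

```python
import torch

def action_matching_loss(s, x0, x1, xt, t):
    """Variational Action Matching objective (sketch):
    E_{q0}[s(x,0)] - E_{q1}[s(x,1)]
      + E_{t, q_t}[ 0.5 * ||grad_x s||^2 + ds/dt ]."""
    xt = xt.detach().requires_grad_(True)
    t = t.detach().requires_grad_(True)
    st = s(xt, t)
    grad_x = torch.autograd.grad(st.sum(), xt, create_graph=True)[0]
    ds_dt = torch.autograd.grad(st.sum(), t, create_graph=True)[0]
    boundary = (s(x0, torch.zeros(len(x0), 1)).mean()
                - s(x1, torch.ones(len(x1), 1)).mean())
    kinetic = 0.5 * grad_x.pow(2).sum(dim=-1) + ds_dt.squeeze(-1)
    return boundary + kinetic.mean()
```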

Particle Dynamics for Learning EBMs

1 code implementation • 26 Nov 2021 • Kirill Neklyudov, Priyank Jaini, Max Welling

We accomplish this by viewing the evolution of the modeling distribution as (i) the evolution of the energy function, and (ii) the evolution of the samples from this distribution along some vector field.
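
A minimal sketch of the two coupled evolutions the abstract describes, using a standard contrastive gradient for the energy and a Langevin move for the particles; the paper's specific vector field may differ, and `energy`/`opt` are placeholders.

```python
import torch

def coupled_step(energy, opt, data, particles, step=1e-2):
    """(i) evolve the energy function: one contrastive gradient step
    that lowers energy on data and raises it on model particles;
    (ii) evolve the particles along the energy's gradient field."""
    opt.zero_grad()
    loss = energy(data).mean() - energy(particles).mean()
    loss.backward()
    opt.step()

    particles = particles.detach().requires_grad_(True)
    (grad,) = torch.autograd.grad(energy(particles).sum(), particles)
    with torch.no_grad():
        particles = (particles - step * grad
                     + (2.0 * step) ** 0.5 * torch.randn_like(particles))
    return particles
```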

Deterministic Gibbs Sampling via Ordinary Differential Equations

1 code implementation • 18 Jun 2021 • Kirill Neklyudov, Roberto Bondesan, Max Welling

Deterministic dynamics is an essential part of many MCMC algorithms, e.g.

Orbital MCMC

1 code implementation • 15 Oct 2020 • Kirill Neklyudov, Max Welling

Markov Chain Monte Carlo (MCMC) algorithms ubiquitously employ complex deterministic transformations to generate proposal points that are then filtered by the Metropolis-Hastings-Green (MHG) test.
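
A minimal sketch of the MHG test for a deterministic proposal x' = f(x): when f is an involution (f(f(x)) = x), accepting with the Jacobian-corrected ratio below leaves the target invariant. `log_p`, `f`, and `log_abs_det_jac` are placeholders.

```python
import numpy as np

def mhg_step(x, log_p, f, log_abs_det_jac):
    """Metropolis-Hastings-Green test for a deterministic (involutive)
    proposal: accept x' = f(x) with probability
    min(1, p(x') / p(x) * |det J_f(x)|)."""
    x_prop = f(x)
    log_alpha = log_p(x_prop) - log_p(x) + log_abs_det_jac(x)
    return x_prop if np.log(np.random.rand()) < log_alpha else x
```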

Involutive MCMC: a Unifying Framework

no code implementations • 30 Jun 2020 • Kirill Neklyudov, Max Welling, Evgenii Egorov, Dmitry Vetrov

Markov Chain Monte Carlo (MCMC) is a computational approach to fundamental problems such as inference, integration, optimization, and simulation.

The Implicit Metropolis-Hastings Algorithm

1 code implementation • NeurIPS 2019 • Kirill Neklyudov, Evgenii Egorov, Dmitry Vetrov

For any implicit probabilistic model and a target distribution represented by a set of samples, implicit Metropolis-Hastings operates by learning a discriminator to estimate the density ratio and then generating a chain of samples.

Image Generation
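
For an independent proposal, the acceptance test this implies reduces to a ratio of learned density ratios: with r(x) ≈ p_target(x) / q_generator(x), estimated as d(x) / (1 - d(x)) from a trained discriminator d, a sketch looks like this; `sample_generator` and `density_ratio` are placeholders.

```python
import numpy as np

def implicit_mh_step(x, sample_generator, density_ratio):
    """Independence-sampler MH step: with r(x) = p(x) / q(x),
    the acceptance probability is min(1, r(x') / r(x))."""
    x_prop = sample_generator()
    alpha = min(1.0, density_ratio(x_prop) / density_ratio(x))
    return x_prop if np.random.rand() < alpha else x
```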

Metropolis-Hastings view on variational inference and adversarial training

no code implementations • ICLR 2019 • Kirill Neklyudov, Evgenii Egorov, Pavel Shvechikov, Dmitry Vetrov

From this point of view, the problem of constructing a sampler can be reduced to the question: how to choose a proposal for the MH algorithm?

Bayesian Inference • Variational Inference

Variance Networks: When Expectation Does Not Meet Your Expectations

2 code implementations • ICLR 2019 • Kirill Neklyudov, Dmitry Molchanov, Arsenii Ashukha, Dmitry Vetrov

Ordinary stochastic neural networks mostly rely on the expected values of their weights to make predictions, whereas the induced noise is mostly used to capture the uncertainty, prevent overfitting and slightly boost the performance through test-time averaging.

Efficient Exploration • Reinforcement Learning +1
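
A sketch of the kind of stochastic layer the abstract describes, with Gaussian weights and test-time averaging over weight samples; in the paper's extreme variance-only regime, the means would be fixed at zero (the parameterization here is illustrative).

```python
import torch

def stochastic_linear(x, w_mean, w_logvar, n_samples=10):
    """Linear layer with Gaussian weights w ~ N(w_mean, exp(w_logvar));
    the prediction averages over sampled weights at test time."""
    outs = [x @ (w_mean + torch.randn_like(w_mean) * (0.5 * w_logvar).exp())
            for _ in range(n_samples)]
    return torch.stack(outs).mean(dim=0)
```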

Uncertainty Estimation via Stochastic Batch Normalization

1 code implementation • 13 Feb 2018 • Andrei Atanov, Arsenii Ashukha, Dmitry Molchanov, Kirill Neklyudov, Dmitry Vetrov

In this work, we investigate the Batch Normalization technique and propose a probabilistic interpretation of it.
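
A hedged sketch of one way to read that interpretation: at inference, sample the batch statistics from surrogate distributions rather than plugging in fixed running averages. The Gaussian/log-normal parameterization below is my assumption, not necessarily the paper's.

```python
import torch

def stochastic_bn_inference(x, mu_loc, mu_scale, logvar_loc, logvar_scale, eps=1e-5):
    """Normalize with *sampled* batch statistics:
    mu ~ N(mu_loc, mu_scale^2), log-variance ~ N(logvar_loc, logvar_scale^2)."""
    mu = mu_loc + mu_scale * torch.randn_like(mu_loc)
    var = (logvar_loc + logvar_scale * torch.randn_like(logvar_loc)).exp()
    return (x - mu) / (var + eps).sqrt()
```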

Structured Bayesian Pruning via Log-Normal Multiplicative Noise

5 code implementations • NeurIPS 2017 • Kirill Neklyudov, Dmitry Molchanov, Arsenii Ashukha, Dmitry Vetrov

In the paper, we propose a new Bayesian model that takes into account the computational structure of neural networks and provides structured sparsity, e.g., removes neurons and/or convolutional channels in CNNs.
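
A sketch of the mechanism the abstract names: one log-normal multiplicative noise variable per neuron or channel, with units pruned by the noise's signal-to-noise ratio (for a log-normal, SNR = 1 / sqrt(exp(sigma^2) - 1)); the threshold and shapes are illustrative.

```python
import torch

def lognormal_multiplicative_noise(h, mu, log_sigma):
    """Multiply each unit's output by theta = exp(N(mu, sigma^2));
    one noise variable per neuron/channel gives structured sparsity."""
    theta = torch.exp(mu + log_sigma.exp() * torch.randn_like(mu))
    return h * theta

def keep_mask(log_sigma, threshold=1.0):
    """Keep units whose multiplicative-noise SNR exceeds a threshold."""
    sigma2 = (2.0 * log_sigma).exp()
    snr = 1.0 / torch.sqrt(torch.expm1(sigma2))
    return snr > threshold
```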
