no code implementations • 19 Mar 2024 • Nikita Kornilov, Alexander Gasnikov, Alexander Korotin
Over the past several years, there has been a boom in the development of flow matching methods for generative modeling.
no code implementations • 6 Feb 2024 • Alexander Kolesov, Petr Mokrov, Igor Udovichenko, Milena Gazdieva, Gudmund Pammer, Evgeny Burnaev, Alexander Korotin
Given a collection of probability measures, a practitioner sometimes needs to find an "average" distribution that adequately aggregates the reference distributions.
1 code implementation • 5 Feb 2024 • Nikita Gushchin, Sergei Kholkin, Evgeny Burnaev, Alexander Korotin
It exploits the optimal parameterization of the diffusion process and provably recovers the SB process (a) with a single bridge matching step and (b) with an arbitrary transport plan as input.
1 code implementation • 2 Oct 2023 • Alexander Korotin, Nikita Gushchin, Evgeny Burnaev
Despite the recent advances in the field of computational Schrödinger Bridges (SB), most existing SB solvers are still heavyweight and require complex optimization of several neural networks.
no code implementations • 2 Oct 2023 • Alexander Kolesov, Petr Mokrov, Igor Udovichenko, Milena Gazdieva, Gudmund Pammer, Anastasis Kratsios, Evgeny Burnaev, Alexander Korotin
Optimal transport (OT) barycenters are a mathematically grounded way of averaging probability distributions while capturing their geometric properties.
1 code implementation • NeurIPS 2023 • Nikita Gushchin, Alexander Kolesov, Petr Mokrov, Polina Karpikova, Andrey Spiridonov, Evgeny Burnaev, Alexander Korotin
We fill this gap and propose a novel way to create pairs of probability distributions for which the ground-truth OT solution is known by construction.
1 code implementation • 12 Apr 2023 • Petr Mokrov, Alexander Korotin, Alexander Kolesov, Nikita Gushchin, Evgeny Burnaev
Energy-based models (EBMs) have been known in the machine learning community for decades.
no code implementations • 14 Mar 2023 • Milena Gazdieva, Arip Asadulaev, Alexander Korotin, Evgeny Burnaev
We address this challenge and propose a novel theoretically-justified and lightweight unbalanced EOT solver.
no code implementations • 10 Mar 2023 • Maksim Nekrashevich, Alexander Korotin, Evgeny Burnaev
To demonstrate the effectiveness of our proposed method, we conduct experiments on synthetic data and explore the practical applicability of our method to the popular task of unsupervised alignment of word embeddings.
1 code implementation • NeurIPS 2023 • Nikita Gushchin, Alexander Kolesov, Alexander Korotin, Dmitry Vetrov, Evgeny Burnaev
We propose a novel neural algorithm for the fundamental problem of computing the entropic optimal transport (EOT) plan between continuous probability distributions which are accessible by samples.
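The paper's solver is neural, but the discrete, sample-based version of entropic OT that it generalizes can be sketched with the classical Sinkhorn iterations. A minimal NumPy sketch, where the Gaussian point clouds, sample sizes, and regularization strength `eps` are illustrative choices and not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 2))            # samples from the source distribution
y = rng.normal(loc=1.0, size=(200, 2))   # samples from the target distribution

# Quadratic cost matrix C[i, j] = ||x_i - y_j||^2
C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)

eps = 1.0                                # entropic regularization strength
K = np.exp(-C / eps)                     # Gibbs kernel
a = np.full(len(x), 1.0 / len(x))        # uniform source marginal
b = np.full(len(y), 1.0 / len(y))        # uniform target marginal

u = np.ones_like(a)
for _ in range(2000):                    # Sinkhorn fixed-point iterations
    v = b / (K.T @ u)
    u = a / (K @ v)

# Entropic OT plan: a coupling whose marginals approach a and b
plan = u[:, None] * K * v[None, :]
cost = (plan * C).sum()                  # plug-in estimate of the transport cost
```

The plan's row marginals match `a` exactly after the final `u` update, while the column marginals converge to `b` as the iterations proceed.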
2 code implementations • 15 Jun 2022 • Alexander Korotin, Alexander Kolesov, Evgeny Burnaev
Despite the success of WGANs, it is still unclear how well the underlying OT dual solvers approximate the OT cost (Wasserstein-1 distance, $\mathbb{W}_{1}$) and the OT gradient needed to update the generator.
no code implementations • 30 May 2022 • Arip Asadulaev, Vitaly Shutov, Alexander Korotin, Alexander Panfilov, Andrey Filchenkov
In domain adaptation, the goal is to adapt a classifier trained on the source domain samples to the target domain.
no code implementations • 30 May 2022 • Arip Asadulaev, Alexander Korotin, Vage Egiazarian, Petr Mokrov, Evgeny Burnaev
We introduce a novel neural network-based algorithm to compute optimal transport (OT) plans for general cost functionals.
2 code implementations • 30 May 2022 • Alexander Korotin, Daniil Selikhanovych, Evgeny Burnaev
We study the Neural Optimal Transport (NOT) algorithm which uses the general optimal transport formulation and learns stochastic transport plans.
no code implementations • 2 Feb 2022 • Milena Gazdieva, Litu Rout, Alexander Korotin, Andrey Kravchenko, Alexander Filippov, Evgeny Burnaev
First, the learned SR map is always an optimal transport (OT) map.
3 code implementations • 28 Jan 2022 • Alexander Korotin, Daniil Selikhanovych, Evgeny Burnaev
We present a novel neural-networks-based algorithm to compute optimal transport maps and plans for strong and weak transport costs.
1 code implementation • 28 Jan 2022 • Alexander Korotin, Vage Egiazarian, Lingxiao Li, Evgeny Burnaev
Wasserstein barycenters have become popular due to their ability to represent the average of probability measures in a geometrically meaningful way.
2 code implementations • ICLR 2022 • Litu Rout, Alexander Korotin, Evgeny Burnaev
In particular, we consider denoising, colorization, and inpainting, where the optimality of the restoration map is a desired attribute, since the output (restored) image is expected to be close to the input (degraded) one.
no code implementations • 29 Sep 2021 • Arip Asadulaev, Vitaly Shutov, Alexander Korotin, Alexander Panfilov, Andrey Filchenkov
In our algorithm, instead of mapping from the target to the source domain, optimal transport maps target samples to the set of adversarial examples.
2 code implementations • NeurIPS 2021 • Serguei Barannikov, Ilya Trofimov, Grigorii Sotnikov, Ekaterina Trimbach, Alexander Korotin, Alexander Filippov, Evgeny Burnaev
We develop a framework for comparing data manifolds, aimed, in particular, towards the evaluation of deep generative models.
6 code implementations • NeurIPS 2021 • Alexander Korotin, Lingxiao Li, Aude Genevay, Justin Solomon, Alexander Filippov, Evgeny Burnaev
Despite the recent popularity of neural network-based solvers for optimal transport (OT), there is no standard quantitative way to evaluate their performance.
3 code implementations • NeurIPS 2021 • Petr Mokrov, Alexander Korotin, Lingxiao Li, Aude Genevay, Justin Solomon, Evgeny Burnaev
Specifically, Fokker-Planck equations, which model the diffusion of probability measures, can be understood as gradient descent over entropy functionals in Wasserstein space.
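The connection stated here is usually made precise via the Jordan–Kinderlehrer–Otto (JKO) scheme; the following is the standard textbook formulation, not notation taken from the paper itself:

```latex
% Fokker--Planck equation
%   \partial_t \rho = \nabla \cdot (\rho \, \nabla V) + \Delta \rho
% as the Wasserstein-2 gradient flow of the free energy
%   F(\rho) = \int V(x)\,\rho(x)\,dx + \int \rho(x)\log\rho(x)\,dx.
% JKO time discretization with step \tau:
\rho_{k+1} = \operatorname*{arg\,min}_{\rho}\;
  \frac{1}{2\tau}\,\mathbb{W}_2^2(\rho, \rho_k) + F(\rho)
```

Each JKO step is a proximal step in the Wasserstein-2 metric: the first term keeps $\rho_{k+1}$ close to $\rho_k$, while the free energy $F$ (potential plus entropy) drives the descent.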
1 code implementation • NeurIPS 2021 • Serguei Barannikov, Ilya Trofimov, Grigorii Sotnikov, Ekaterina Trimbach, Alexander Korotin, Alexander Filippov, Evgeny Burnaev
We propose a framework for comparing data manifolds, aimed, in particular, towards the evaluation of deep generative models.
2 code implementations • ICLR 2021 • Alexander Korotin, Lingxiao Li, Justin Solomon, Evgeny Burnaev
Wasserstein barycenters provide a geometric notion of the weighted average of probability measures based on optimal transport.
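In one dimension this notion becomes fully explicit: the Wasserstein-2 barycenter is obtained by averaging the quantile functions of the input measures, a classical fact independent of this paper's method. A minimal sketch with illustrative inputs:

```python
import numpy as np

def w2_barycenter_1d(samples_list, weights, grid_size=100):
    """Wasserstein-2 barycenter of 1-D empirical measures via quantile averaging.

    Returns the barycenter's quantile function evaluated on a uniform grid.
    Illustrative sketch of the 1-D special case, not the paper's algorithm.
    """
    qs = np.linspace(0, 1, grid_size)
    # Quantile function of each empirical measure on a common grid
    quantiles = np.stack([np.quantile(s, qs) for s in samples_list])
    # Weighted average of quantile functions = quantiles of the W2 barycenter
    return np.average(quantiles, axis=0, weights=weights)
```

For example, the equal-weight barycenter of a sample and the same sample shifted by 4 is the original sample shifted by 2, as expected from the translation behavior of optimal transport.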
no code implementations • 31 Dec 2020 • Serguei Barannikov, Daria Voronkova, Ilya Trofimov, Alexander Korotin, Grigorii Sotnikov, Evgeny Burnaev
We define the neural network Topological Obstructions score, "TO-score", with the help of robust topological invariants, barcodes of the loss function, that quantify the "badness" of local minima for gradient-based optimization.
no code implementations • 15 Dec 2019 • Alexander Korotin, Vladimir V'yugin, Evgeny Burnaev
In this paper we extend the setting of the online prediction with expert advice to function-valued forecasts.
no code implementations • 29 Nov 2019 • Serguei Barannikov, Alexander Korotin, Dmitry Oganesyan, Daniil Emtsev, Evgeny Burnaev
We apply the canonical forms (barcodes) of gradient Morse complexes to explore the topology of loss surfaces.
4 code implementations • ICLR 2021 • Alexander Korotin, Vage Egiazarian, Arip Asadulaev, Alexander Safin, Evgeny Burnaev
We propose a novel end-to-end non-minimax algorithm for training optimal transport mappings for the quadratic cost (Wasserstein-2 distance).
no code implementations • 25 Sep 2019 • Serguei Barannikov, Alexander Korotin, Dmitry Oganesyan, Daniil Emtsev, Evgeny Burnaev
We apply canonical forms of gradient complexes (barcodes) to explore neural network loss surfaces.
no code implementations • 27 Feb 2019 • Alexander Korotin, Vladimir V'yugin, Evgeny Burnaev
The article is devoted to investigating the application of hedging strategies to online expert weight allocation under delayed feedback.
no code implementations • 18 Mar 2018 • Alexander Korotin, Vladimir V'yugin, Evgeny Burnaev
The first one is theoretically close to an optimal algorithm and is based on replication of independent copies.
no code implementations • 8 Nov 2017 • Alexander Korotin, Vladimir V'yugin, Evgeny Burnaev
In the first one, at each step $t$ the learner has to combine the point forecasts of the experts issued for the time interval $[t+1, t+d]$ ahead.
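The basic mechanics of this setting can be sketched with an exponentially weighted forecaster whose losses arrive with delay $d$. This is a minimal illustration under assumed choices (squared loss, fixed learning rate `eta`), not the paper's algorithm:

```python
import numpy as np

def delayed_hedge(expert_preds, outcomes, d, eta=0.5):
    """Exponentially weighted forecaster with d-step delayed feedback.

    expert_preds: (T, N) array of point forecasts from N experts at each step.
    outcomes:     (T,) realized values; the value for step t is revealed at t + d.
    Returns the learner's combined forecasts, shape (T,).
    """
    T, N = expert_preds.shape
    log_w = np.zeros(N)                        # log-weights for numerical stability
    forecasts = np.empty(T)
    for t in range(T):
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        forecasts[t] = w @ expert_preds[t]     # weighted combination of experts
        if t >= d:                             # loss from step t - d is revealed now
            loss = (expert_preds[t - d] - outcomes[t - d]) ** 2
            log_w -= eta * loss                # multiplicative-weights update
    return forecasts
```

Before any feedback arrives the learner averages all experts uniformly; once delayed losses start flowing in, the weight mass concentrates on the better experts.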
1 code implementation • 6 Jun 2017 • Smolyakov Dmitry, Alexander Korotin, Pavel Erofeev, Artem Papanov, Evgeny Burnaev
One possible approach to tackle the class imbalance in classification tasks is to resample a training dataset, i.e., to drop some of its elements or to synthesize new ones.
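The simplest instance of the resampling idea is random oversampling of the minority class. A minimal sketch for illustration; the paper studies resampling-based ensembles, not this exact routine:

```python
import numpy as np

def random_oversample(X, y, seed=0):
    """Balance a dataset by duplicating minority-class rows at random.

    X: (n, k) feature matrix; y: (n,) class labels.
    Returns a resampled (X, y) in which every class has as many rows
    as the largest class.
    """
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    Xs, ys = [], []
    for c, n in zip(classes, counts):
        idx = np.flatnonzero(y == c)
        # Draw (n_max - n) duplicate indices with replacement
        extra = rng.choice(idx, size=n_max - n, replace=True)
        keep = np.concatenate([idx, extra])
        Xs.append(X[keep])
        ys.append(y[keep])
    return np.concatenate(Xs), np.concatenate(ys)
```

Undersampling (dropping majority-class rows) is the mirror-image variant and can be written the same way with `n_min = counts.min()` and sampling without replacement.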