no code implementations • 17 Oct 2024 • Arip Asadulaev, Rostislav Korst, Alexander Korotin, Vage Egiazarian, Andrey Filchenkov, Evgeny Burnaev
We propose a novel algorithm for offline reinforcement learning using optimal transport.
no code implementations • 4 Oct 2024 • Milena Gazdieva, Jaemoo Choi, Alexander Kolesov, Jaewoong Choi, Petr Mokrov, Alexander Korotin
To the best of our knowledge, this paper is the first attempt to develop an algorithm for robust barycenters under the continuous distribution setup.
no code implementations • 3 Oct 2024 • Mikhail Persiianov, Arip Asadulaev, Nikita Andreev, Nikita Starodubcev, Dmitry Baranchuk, Anastasis Kratsios, Evgeny Burnaev, Alexander Korotin
To tackle this issue, we propose a new learning paradigm that integrates both paired and unpaired data seamlessly through data likelihood maximization techniques.
no code implementations • 3 Oct 2024 • Sergei Kholkin, Grigoriy Ksenofontov, David Li, Nikita Kornilov, Nikita Gushchin, Evgeny Burnaev, Alexander Korotin
The Iterative Markovian Fitting (IMF) procedure, based on iterative reciprocal and Markovian projections, has recently been proposed as a powerful method for solving the Schrödinger Bridge problem.
1 code implementation • 23 May 2024 • Nikita Gushchin, Daniil Selikhanovych, Sergei Kholkin, Evgeny Burnaev, Alexander Korotin
A promising recent approach to solve the SB problem is the Iterative Markovian Fitting (IMF) procedure, which alternates between Markovian and reciprocal projections of continuous-time stochastic processes.
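Schematically, IMF alternates the two projections until it reaches the Schrödinger bridge (a summary of the procedure described above, with $\mathrm{proj}_{\mathcal{M}}$ and $\mathrm{proj}_{\mathcal{R}}$ denoting the Markovian and reciprocal projections):

$$T_{2n+1} = \mathrm{proj}_{\mathcal{M}}(T_{2n}), \qquad T_{2n+2} = \mathrm{proj}_{\mathcal{R}}(T_{2n+1}), \qquad n = 0, 1, 2, \ldots$$

The fixed point of this alternation is both Markovian and reciprocal, which is exactly what characterizes the SB solution.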
1 code implementation • 19 Mar 2024 • Nikita Kornilov, Petr Mokrov, Alexander Gasnikov, Alexander Korotin
Over the past several years, there has been a boom in the development of Flow Matching (FM) methods for generative modeling.
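As background, here is a minimal self-contained sketch of the basic conditional flow matching objective with linear interpolation paths; the toy data, network size, and hyperparameters are illustrative assumptions, not taken from the paper.

```python
# Minimal conditional flow matching sketch (illustrative, not the paper's method).
# A velocity field v(x, t) is regressed onto the straight-line target x1 - x0
# along interpolants x_t = (1 - t) * x0 + t * x1.
import torch
import torch.nn as nn

velocity = nn.Sequential(  # small MLP mapping (x, t) -> velocity for 2-D data
    nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 2)
)
opt = torch.optim.Adam(velocity.parameters(), lr=1e-3)

for step in range(1000):
    x0 = torch.randn(256, 2)                 # source: standard Gaussian
    x1 = torch.randn(256, 2) * 0.5 + 3.0     # target: shifted Gaussian (toy)
    t = torch.rand(256, 1)
    xt = (1 - t) * x0 + t * x1               # linear interpolant
    target = x1 - x0                         # constant velocity along the path
    pred = velocity(torch.cat([xt, t], dim=1))
    loss = ((pred - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```

The trained field then generates samples by integrating $dx/dt = v(x, t)$ from $t = 0$ to $t = 1$ starting at source samples.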
1 code implementation • 6 Feb 2024 • Alexander Kolesov, Petr Mokrov, Igor Udovichenko, Milena Gazdieva, Gudmund Pammer, Evgeny Burnaev, Alexander Korotin
A theoretically appealing notion of such an average is the Wasserstein barycenter, which is the primary focus of our work.
1 code implementation • 5 Feb 2024 • Nikita Gushchin, Sergei Kholkin, Evgeny Burnaev, Alexander Korotin
It exploits the optimal parameterization of the diffusion process and provably recovers the SB process (a) with a single bridge matching step and (b) with an arbitrary transport plan as the input.
1 code implementation • 2 Oct 2023 • Alexander Korotin, Nikita Gushchin, Evgeny Burnaev
Despite recent advances in the field of computational Schrödinger Bridges (SB), most existing SB solvers are still heavyweight and require complex optimization of several neural networks.
no code implementations • 2 Oct 2023 • Alexander Kolesov, Petr Mokrov, Igor Udovichenko, Milena Gazdieva, Gudmund Pammer, Anastasis Kratsios, Evgeny Burnaev, Alexander Korotin
Optimal transport (OT) barycenters are a mathematically grounded way of averaging probability distributions while capturing their geometric properties.
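Concretely, given distributions $\mathbb{P}_1, \ldots, \mathbb{P}_K$ and weights $\lambda_k \geq 0$ with $\sum_k \lambda_k = 1$, the OT barycenter is the minimizer

$$\mathbb{Q}^{*} \in \arg\min_{\mathbb{Q}} \sum_{k=1}^{K} \lambda_k \, \mathrm{OT}_{c}(\mathbb{P}_k, \mathbb{Q}),$$

where $\mathrm{OT}_{c}$ denotes the optimal transport cost between a pair of distributions.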
1 code implementation • NeurIPS 2023 • Nikita Gushchin, Alexander Kolesov, Petr Mokrov, Polina Karpikova, Andrey Spiridonov, Evgeny Burnaev, Alexander Korotin
We fill this gap and propose a novel way to create pairs of probability distributions for which the ground-truth OT solution is known by construction.
1 code implementation • 12 Apr 2023 • Petr Mokrov, Alexander Korotin, Alexander Kolesov, Nikita Gushchin, Evgeny Burnaev
Energy-based models (EBMs) have been known in the Machine Learning community for decades.
no code implementations • 14 Mar 2023 • Milena Gazdieva, Arip Asadulaev, Alexander Korotin, Evgeny Burnaev
We show that, combined with a light parametrization recently proposed in the field, our objective leads to a fast, simple, and effective solver that solves the continuous UEOT problem in minutes on CPU.
1 code implementation • 10 Mar 2023 • Xavier Aramayo Carrasco, Maksim Nekrashevich, Petr Mokrov, Evgeny Burnaev, Alexander Korotin
In the discrete variant of GWOT, the task is to learn an assignment between given discrete sets of points.
1 code implementation • NeurIPS 2023 • Nikita Gushchin, Alexander Kolesov, Alexander Korotin, Dmitry Vetrov, Evgeny Burnaev
We propose a novel neural algorithm for the fundamental problem of computing the entropic optimal transport (EOT) plan between continuous probability distributions which are accessible by samples.
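As background for the entropic setting, the sketch below computes a discrete EOT plan between two empirical samples with from-scratch Sinkhorn iterations; it illustrates the EOT objective on finite samples, whereas the cited paper learns the plan between continuous distributions with a neural solver. The toy data and regularization strength are assumptions for illustration.

```python
# From-scratch Sinkhorn iterations for discrete entropic OT between samples.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 2))            # samples from the source
y = rng.normal(size=(200, 2)) + 1.0      # samples from the target
cost = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # squared-distance cost

eps = 0.5                                # entropic regularization strength
K = np.exp(-cost / eps)                  # Gibbs kernel
a = np.full(len(x), 1 / len(x))          # uniform source weights
b = np.full(len(y), 1 / len(y))          # uniform target weights

u, v = np.ones_like(a), np.ones_like(b)
for _ in range(500):                     # Sinkhorn fixed-point iterations
    u = a / (K @ v)
    v = b / (K.T @ u)

plan = u[:, None] * K * v[None, :]       # entropic OT plan (marginals ~ a, b)
print(plan.sum(), (plan * cost).sum())   # total mass ~1 and transport cost
```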
2 code implementations • 15 Jun 2022 • Alexander Korotin, Alexander Kolesov, Evgeny Burnaev
Despite the success of WGANs, it is still unclear how well the underlying OT dual solvers approximate the OT cost (Wasserstein-1 distance, $\mathbb{W}_{1}$) and the OT gradient needed to update the generator.
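For reference, the quantity these dual solvers approximate is the Kantorovich-Rubinstein dual form of the Wasserstein-1 distance, whose optimal potential $f$ the critic network estimates:

$$\mathbb{W}_1(\mu, \nu) = \sup_{\|f\|_{\mathrm{Lip}} \leq 1} \left[ \int f \, d\mu - \int f \, d\nu \right].$$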
2 code implementations • 30 May 2022 • Alexander Korotin, Daniil Selikhanovych, Evgeny Burnaev
We study the Neural Optimal Transport (NOT) algorithm which uses the general optimal transport formulation and learns stochastic transport plans.
1 code implementation • 30 May 2022 • Arip Asadulaev, Alexander Korotin, Vage Egiazarian, Petr Mokrov, Evgeny Burnaev
We introduce a novel neural network-based algorithm to compute optimal transport (OT) plans for general cost functionals.
no code implementations • 30 May 2022 • Arip Asadulaev, Vitaly Shutov, Alexander Korotin, Alexander Panfilov, Andrey Filchenkov
In domain adaptation, the goal is to adapt a classifier trained on the source domain samples to the target domain.
no code implementations • 2 Feb 2022 • Milena Gazdieva, Litu Rout, Alexander Korotin, Andrey Kravchenko, Alexander Filippov, Evgeny Burnaev
First, the learned SR map is always an optimal transport (OT) map.
3 code implementations • 28 Jan 2022 • Alexander Korotin, Daniil Selikhanovych, Evgeny Burnaev
We present a novel neural-networks-based algorithm to compute optimal transport maps and plans for strong and weak transport costs.
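For strong costs, the saddle-point objective behind such solvers can be sketched as the standard maximin reformulation of the OT problem (a schematic form; the weak-cost case replaces the deterministic map $T(x)$ with stochastic maps $T(x, z)$, see the paper for the precise statement):

$$\sup_{f} \inf_{T} \left[ \int \big( c(x, T(x)) - f(T(x)) \big) \, d\mu(x) + \int f(y) \, d\nu(y) \right].$$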
1 code implementation • 28 Jan 2022 • Alexander Korotin, Vage Egiazarian, Lingxiao Li, Evgeny Burnaev
Wasserstein barycenters have become popular due to their ability to represent the average of probability measures in a geometrically meaningful way.
2 code implementations • ICLR 2022 • Litu Rout, Alexander Korotin, Evgeny Burnaev
In particular, we consider denoising, colorization, and inpainting, where the optimality of the restoration map is a desired attribute, since the output (restored) image is expected to be close to the input (degraded) one.
no code implementations • 29 Sep 2021 • Arip Asadulaev, Vitaly Shutov, Alexander Korotin, Alexander Panfilov, Andrey Filchenkov
In our algorithm, instead of mapping from the target to the source domain, optimal transport maps target samples to the set of adversarial examples.
2 code implementations • NeurIPS 2021 • Serguei Barannikov, Ilya Trofimov, Grigorii Sotnikov, Ekaterina Trimbach, Alexander Korotin, Alexander Filippov, Evgeny Burnaev
We develop a framework for comparing data manifolds, aimed, in particular, towards the evaluation of deep generative models.
6 code implementations • NeurIPS 2021 • Alexander Korotin, Lingxiao Li, Aude Genevay, Justin Solomon, Alexander Filippov, Evgeny Burnaev
Despite the recent popularity of neural network-based solvers for optimal transport (OT), there is no standard quantitative way to evaluate their performance.
3 code implementations • NeurIPS 2021 • Petr Mokrov, Alexander Korotin, Lingxiao Li, Aude Genevay, Justin Solomon, Evgeny Burnaev
Specifically, Fokker-Planck equations, which model the diffusion of probability measures, can be understood as gradient descent over entropy functionals in Wasserstein space.
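The discrete-time counterpart of this gradient-descent view is the JKO scheme: each step descends a functional $\mathcal{F}$ (e.g., the entropy functional above) while staying Wasserstein-2 close to the previous iterate, with step size $\tau > 0$:

$$p_{k+1} = \arg\min_{p} \left[ \mathcal{F}(p) + \frac{1}{2\tau} \, \mathbb{W}_2^2(p, p_k) \right].$$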
2 code implementations • ICLR 2021 • Alexander Korotin, Lingxiao Li, Justin Solomon, Evgeny Burnaev
Wasserstein barycenters provide a geometric notion of the weighted average of probability measures based on optimal transport.
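In one dimension this average has a closed form: the barycenter's quantile function is the weighted average of the inputs' quantile functions. Below is a minimal numpy sketch on empirical samples (an illustrative toy, not the paper's continuous solver; the Gaussians and weights are assumptions).

```python
# 1-D Wasserstein-2 barycenter via quantile averaging (toy illustration).
import numpy as np

rng = np.random.default_rng(0)
samples_a = rng.normal(loc=-2.0, scale=1.0, size=1000)
samples_b = rng.normal(loc=+3.0, scale=0.5, size=1000)
weights = (0.5, 0.5)

qs = np.linspace(0.0, 1.0, 501)                  # quantile grid
quantiles_a = np.quantile(samples_a, qs)         # empirical inverse CDFs
quantiles_b = np.quantile(samples_b, qs)
barycenter_q = weights[0] * quantiles_a + weights[1] * quantiles_b

# barycenter_q tabulates the barycenter's quantile function; sampling from
# the barycenter is inverse-CDF sampling on uniform draws.
u = rng.uniform(size=5)
print(np.interp(u, qs, barycenter_q))
```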
no code implementations • 31 Dec 2020 • Serguei Barannikov, Daria Voronkova, Ilya Trofimov, Alexander Korotin, Grigorii Sotnikov, Evgeny Burnaev
We define the neural network Topological Obstructions score, "TO-score", with the help of robust topological invariants, barcodes of the loss function, that quantify the "badness" of local minima for gradient-based optimization.
no code implementations • 15 Dec 2019 • Alexander Korotin, Vladimir V'yugin, Evgeny Burnaev
In this paper we extend the setting of the online prediction with expert advice to function-valued forecasts.
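For context, the classical scalar-valued version of this setting is the exponentially weighted average forecaster; a minimal sketch follows (the losses and forecasts are stand-ins; the paper extends the setting to function-valued forecasts).

```python
# Exponentially weighted average forecaster (Hedge-style aggregation).
# The learner mixes expert predictions with weights that decay
# exponentially in each expert's cumulative loss.
import numpy as np

rng = np.random.default_rng(0)
T, n_experts = 200, 5
eta = 0.5                                       # learning rate
weights = np.ones(n_experts) / n_experts

for t in range(T):
    expert_preds = rng.normal(size=n_experts)   # stand-in expert forecasts
    outcome = 0.3                               # stand-in true outcome
    learner_pred = weights @ expert_preds       # aggregated forecast at round t
    losses = (expert_preds - outcome) ** 2      # per-expert squared loss
    weights *= np.exp(-eta * losses)            # exponential weight update
    weights /= weights.sum()

print(weights)                                  # mass concentrates on good experts
```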
no code implementations • 29 Nov 2019 • Serguei Barannikov, Alexander Korotin, Dmitry Oganesyan, Daniil Emtsev, Evgeny Burnaev
We apply the canonical forms (barcodes) of gradient Morse complexes to explore the topology of loss surfaces.
4 code implementations • ICLR 2021 • Alexander Korotin, Vage Egiazarian, Arip Asadulaev, Alexander Safin, Evgeny Burnaev
We propose a novel end-to-end non-minimax algorithm for training optimal transport mappings for the quadratic cost (Wasserstein-2 distance).
no code implementations • 25 Sep 2019 • Serguei Barannikov, Alexander Korotin, Dmitry Oganesyan, Daniil Emtsev, Evgeny Burnaev
We apply canonical forms of gradient complexes (barcodes) to explore neural network loss surfaces.
no code implementations • 27 Feb 2019 • Alexander Korotin, Vladimir V'yugin, Evgeny Burnaev
This article investigates the application of hedging strategies to online expert weight allocation under delayed feedback.
no code implementations • 18 Mar 2018 • Alexander Korotin, Vladimir V'yugin, Evgeny Burnaev
The first one is theoretically close to an optimal algorithm and is based on the replication of independent copies.
no code implementations • 8 Nov 2017 • Alexander Korotin, Vladimir V'yugin, Evgeny Burnaev
In the first one, at each step $t$ the learner has to combine the point forecasts of the experts issued for the time interval $[t+1, t+d]$ ahead.
1 code implementation • 6 Jun 2017 • Dmitry Smolyakov, Alexander Korotin, Pavel Erofeev, Artem Papanov, Evgeny Burnaev
One possible approach to tackle class imbalance in classification tasks is to resample the training dataset, i.e., to drop some of its elements or to synthesize new ones.
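The sketch below shows one simple instance of the resampling approach described above, random oversampling of the minority class (an illustrative toy; the helper name and data are assumptions, not from the paper).

```python
# Random oversampling of the minority class (one simple resampling scheme).
# Minority examples are duplicated (sampled with replacement) until the
# class counts match; random undersampling would instead drop majority rows.
import numpy as np

def oversample_minority(X, y, rng):
    """Return a rebalanced copy of (X, y) with equal class counts."""
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    X_parts, y_parts = [], []
    for cls, count in zip(classes, counts):
        idx = np.flatnonzero(y == cls)
        extra = rng.choice(idx, size=target - count, replace=True)
        keep = np.concatenate([idx, extra])
        X_parts.append(X[keep]); y_parts.append(y[keep])
    return np.concatenate(X_parts), np.concatenate(y_parts)

rng = np.random.default_rng(0)
X = rng.normal(size=(110, 3))
y = np.array([0] * 100 + [1] * 10)              # 10:1 imbalance
X_bal, y_bal = oversample_minority(X, y, rng)
print(np.bincount(y_bal))                       # [100 100]
```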