1 code implementation • ICML 2020 • Sofien Dhouib, Ievgen Redko, Tanguy Kerdoncuff, Rémi Emonet, Marc Sebban
The optimal transport (OT) problem and its associated Wasserstein distance have recently become a topic of great interest in the machine learning community.
1 code implementation • ICML 2020 • Sofien Dhouib, Ievgen Redko, Carole Lartizien
In this paper, we propose a new theoretical analysis of unsupervised domain adaptation that relates notions of large margin separation, adversarial learning and optimal transport.
1 code implementation • 15 Feb 2024 • Romain Ilbert, Ambroise Odonnat, Vasilii Feofanov, Aladin Virmaux, Giuseppe Paolo, Themis Palpanas, Ievgen Redko
Transformer-based architectures achieved breakthrough performance in natural language processing and computer vision, yet they remain inferior to simpler linear baselines in multivariate long-term forecasting.
no code implementations • 17 Jan 2024 • Renchunzi Xie, Ambroise Odonnat, Vasilii Feofanov, Ievgen Redko, Jianfeng Zhang, Bo An
Our key idea is that the model should be adjusted with a higher magnitude of gradients when it does not generalize to the test dataset with a distribution shift.
1 code implementation • 23 Oct 2023 • Ambroise Odonnat, Vasilii Feofanov, Ievgen Redko
Self-training is a well-known approach for semi-supervised learning.
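The self-training idea can be sketched as a pseudo-labeling loop: train on labeled data, predict on unlabeled data, and absorb only the confident predictions. The sketch below is a generic illustration, not the paper's method; the nearest-centroid classifier, data, and 0.9 threshold are placeholder assumptions.

```python
import numpy as np

def fit_centroids(X, y):
    """Toy nearest-centroid 'classifier': one centroid per class."""
    return np.stack([X[y == c].mean(axis=0) for c in np.unique(y)])

def predict_proba(centroids, X):
    """Softmax over negative distances to the class centroids."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=-1)
    p = np.exp(-d)
    return p / p.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
X_lab = np.array([[0.0, 0.0], [4.0, 4.0]])   # two labeled points
y_lab = np.array([0, 1])
# unlabeled points drawn around the two class centers
X_unl = rng.normal(size=(20, 2)) + rng.integers(0, 2, 20)[:, None] * 4.0

for _ in range(3):                            # a few self-training rounds
    centroids = fit_centroids(X_lab, y_lab)
    proba = predict_proba(centroids, X_unl)
    conf = proba.max(axis=1)
    keep = conf > 0.9                         # pseudo-label confident points only
    if not keep.any():
        break
    X_lab = np.vstack([X_lab, X_unl[keep]])
    y_lab = np.concatenate([y_lab, proba[keep].argmax(axis=1)])
    X_unl = X_unl[~keep]
```

The confidence threshold is the crucial knob here: set too low, wrong pseudo-labels contaminate the training set; set too high, no unlabeled data is ever used.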
no code implementations • 17 Oct 2023 • Quentin Bouniot, Ievgen Redko, Anton Mallasto, Charlotte Laclau, Karol Arndt, Oliver Struckmeier, Markus Heinonen, Ville Kyrki, Samuel Kaski
The remarkable success of deep neural networks (DNN) is often attributed to their high expressive power and their ability to approximate functions of arbitrary complexity.
1 code implementation • 19 Jul 2023 • Pinar Demetci, Quang Huy Tran, Ievgen Redko, Ritambhara Singh
Gromov-Wasserstein distance has found many applications in machine learning due to its ability to compare measures across metric spaces and its invariance to isometric transformations.
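The invariance to isometric transformations mentioned above follows from the fact that Gromov-Wasserstein compares intra-space distance matrices rather than the points themselves. A minimal numpy illustration (the point cloud and rotation are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))                   # point cloud in R^2

theta = 0.7                                   # rotation angle (an isometry)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
Y = X @ R.T                                   # rotated copy of X

def pdist(Z):
    """Pairwise Euclidean distance matrix."""
    diff = Z[:, None, :] - Z[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

# Rotation preserves all pairwise distances, so the two distance
# matrices coincide and the GW distance between X and Y is zero.
print(np.allclose(pdist(X), pdist(Y)))        # True
```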
1 code implementation • 12 May 2023 • Oliver Struckmeier, Ievgen Redko, Anton Mallasto, Karol Arndt, Markus Heinonen, Ville Kyrki
Optimal transport (OT) is a powerful geometric tool used to compare and align probability measures following the least effort principle.
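In its entropy-regularized discrete form, the "least effort" alignment between two measures can be computed with Sinkhorn iterations. Below is a minimal self-contained sketch, not the implementation from the paper; the weights, cost matrix, and regularization strength are illustrative.

```python
import numpy as np

def sinkhorn(a, b, C, reg=0.1, n_iter=200):
    """Entropy-regularized OT plan between discrete measures a and b
    with ground cost C, via alternating Sinkhorn scalings."""
    K = np.exp(-C / reg)                  # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)                 # scale columns to match b
        u = a / (K @ v)                   # scale rows to match a
    return u[:, None] * K * v[None, :]    # transport plan diag(u) K diag(v)

a = np.array([0.5, 0.5])                  # source weights
b = np.array([0.25, 0.75])                # target weights
C = np.array([[0.0, 1.0],
              [1.0, 0.0]])                # ground cost between atoms
P = sinkhorn(a, b, C)
# The plan's marginals recover a and b, and its entries say how much
# mass moves between each pair of atoms.
print(P.sum(axis=1), P.sum(axis=0))       # ≈ a, ≈ b
```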
1 code implementation • 10 Jun 2022 • Brandon Amos, Samuel Cohen, Giulia Luise, Ievgen Redko
We study the use of amortized optimization to predict optimal transport (OT) maps from the input measures, which we call Meta OT.
no code implementations • 30 May 2022 • Quang Huy Tran, Hicham Janati, Nicolas Courty, Rémi Flamary, Ievgen Redko, Pinar Demetci, Ritambhara Singh
With this result in hand, we provide empirical evidence of this robustness for the challenging tasks of heterogeneous domain adaptation with and without varying proportions of classes and simultaneous alignment of samples and features across single-cell measurements.
no code implementations • 1 Oct 2021 • Quang Huy Tran, Hicham Janati, Ievgen Redko, Rémi Flamary, Nicolas Courty
Optimal transport (OT) theory underlies many emerging machine learning (ML) methods that now solve a wide range of tasks such as generative modeling, transfer learning and information retrieval.
no code implementations • 30 Oct 2020 • Charlotte Laclau, Ievgen Redko, Manvi Choudhary, Christine Largeron
Machine learning and data mining algorithms have been increasingly used to support decision-making systems in many areas of high societal importance such as healthcare, education, or security.
no code implementations • 21 Oct 2020 • Nina Vesseron, Ievgen Redko, Charlotte Laclau
The theoretical analysis of deep neural networks (DNN) is arguably among the most challenging research directions in machine learning (ML) right now, as it requires scientists to lay novel statistical learning foundations to explain their behaviour in practice.
1 code implementation • 5 Oct 2020 • Quentin Bouniot, Ievgen Redko, Romaric Audigier, Angélique Loesch, Amaury Habrard
In this paper, we consider the framework of multi-task representation (MTR) learning where the goal is to use source tasks to learn a representation that reduces the sample complexity of solving a target task.
no code implementations • 28 Sep 2020 • Quentin Bouniot, Ievgen Redko, Romaric Audigier, Angélique Loesch, Amaury Habrard
To the best of our knowledge, this is the first contribution that puts the most recent learning bounds of meta-learning theory into practice for the popular task of few-shot classification.
no code implementations • 1 Sep 2020 • Charlotte Laclau, Franck Iutzeler, Ievgen Redko
In this paper, we introduce and formalize a rank-one partitioning learning paradigm that unifies partitioning methods which summarize a data set with a single vector, then use that vector to derive the final clustering partition.
no code implementations • 24 Apr 2020 • Ievgen Redko, Emilie Morvant, Amaury Habrard, Marc Sebban, Younès Bennani
Despite the large number of different transfer learning scenarios, the main objective of this survey is to provide an overview of the state-of-the-art theoretical results in a specific, and arguably the most popular, sub-field of transfer learning, called domain adaptation.
1 code implementation • NeurIPS 2020 • Ievgen Redko, Titouan Vayer, Rémi Flamary, Nicolas Courty
Optimal transport (OT) is a powerful geometric and probabilistic tool for finding correspondences and measuring similarity between two distributions.
no code implementations • NeurIPS 2018 • Sofiane Dhouib, Ievgen Redko
Similarity learning is an active research area in machine learning that tackles the problem of finding a similarity function tailored to an observable data sample in order to achieve efficient classification.
no code implementations • 28 Jun 2018 • Léo Gautheron, Ievgen Redko, Carole Lartizien
In this paper, we propose a new feature selection method for unsupervised domain adaptation based on the emerging optimal transportation theory.
1 code implementation • 11 May 2018 • Georgios Balikas, Charlotte Laclau, Ievgen Redko, Massih-Reza Amini
Many information retrieval algorithms rely on the notion of a good distance that allows one to efficiently compare objects of a different nature.
3 code implementations • 13 Mar 2018 • Ievgen Redko, Nicolas Courty, Rémi Flamary, Devis Tuia
In this paper, we propose to tackle the problem of reducing discrepancies between multiple domains, referred to as multi-source domain adaptation, and consider it under the target shift assumption: in all domains we aim to solve a classification problem with the same output classes, but with label proportions differing across them.
no code implementations • ICML 2017 • Charlotte Laclau, Ievgen Redko, Basarab Matei, Younès Bennani, Vincent Brault
The proposed method uses the entropy regularized optimal transport between empirical measures defined on data instances and features in order to obtain an estimated joint probability density function represented by the optimal coupling matrix.
no code implementations • 20 Oct 2016 • Ievgen Redko, Younès Bennani
The ability of a human being to extrapolate previously gained knowledge to other domains inspired a new family of methods in machine learning called transfer learning.
no code implementations • 14 Oct 2016 • Ievgen Redko, Amaury Habrard, Marc Sebban
Domain adaptation (DA) is an important and emerging field of machine learning that tackles the problem occurring when the distributions of training (source domain) and test (target domain) data are related but different.