Search Results for author: Ievgen Redko

Found 25 papers, 11 papers with code

A Swiss Army Knife for Minimax Optimal Transport

1 code implementation · ICML 2020 · Sofien Dhouib, Ievgen Redko, Tanguy Kerdoncuff, Rémi Emonet, Marc Sebban

The optimal transport (OT) problem and its associated Wasserstein distance have recently become topics of great interest in the machine learning community.
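For intuition about the distance the abstract mentions: in one dimension the Wasserstein distance has a closed form and can be estimated directly from samples. A minimal sketch with SciPy (the Gaussian toy example is ours, not from the paper):

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
source = rng.normal(loc=0.0, scale=1.0, size=1000)   # samples from N(0, 1)
target = rng.normal(loc=2.0, scale=1.0, size=1000)   # samples from N(2, 1)

# For two 1-D Gaussians with equal variance, W1 equals the difference
# of the means, so the empirical estimate should be close to 2.
w1 = wasserstein_distance(source, target)
print(round(w1, 2))
```

In higher dimensions no such closed form exists, which is why the OT literature (including the papers below) relies on linear programming or entropic regularization.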

Margin-aware Adversarial Domain Adaptation with Optimal Transport

1 code implementation · ICML 2020 · Sofien Dhouib, Ievgen Redko, Carole Lartizien

In this paper, we propose a new theoretical analysis of unsupervised domain adaptation that relates notions of large margin separation, adversarial learning and optimal transport.

Unsupervised Domain Adaptation

Unlocking the Potential of Transformers in Time Series Forecasting with Sharpness-Aware Minimization and Channel-Wise Attention

1 code implementation · 15 Feb 2024 · Romain Ilbert, Ambroise Odonnat, Vasilii Feofanov, Aladin Virmaux, Giuseppe Paolo, Themis Palpanas, Ievgen Redko

Transformer-based architectures have achieved breakthrough performance in natural language processing and computer vision, yet they remain inferior to simpler linear baselines in multivariate long-term forecasting.

Time Series · Time Series Forecasting

Leveraging Gradients for Unsupervised Accuracy Estimation under Distribution Shift

no code implementations · 17 Jan 2024 · Renchunzi Xie, Ambroise Odonnat, Vasilii Feofanov, Ievgen Redko, Jianfeng Zhang, Bo An

Our key idea is that a model requires gradient updates of larger magnitude when it fails to generalize to a test dataset affected by distribution shift.
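As an illustrative sketch of that idea (not the paper's actual estimator), one can compare the norm of a pseudo-label cross-entropy gradient on in-distribution versus shifted data for a toy logistic model; the gradient is larger where the model is uncertain:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Source training data: two well-separated Gaussian classes in 2-D.
X0 = rng.normal([-3.0, 0.0], 1.0, size=(500, 2))
X1 = rng.normal([+3.0, 0.0], 1.0, size=(500, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 500 + [1] * 500)

# Fit a logistic-regression classifier with plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = sigmoid(X @ w + b)
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * np.mean(p - y)

def pseudo_label_grad_norm(Xt):
    """Norm of the cross-entropy gradient computed with the model's own
    predicted labels -- large when the model is uncertain on Xt."""
    p = sigmoid(Xt @ w + b)
    residual = p - (p > 0.5)              # pseudo-labels as targets
    grad_w = Xt.T @ residual / len(Xt)
    grad_b = np.mean(residual)
    return np.sqrt(np.sum(grad_w ** 2) + grad_b ** 2)

X_iid = rng.normal([+3.0, 0.0], 1.0, size=(500, 2))    # same distribution as training
X_shift = X_iid - np.array([3.0, 0.0])                 # pushed onto the decision boundary
print(pseudo_label_grad_norm(X_iid), pseudo_label_grad_norm(X_shift))
```

On the shifted data the gradient norm is markedly larger, matching the intuition stated above.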

Understanding deep neural networks through the lens of their non-linearity

no code implementations · 17 Oct 2023 · Quentin Bouniot, Ievgen Redko, Anton Mallasto, Charlotte Laclau, Karol Arndt, Oliver Struckmeier, Markus Heinonen, Ville Kyrki, Samuel Kaski

The remarkable success of deep neural networks (DNN) is often attributed to their high expressive power and their ability to approximate functions of arbitrary complexity.

Revisiting invariances and introducing priors in Gromov-Wasserstein distances

1 code implementation · 19 Jul 2023 · Pinar Demetci, Quang Huy Tran, Ievgen Redko, Ritambhara Singh

Gromov-Wasserstein distance has found many applications in machine learning due to its ability to compare measures across metric spaces and its invariance to isometric transformations.

Transfer Learning
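The isometry invariance mentioned in the abstract above is easy to verify numerically: Gromov-Wasserstein only consumes intra-space distance matrices, which rotations and translations leave unchanged. A small NumPy check (our own toy example):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))

# An isometric transformation: a random rotation (orthogonal matrix)
# followed by a translation.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
Y = X @ Q.T + np.array([5.0, -2.0, 1.0])

def pairwise(Z):
    # Euclidean distance matrix D[i, j] = ||z_i - z_j||.
    diff = Z[:, None, :] - Z[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

# GW compares these matrices, which the isometry leaves unchanged.
print(np.allclose(pairwise(X), pairwise(Y)))   # prints True
```

This is exactly why the paper needs extra machinery to *remove* unwanted invariances or inject priors: the distance matrices alone cannot distinguish a point cloud from its rotated copy.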

Meta Optimal Transport

1 code implementation · 10 Jun 2022 · Brandon Amos, Samuel Cohen, Giulia Luise, Ievgen Redko

We study the use of amortized optimization to predict optimal transport (OT) maps from the input measures, which we call Meta OT.

Unbalanced CO-Optimal Transport

no code implementations · 30 May 2022 · Quang Huy Tran, Hicham Janati, Nicolas Courty, Rémi Flamary, Ievgen Redko, Pinar Demetci, Ritambhara Singh

With this result in hand, we provide empirical evidence of this robustness for the challenging tasks of heterogeneous domain adaptation with and without varying proportions of classes and simultaneous alignment of samples and features across single-cell measurements.

Domain Adaptation

Factored couplings in multi-marginal optimal transport via difference of convex programming

no code implementations · 1 Oct 2021 · Quang Huy Tran, Hicham Janati, Ievgen Redko, Rémi Flamary, Nicolas Courty

Optimal transport (OT) theory underlies many emerging machine learning (ML) methods that now solve a wide range of tasks such as generative modeling, transfer learning and information retrieval.

Information Retrieval · Retrieval +1

All of the Fairness for Edge Prediction with Optimal Transport

no code implementations · 30 Oct 2020 · Charlotte Laclau, Ievgen Redko, Manvi Choudhary, Christine Largeron

Machine learning and data mining algorithms have increasingly been used to support decision-making systems in areas of high societal importance such as healthcare, education, or security.

Attribute · Decision Making +1

Deep Neural Networks Are Congestion Games: From Loss Landscape to Wardrop Equilibrium and Beyond

no code implementations · 21 Oct 2020 · Nina Vesseron, Ievgen Redko, Charlotte Laclau

The theoretical analysis of deep neural networks (DNN) is arguably among the most challenging research directions in machine learning (ML) right now, as it requires scientists to lay novel statistical learning foundations to explain their behaviour in practice.

Improving Few-Shot Learning through Multi-task Representation Learning Theory

1 code implementation · 5 Oct 2020 · Quentin Bouniot, Ievgen Redko, Romaric Audigier, Angélique Loesch, Amaury Habrard

In this paper, we consider the framework of multi-task representation (MTR) learning where the goal is to use source tasks to learn a representation that reduces the sample complexity of solving a target task.

Continual Learning · Few-Shot Learning +2

Putting Theory to Work: From Learning Bounds to Meta-Learning Algorithms

no code implementations · 28 Sep 2020 · Quentin Bouniot, Ievgen Redko, Romaric Audigier, Angélique Loesch, Amaury Habrard

To the best of our knowledge, this is the first contribution that puts the most recent learning bounds of meta-learning theory into practice for the popular task of few-shot classification.

Few-Shot Learning · Learning Theory

Rank-one partitioning: formalization, illustrative examples, and a new cluster enhancing strategy

no code implementations · 1 Sep 2020 · Charlotte Laclau, Franck Iutzeler, Ievgen Redko

In this paper, we introduce and formalize a rank-one partitioning learning paradigm unifying partitioning methods that summarize a data set with a single vector, from which the final clustering partition is derived.

Clustering · Denoising
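A generic instance of the rank-one paradigm described above (ours, not the strategy proposed in the paper) summarizes the data with the leading singular vector of the centered data matrix and thresholds it to obtain a partition:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two well-separated blobs; a single summary vector should split them.
A = rng.normal([0.0] * 5, 0.5, size=(100, 5))
B = rng.normal([3.0] * 5, 0.5, size=(100, 5))
X = np.vstack([A, B])

# Summarize the data set with one vector: the leading left singular
# vector of the centered data matrix (one score per data point).
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U[:, 0] * s[0]

# Derive the final partition by thresholding the summary vector at zero.
labels = (scores > 0).astype(int)
print(np.bincount(labels))   # two balanced clusters
```

The sign of a singular vector is arbitrary, so which blob gets label 0 is not determined; only the split itself is.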

A survey on domain adaptation theory: learning bounds and theoretical guarantees

no code implementations · 24 Apr 2020 · Ievgen Redko, Emilie Morvant, Amaury Habrard, Marc Sebban, Younès Bennani

Despite the large number of different transfer learning scenarios, the main objective of this survey is to provide an overview of the state-of-the-art theoretical results in a specific, and arguably the most popular, sub-field of transfer learning, called domain adaptation.

BIG-bench Machine Learning · Domain Adaptation +1

CO-Optimal Transport

1 code implementation · NeurIPS 2020 · Ievgen Redko, Titouan Vayer, Rémi Flamary, Nicolas Courty

Optimal transport (OT) is a powerful geometric and probabilistic tool for finding correspondences and measuring similarity between two distributions.

Clustering · Data Summarization +1

Revisiting (ε, γ, τ)-similarity learning for domain adaptation

no code implementations · NeurIPS 2018 · Sofiane Dhouib, Ievgen Redko

Similarity learning is an active research area in machine learning that tackles the problem of finding a similarity function tailored to an observable data sample in order to achieve efficient classification.

Domain Adaptation · General Classification

Feature Selection for Unsupervised Domain Adaptation using Optimal Transport

no code implementations · 28 Jun 2018 · Léo Gautheron, Ievgen Redko, Carole Lartizien

In this paper, we propose a new feature selection method for unsupervised domain adaptation based on the emerging theory of optimal transport.

Feature Selection · Unsupervised Domain Adaptation
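A much-simplified sketch of this idea (not the paper's OT-based criterion) scores each feature by the 1-D Wasserstein distance between its source and target marginals, then keeps the least-shifted features:

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
n = 1000
# Toy domains: feature 0 shifts across domains, while features 1 and 2
# keep the same marginal distribution.
source = np.column_stack([rng.normal(0, 1, n), rng.normal(0, 1, n), rng.uniform(-1, 1, n)])
target = np.column_stack([rng.normal(3, 1, n), rng.normal(0, 1, n), rng.uniform(-1, 1, n)])

# Score each feature by the 1-D Wasserstein distance between its
# source and target marginals.
shift = [wasserstein_distance(source[:, j], target[:, j]) for j in range(3)]

# Keep the two most transferable (least shifted) features.
keep = np.argsort(shift)[:2]
print(shift, keep)
```

A per-feature 1-D proxy like this ignores dependencies between features, which is part of what a full OT formulation over the joint distribution can capture.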

Cross-lingual Document Retrieval using Regularized Wasserstein Distance

1 code implementation · 11 May 2018 · Georgios Balikas, Charlotte Laclau, Ievgen Redko, Massih-Reza Amini

Many information retrieval algorithms rely on the notion of a good distance that makes it possible to efficiently compare objects of a different nature.

Information Retrieval · Retrieval

Optimal Transport for Multi-source Domain Adaptation under Target Shift

3 code implementations · 13 Mar 2018 · Ievgen Redko, Nicolas Courty, Rémi Flamary, Devis Tuia

In this paper, we propose to tackle the problem of reducing discrepancies between multiple domains, referred to as multi-source domain adaptation, and consider it under the target shift assumption: all domains share the same output classes of a classification problem, but the label proportions differ across them.

Domain Adaptation · Image Segmentation +1

Co-clustering through Optimal Transport

no code implementations · ICML 2017 · Charlotte Laclau, Ievgen Redko, Basarab Matei, Younès Bennani, Vincent Brault

The proposed method uses the entropy regularized optimal transport between empirical measures defined on data instances and features in order to obtain an estimated joint probability density function represented by the optimal coupling matrix.

Clustering · Variational Inference
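The entropy-regularized coupling the abstract above refers to can be computed with Sinkhorn iterations; a minimal NumPy sketch on toy 1-D data (our own example, not the co-clustering method itself):

```python
import numpy as np

def sinkhorn(a, b, C, reg, n_iter=5000):
    """Entropy-regularized OT: returns the optimal coupling matrix,
    a joint probability table whose marginals are a and b."""
    K = np.exp(-C / reg)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)     # scale columns to match marginal b
        u = a / (K @ v)       # scale rows to match marginal a
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
x = np.sort(rng.normal(size=8))
y = np.sort(rng.normal(size=8))
C = (x[:, None] - y[None, :]) ** 2           # squared-distance cost
a = np.full(8, 1 / 8)                        # uniform empirical measures
b = np.full(8, 1 / 8)

P = sinkhorn(a, b, C, reg=0.1)
# The coupling is a valid joint density: its marginals match a and b.
print(P.sum(axis=1).round(3), P.sum(axis=0).round(3))
```

In the co-clustering setting the two empirical measures live on data instances and features respectively, and the estimated joint density (the coupling matrix) is what drives the partition.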

Kernel Alignment for Unsupervised Transfer Learning

no code implementations · 20 Oct 2016 · Ievgen Redko, Younès Bennani

The human ability to extrapolate previously gained knowledge to other domains has inspired a new family of machine learning methods called transfer learning.

Transfer Learning

Theoretical Analysis of Domain Adaptation with Optimal Transport

no code implementations · 14 Oct 2016 · Ievgen Redko, Amaury Habrard, Marc Sebban

Domain adaptation (DA) is an important and emerging field of machine learning that tackles the problem arising when the distributions of training (source domain) and test (target domain) data are similar but different.

Domain Adaptation
