Search Results for author: Rémi Flamary

Found 53 papers, 27 papers with code

Weakly supervised covariance matrices alignment through Stiefel matrices estimation for MEG applications

no code implementations24 Jan 2024 Antoine Collas, Rémi Flamary, Alexandre Gramfort

This paper introduces a novel domain adaptation technique for time series data, called Mixing model Stiefel Adaptation (MSA), specifically addressing the challenge of limited labeled signals in the target dataset.

Domain Adaptation Time Series

Interpolating between Clustering and Dimensionality Reduction with Gromov-Wasserstein

no code implementations5 Oct 2023 Hugues van Assel, Cédric Vincent-Cuaz, Titouan Vayer, Rémi Flamary, Nicolas Courty

We present a versatile adaptation of existing dimensionality reduction (DR) objectives, enabling the simultaneous reduction of both sample and feature sizes.

Clustering Dimensionality Reduction

Properties of Discrete Sliced Wasserstein Losses

no code implementations19 Jul 2023 Eloi Tanguy, Rémi Flamary, Julie Delon

We investigate the regularity and optimisation properties of this energy $\mathcal{E}$, as well as its Monte-Carlo approximation $\mathcal{E}_p$ (estimating the expectation in SW using only $p$ samples), and show convergence results on the critical points of $\mathcal{E}_p$ to those of $\mathcal{E}$, as well as an almost-sure uniform convergence and a uniform Central Limit result on the process $\mathcal{E}_p(Y)$.

Domain Adaptation
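
As an illustrative aside, the Monte-Carlo estimator studied above (averaging one-dimensional Wasserstein distances over $p$ random projections) can be computed with the POT library; this is a minimal sketch assuming ot.sliced_wasserstein_distance is available, not the exact experimental setup of the paper:

    import numpy as np
    import ot  # POT: Python Optimal Transport

    rng = np.random.RandomState(0)
    X = rng.randn(100, 3)        # support of the first discrete measure
    Y = rng.randn(100, 3) + 1.0  # support of the second measure (the variable Y of the energy)

    p = 50  # number of random projections used in the Monte-Carlo approximation
    sw = ot.sliced_wasserstein_distance(X, Y, n_projections=p, seed=0)
    print(sw)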

Convolutional Monge Mapping Normalization for learning on sleep data

1 code implementation30 May 2023 Théo Gnassounou, Rémi Flamary, Alexandre Gramfort

In many machine learning applications on signals and biomedical data, especially electroencephalogram (EEG), one major challenge is the variability of the data across subjects, sessions, and hardware devices.

EEG Test-time Adaptation

Entropic Wasserstein Component Analysis

1 code implementation9 Mar 2023 Antoine Collas, Titouan Vayer, Rémi Flamary, Arnaud Breloy

Dimension reduction (DR) methods provide systematic approaches for analyzing high-dimensional data.

Dimensionality Reduction

Template based Graph Neural Network with Optimal Transport Distances

1 code implementation31 May 2022 Cédric Vincent-Cuaz, Rémi Flamary, Marco Corneli, Titouan Vayer, Nicolas Courty

Current Graph Neural Network (GNN) architectures generally rely on two important components: node feature embedding through message passing, and aggregation with a specialized form of pooling.

Graph Classification Graph Matching

Unbalanced CO-Optimal Transport

no code implementations30 May 2022 Quang Huy Tran, Hicham Janati, Nicolas Courty, Rémi Flamary, Ievgen Redko, Pinar Demetci, Ritambhara Singh

With this result in hand, we provide empirical evidence of this robustness for the challenging tasks of heterogeneous domain adaptation with and without varying proportions of classes and simultaneous alignment of samples and features across single-cell measurements.

Domain Adaptation

Learning to Predict Graphs with Fused Gromov-Wasserstein Barycenters

1 code implementation8 Feb 2022 Luc Brogat-Motte, Rémi Flamary, Céline Brouard, Juho Rousu, Florence d'Alché-Buc

This paper introduces a novel and generic framework to solve the flagship task of supervised labeled graph prediction by leveraging Optimal Transport tools.

regression

Semi-relaxed Gromov-Wasserstein divergence with applications on graphs

1 code implementation6 Oct 2021 Cédric Vincent-Cuaz, Rémi Flamary, Marco Corneli, Titouan Vayer, Nicolas Courty

To this end, the Gromov-Wasserstein (GW) distance, based on Optimal Transport (OT), has proven to be successful in handling the specific nature of the associated objects.

Dictionary Learning
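
A minimal sketch of a semi-relaxed Gromov-Wasserstein computation between two graphs, assuming a recent POT version exposing ot.gromov.semirelaxed_gromov_wasserstein (only the source marginal is fixed; the target marginal is left free):

    import numpy as np
    import ot

    rng = np.random.RandomState(0)
    C1 = rng.rand(6, 6); C1 = (C1 + C1.T) / 2  # structure matrix of the first graph
    C2 = rng.rand(4, 4); C2 = (C2 + C2.T) / 2  # structure matrix of the second graph
    p = ot.unif(6)  # fixed source node weights

    T = ot.gromov.semirelaxed_gromov_wasserstein(C1, C2, p, loss_fun='square_loss')
    print(T.sum(axis=0))  # induced target weights, not constrained to be uniform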

Factored couplings in multi-marginal optimal transport via difference of convex programming

no code implementations1 Oct 2021 Quang Huy Tran, Hicham Janati, Ievgen Redko, Rémi Flamary, Nicolas Courty

Optimal transport (OT) theory underlies many emerging machine learning (ML) methods that nowadays solve a wide range of tasks such as generative modeling, transfer learning, and information retrieval.

Information Retrieval Retrieval +1

Semi-relaxed Gromov-Wasserstein divergence and applications on graphs

no code implementations ICLR 2022 Cédric Vincent-Cuaz, Rémi Flamary, Marco Corneli, Titouan Vayer, Nicolas Courty

To this end, the Gromov-Wasserstein (GW) distance, based on Optimal Transport (OT), has proven to be successful in handling the specific nature of the associated objects.

Dictionary Learning

Unbalanced Optimal Transport through Non-negative Penalized Linear Regression

1 code implementation NeurIPS 2021 Laetitia Chapel, Rémi Flamary, Haoran Wu, Cédric Févotte, Gilles Gasso

In particular, we consider majorization-minimization, which leads in our setting to efficient multiplicative updates for a variety of penalties.

regression
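
A hedged sketch of the majorization-minimization solver with multiplicative updates, assuming POT's ot.unbalanced.mm_unbalanced (which follows this non-negative penalized regression formulation; the div argument selects the marginal penalty):

    import numpy as np
    import ot

    rng = np.random.RandomState(0)
    n, m = 20, 30
    a, b = ot.unif(n), ot.unif(m)                # marginals, only softly enforced
    M = ot.dist(rng.rand(n, 2), rng.rand(m, 2))  # ground cost matrix

    G = ot.unbalanced.mm_unbalanced(a, b, M, reg_m=1.0, div='kl')  # multiplicative updates
    print(G.sum())  # total transported mass, in general below 1 for unbalanced OT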

Unbalanced minibatch Optimal Transport; applications to Domain Adaptation

2 code implementations5 Mar 2021 Kilian Fatras, Thibault Séjourné, Nicolas Courty, Rémi Flamary

Optimal transport distances have found many applications in machine learning for their capacity to compare non-parametric probability distributions.

Domain Adaptation

Online Graph Dictionary Learning

1 code implementation12 Feb 2021 Cédric Vincent-Cuaz, Titouan Vayer, Rémi Flamary, Marco Corneli, Nicolas Courty

Dictionary learning is a key tool for representation learning that explains the data as a linear combination of a few basic elements.

Dictionary Learning Graph Classification +2

Minibatch optimal transport distances; analysis and applications

2 code implementations5 Jan 2021 Kilian Fatras, Younes Zine, Szymon Majewski, Rémi Flamary, Rémi Gribonval, Nicolas Courty

We notably argue that the minibatch strategy comes with appealing properties such as unbiased estimators, gradients and a concentration bound around the expectation, but also with limits: the minibatch OT is not a distance.

Representation Transfer by Optimal Transport

no code implementations13 Jul 2020 Xuhong Li, Yves Grandvalet, Rémi Flamary, Nicolas Courty, Dejing Dou

We use optimal transport to quantify the match between two representations, yielding a distance that embeds some invariances inherent to the representation of deep networks.

Knowledge Distillation Model Compression +1

Provably Convergent Working Set Algorithm for Non-Convex Regularized Regression

no code implementations24 Jun 2020 Alain Rakotomamonjy, Rémi Flamary, Gilles Gasso, Joseph Salmon

Owing to their statistical properties, non-convex sparse regularizers have attracted much interest for estimating a sparse linear model from high dimensional data.

regression

Multi-source Domain Adaptation via Weighted Joint Distributions Optimal Transport

1 code implementation23 Jun 2020 Rosanna Turrisi, Rémi Flamary, Alain Rakotomamonjy, Massimiliano Pontil

The problem of domain adaptation on an unlabeled target dataset using knowledge from multiple labeled source datasets is becoming increasingly important.

Domain Adaptation

Optimal Transport for Conditional Domain Matching and Label Shift

1 code implementation15 Jun 2020 Alain Rakotomamonjy, Rémi Flamary, Gilles Gasso, Mokhtar Z. Alaya, Maxime Berar, Nicolas Courty

We address the problem of unsupervised domain adaptation under the setting of generalized target shift (joint class-conditional and label shifts).

Unsupervised Domain Adaptation

CO-Optimal Transport

1 code implementation NeurIPS 2020 Ievgen Redko, Titouan Vayer, Rémi Flamary, Nicolas Courty

Optimal transport (OT) is a powerful geometric and probabilistic tool for finding correspondences and measuring similarity between two distributions.

Clustering Data Summarization +1
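
An illustrative sketch of CO-Optimal Transport coupling both samples and features of two heterogeneous datasets, assuming POT's ot.coot module (returned objects and defaults may differ across versions):

    import numpy as np
    import ot

    rng = np.random.RandomState(0)
    X = rng.rand(20, 5)  # first dataset: 20 samples, 5 features
    Y = rng.rand(30, 8)  # second dataset: 30 samples, 8 features

    # One coupling over samples and one over features, estimated jointly
    pi_samples, pi_features = ot.coot.co_optimal_transport(X, Y)
    print(pi_samples.shape, pi_features.shape)  # (20, 30) and (5, 8)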

Learning with minibatch Wasserstein: asymptotic and gradient properties

3 code implementations9 Oct 2019 Kilian Fatras, Younes Zine, Rémi Flamary, Rémi Gribonval, Nicolas Courty

Optimal transport distances are powerful tools to compare probability distributions and have found many applications in machine learning.
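
A minimal sketch of the minibatch strategy itself: averaging exact OT costs over random minibatch pairs with POT (illustrative only; the paper's analysis of bias, gradients, and concentration is not reproduced here):

    import numpy as np
    import ot

    rng = np.random.RandomState(0)
    X = rng.randn(1000, 2)
    Y = rng.randn(1000, 2) + 2.0

    def minibatch_ot(X, Y, batch_size=64, n_batches=20):
        # Average exact OT cost over randomly drawn minibatch pairs
        costs = []
        for _ in range(n_batches):
            xb = X[rng.choice(len(X), batch_size, replace=False)]
            yb = Y[rng.choice(len(Y), batch_size, replace=False)]
            M = ot.dist(xb, yb)
            costs.append(ot.emd2(ot.unif(batch_size), ot.unif(batch_size), M))
        return np.mean(costs)

    print(minibatch_ot(X, Y))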

Large scale Lasso with windowed active set for convolutional spike sorting

no code implementations28 Jun 2019 Laurent Dragoni, Rémi Flamary, Karim Lounici, Patricia Reynaud-Bouret

Spike sorting is a fundamental preprocessing step in neuroscience that is central to accessing simultaneous but distinct neuronal activities and, therefore, to better understanding the animal or even human brain.

Spike Sorting

Concentration bounds for linear Monge mapping estimation and optimal transport domain adaptation

no code implementations24 May 2019 Rémi Flamary, Karim Lounici, André Ferrari

This article investigates the quality of the estimator of the linear Monge mapping between distributions.

Domain Adaptation
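
For intuition, the linear Monge map analysed in this paper has a well-known closed form between Gaussian approximations of the two distributions; below is a minimal numpy sketch of its empirical plug-in estimator (not the paper's full statistical analysis):

    import numpy as np
    from scipy.linalg import sqrtm

    rng = np.random.RandomState(0)
    Xs = rng.randn(500, 2) @ np.array([[1.0, 0.3], [0.0, 0.5]]) + np.array([1.0, 0.0])
    Xt = rng.randn(500, 2) @ np.array([[0.8, -0.2], [0.0, 1.2]]) + np.array([-2.0, 3.0])

    # T(x) = m_t + A (x - m_s), with A = Cs^{-1/2} (Cs^{1/2} Ct Cs^{1/2})^{1/2} Cs^{-1/2}
    ms, mt = Xs.mean(0), Xt.mean(0)
    Cs, Ct = np.cov(Xs.T), np.cov(Xt.T)
    Cs12 = np.real(sqrtm(Cs))
    Cs12inv = np.linalg.inv(Cs12)
    A = np.real(Cs12inv @ sqrtm(Cs12 @ Ct @ Cs12) @ Cs12inv)
    Xs_mapped = (Xs - ms) @ A.T + mt  # source samples mapped toward the target distribution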

Sliced Gromov-Wasserstein

1 code implementation NeurIPS 2019 Titouan Vayer, Rémi Flamary, Romain Tavenard, Laetitia Chapel, Nicolas Courty

Recently used in various machine learning contexts, the Gromov-Wasserstein distance (GW) allows for comparing distributions whose supports do not necessarily lie in the same metric space.

Wasserstein Adversarial Regularization (WAR) on label noise

1 code implementation8 Apr 2019 Kilian Fatras, Bharath Bhushan Damodaran, Sylvain Lobry, Rémi Flamary, Devis Tuia, Nicolas Courty

Noisy labels often occur in vision datasets, especially when they are obtained from crowdsourcing or Web scraping.

Semantic Segmentation

Fused Gromov-Wasserstein distance for structured objects: theoretical foundations and mathematical properties

1 code implementation7 Nov 2018 Titouan Vayer, Laetitia Chapel, Rémi Flamary, Romain Tavenard, Nicolas Courty

Optimal transport theory has recently found many applications in machine learning thanks to its capacity for comparing various machine learning objects considered as distributions.

BIG-bench Machine Learning

Optimal Transport for structured data with application on graphs

2 code implementations23 May 2018 Titouan Vayer, Laetitia Chapel, Rémi Flamary, Romain Tavenard, Nicolas Courty

This work considers the problem of computing distances between structured objects such as undirected graphs, seen as probability distributions in a specific metric space.

Clustering Graph Classification +2
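
A minimal sketch of comparing two attributed graphs with the Fused Gromov-Wasserstein distance in POT (ot.gromov.fused_gromov_wasserstein; the graph construction and the alpha trade-off are illustrative choices, not the paper's exact protocol):

    import numpy as np
    import ot

    rng = np.random.RandomState(0)
    n1, n2, d = 5, 7, 3
    C1 = rng.rand(n1, n1); C1 = (C1 + C1.T) / 2  # intra-graph structure (e.g. shortest paths)
    C2 = rng.rand(n2, n2); C2 = (C2 + C2.T) / 2
    F1, F2 = rng.rand(n1, d), rng.rand(n2, d)    # node feature matrices
    p, q = ot.unif(n1), ot.unif(n2)              # uniform node weights

    M = ot.dist(F1, F2)  # feature cost across graphs
    T = ot.gromov.fused_gromov_wasserstein(M, C1, C2, p, q, loss_fun='square_loss', alpha=0.5)
    print(T.shape)  # (5, 7) soft matching between the nodes of the two graphs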

DeepJDOT: Deep Joint Distribution Optimal Transport for Unsupervised Domain Adaptation

4 code implementations ECCV 2018 Bharath Bhushan Damodaran, Benjamin Kellenberger, Rémi Flamary, Devis Tuia, Nicolas Courty

In computer vision, one is often confronted with problems of domain shifts, which occur when one applies a classifier trained on a source dataset to target data sharing similar characteristics (e.g. same classes), but also different latent data structures (e.g. different acquisition conditions).

Unsupervised Domain Adaptation

Optimal Transport for Multi-source Domain Adaptation under Target Shift

3 code implementations13 Mar 2018 Ievgen Redko, Nicolas Courty, Rémi Flamary, Devis Tuia

In this paper, we propose to tackle the problem of reducing discrepancies between multiple domains referred to as multi-source domain adaptation and consider it under the target shift assumption: in all domains we aim to solve a classification problem with the same output classes, but with labels' proportions differing across them.

Domain Adaptation Image Segmentation +1

Distance Measure Machines

no code implementations1 Mar 2018 Alain Rakotomamonjy, Abraham Traoré, Maxime Berar, Rémi Flamary, Nicolas Courty

This paper presents a distance-based discriminative framework for learning with probability distributions.

On reducing the communication cost of the diffusion LMS algorithm

no code implementations30 Nov 2017 Ibrahim El Khalil Harrane, Rémi Flamary, Cédric Richard

While these data may be processed in a centralized manner, it is often more suitable to consider distributed strategies such as diffusion as they are scalable and can handle large amounts of data by distributing tasks over networked agents.

Large-Scale Optimal Transport and Mapping Estimation

2 code implementations7 Nov 2017 Vivien Seguy, Bharath Bhushan Damodaran, Rémi Flamary, Nicolas Courty, Antoine Rolet, Mathieu Blondel

We prove two theoretical stability results of regularized OT which show that our estimations converge to the OT plan and Monge map between the underlying continuous measures.

Domain Adaptation

Learning Wasserstein Embeddings

1 code implementation ICLR 2018 Nicolas Courty, Rémi Flamary, Mélanie Ducoffe

Our goal is to alleviate this problem by providing an approximation mechanism that allows us to break its inherent complexity.

Dimensionality Reduction Domain Adaptation

Joint Distribution Optimal Transportation for Domain Adaptation

2 code implementations NeurIPS 2017 Nicolas Courty, Rémi Flamary, Amaury Habrard, Alain Rakotomamonjy

This paper deals with the unsupervised domain adaptation problem, where one wants to estimate a prediction function $f$ in a given target domain without any labeled sample by exploiting the knowledge available from a source domain where labels are known.

Unsupervised Domain Adaptation

Astronomical image reconstruction with convolutional neural networks

2 code implementations14 Dec 2016 Rémi Flamary

State of the art methods in astronomical image reconstruction rely on the resolution of a regularized or constrained optimization problem.

Astronomy Image Reconstruction

Mapping Estimation for Discrete Optimal Transport

no code implementations NeurIPS 2016 Michaël Perrot, Nicolas Courty, Rémi Flamary, Amaury Habrard

Most computational approaches to Optimal Transport use the Kantorovich relaxation of the problem to learn a probabilistic coupling $\gamma$ but do not address the problem of learning the underlying transport map $T$ linked to the original Monge problem.

Domain Adaptation
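
A hedged sketch of joint coupling and mapping estimation in the spirit of this paper, assuming POT's ot.da.MappingTransport class (the kernel and regularization values are illustrative):

    import numpy as np
    import ot

    rng = np.random.RandomState(0)
    Xs = rng.randn(100, 2)       # source samples
    Xt = rng.randn(80, 2) + 3.0  # target samples

    # Jointly estimates a coupling and an explicit (here kernelized) map approximating the Monge map
    mapping = ot.da.MappingTransport(kernel="gaussian", mu=1.0, eta=1e-2, bias=True)
    mapping.fit(Xs=Xs, Xt=Xt)
    Xs_mapped = mapping.transform(Xs=Xs)  # out-of-sample mapping of source points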

Optimal spectral transportation with application to music transcription

1 code implementation NeurIPS 2016 Rémi Flamary, Cédric Févotte, Nicolas Courty, Valentin Emiya

Many spectral unmixing methods rely on the non-negative decomposition of spectral data onto a dictionary of spectral templates.

Music Transcription

Wasserstein Discriminant Analysis

1 code implementation29 Aug 2016 Rémi Flamary, Marco Cuturi, Nicolas Courty, Alain Rakotomamonjy

Wasserstein Discriminant Analysis (WDA) is a new supervised method that can improve classification of high-dimensional data by computing a suitable linear map onto a lower dimensional subspace.
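
A minimal sketch of Wasserstein Discriminant Analysis as exposed in POT's optional ot.dr module (which requires autograd and pymanopt; the data and hyperparameters below are assumptions for illustration):

    import numpy as np
    from ot.dr import wda  # optional POT module

    rng = np.random.RandomState(0)
    n = 100
    X = np.vstack([rng.randn(n, 10), rng.randn(n, 10) + 2.0])
    y = np.hstack([np.zeros(n), np.ones(n)])

    # Learn a 2-dimensional linear subspace separating classes in a regularized Wasserstein sense
    P, proj = wda(X, y, p=2, reg=1.0, k=10, maxiter=50)
    X_low = proj(X)  # samples projected onto the learned subspace
    print(X_low.shape)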

Multiclass feature learning for hyperspectral image classification: sparse and hierarchical solutions

no code implementations23 Jun 2016 Devis Tuia, Rémi Flamary, Nicolas Courty

In this paper, we tackle the question of discovering an effective set of spatial filters to solve hyperspectral classification problems.

Classification General Classification +1

Importance sampling strategy for non-convex randomized block-coordinate descent

no code implementations23 Jun 2016 Rémi Flamary, Alain Rakotomamonjy, Gilles Gasso

As the number of samples and dimensionality of optimization problems related to statistics and machine learning explode, block coordinate descent algorithms have gained popularity since they reduce the original problem to several smaller ones.

Generalized conditional gradient: analysis of convergence and applications

no code implementations22 Oct 2015 Alain Rakotomamonjy, Rémi Flamary, Nicolas Courty

The objective of this technical report is to provide additional results on the generalized conditional gradient methods introduced by Bredies et al. [BLM05].

Optimal Transport for Domain Adaptation

no code implementations2 Jul 2015 Nicolas Courty, Rémi Flamary, Devis Tuia, Alain Rakotomamonjy

Domain adaptation from one data space (or domain) to another is one of the most challenging tasks of modern data analytics.

Domain Adaptation
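
A minimal sketch of the OT-based domain adaptation pipeline this line of work popularized, using POT's ot.da module (entropic regularization here; class-label regularized variants such as ot.da.SinkhornLpl1Transport follow the same fit/transform pattern):

    import numpy as np
    import ot

    rng = np.random.RandomState(0)
    Xs = rng.randn(100, 2)
    ys = (Xs[:, 0] > 0).astype(int)                # source labels
    Xt = rng.randn(100, 2) + np.array([2.0, 1.0])  # shifted, unlabeled target domain

    # Transport source samples onto the target distribution, then train any classifier
    # on the transported (still labeled) source samples
    ot_da = ot.da.SinkhornTransport(reg_e=1.0)
    ot_da.fit(Xs=Xs, ys=ys, Xt=Xt)
    Xs_adapted = ot_da.transform(Xs=Xs)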

Mixed-norm Regularization for Brain Decoding

no code implementations14 Mar 2014 Rémi Flamary, Nisrine Jrad, Ronald Phlypo, Marco Congedo, Alain Rakotomamonjy

This framework is extended to the multi-task learning situation where several similar classification tasks related to different subjects are learned simultaneously.

Brain Decoding ERP +2

Decoding finger movements from ECoG signals using switching linear models

no code implementations Front. Neurosci., Sec. Neuroprosthetics 2012 Rémi Flamary, Alain Rakotomamonjy

Reflecting the BCI community's increasing interest in this problem, the fourth BCI Competition provides a dataset whose aim is to predict individual finger movements from ECoG signals.

Brain Decoding
