1 code implementation • 13 Jun 2024 • Thibaut Issenhuth, Sangchul Lee, Ludovic Dos Santos, Jean-Yves Franceschi, Chansoo Kim, Alain Rakotomamonjy
The former relies on the true velocity field of the corresponding differential equation, approximated by a pre-trained neural network.
no code implementations • 4 Apr 2024 • Mokhtar Z. Alaya, Alain Rakotomamonjy, Maxime Berar, Gilles Gasso
We particularly focus on the Gaussian smoothed sliced Wasserstein distance and prove that it converges with a rate $O(n^{-1/2})$.
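A minimal numerical sketch of this distance under the usual construction: smooth each empirical measure by adding Gaussian noise to its samples (equivalent, after projection onto a unit direction, to convolving each sliced measure with a 1D Gaussian), then estimate the sliced Wasserstein distance by Monte Carlo over random projections. The function names, the equal-sample-size assumption, and the defaults are ours, not the paper's:

```python
import numpy as np

def sliced_wasserstein(X, Y, n_proj=100, rng=None):
    """Monte Carlo estimate of the sliced 2-Wasserstein distance between
    two empirical measures with the same number of samples."""
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    # random directions on the unit sphere
    theta = rng.standard_normal((n_proj, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # in 1D, W2 between equal-size empirical measures matches sorted samples
    px = np.sort(X @ theta.T, axis=0)
    py = np.sort(Y @ theta.T, axis=0)
    return np.sqrt(np.mean((px - py) ** 2))

def gaussian_smoothed_sw(X, Y, sigma=1.0, n_proj=100, rng=None):
    """Smooth each measure by convolution with N(0, sigma^2 I) --
    approximated here by adding noise to the samples -- then take SW2."""
    rng = np.random.default_rng(rng)
    Xs = X + sigma * rng.standard_normal(X.shape)
    Ys = Y + sigma * rng.standard_normal(Y.shape)
    return sliced_wasserstein(Xs, Ys, n_proj, rng)
```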
1 code implementation • 13 Dec 2023 • Ilana Sebag, Muni Sreenivas Pydi, Jean-Yves Franceschi, Alain Rakotomamonjy, Mike Gartrell, Jamal Atif, Alexandre Allauzen
In this paper, we introduce a novel differentially private generative modeling approach based on a gradient flow in the space of probability measures.
no code implementations • 3 Oct 2023 • Alain Rakotomamonjy, Kimia Nadjahi, Liva Ralaivola
We introduce a principled way of computing the Wasserstein distance between two distributions in a federated manner.
no code implementations • 7 Jun 2023 • Skander Karkar, Patrick Gallinari, Alain Rakotomamonjy
We propose a detector of adversarial samples that is based on the view of neural networks as discrete dynamic systems.
1 code implementation • NeurIPS 2023 • Jean-Yves Franceschi, Mike Gartrell, Ludovic Dos Santos, Thibaut Issenhuth, Emmanuel de Bézenac, Mickaël Chen, Alain Rakotomamonjy
Particle-based deep generative models, such as gradient flows and score-based diffusion models, have recently gained traction thanks to their striking performance.
2 code implementations • 10 Mar 2023 • Clément Bonet, Benoît Malézieux, Alain Rakotomamonjy, Lucas Drumetz, Thomas Moreau, Matthieu Kowalski, Nicolas Courty
When dealing with electro or magnetoencephalography records, many supervised prediction tasks are solved by working with covariance matrices to summarize the signals.
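As a concrete illustration of the covariance-descriptor pipeline, a minimal sketch (the array layout and the shrinkage default are our assumptions, not the paper's):

```python
import numpy as np

def epoch_covariances(epochs, shrinkage=0.05):
    """One covariance matrix per epoch; `epochs` has shape
    (n_epochs, n_channels, n_times). Shrinkage toward a scaled identity
    keeps the estimates symmetric positive definite."""
    n_epochs, n_channels, n_times = epochs.shape
    covs = np.empty((n_epochs, n_channels, n_channels))
    for i, x in enumerate(epochs):
        x = x - x.mean(axis=1, keepdims=True)   # center each channel
        c = (x @ x.T) / (n_times - 1)           # sample covariance
        covs[i] = ((1 - shrinkage) * c
                   + shrinkage * (np.trace(c) / n_channels) * np.eye(n_channels))
    return covs
```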
no code implementations • 30 Jan 2023 • Hugo Lerogeron, Romain Picot-Clemente, Alain Rakotomamonjy, Laurent Heutte
Dynamic Time Warping (DTW) is a widely used algorithm for measuring the similarity between two time series.
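For reference, the textbook dynamic-programming recursion behind DTW, sketched for two 1D sequences with absolute difference as the local cost:

```python
import numpy as np

def dtw(a, b):
    """Classic O(len(a) * len(b)) dynamic-programming DTW distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]
```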
no code implementations • 26 Jan 2023 • Alain Rakotomamonjy, Maxime Vono, Hamlet Jesse Medina Ruiz, Liva Ralaivola
Most personalised federated learning (FL) approaches assume that the raw data of all clients are defined in a common subspace, i.e., all clients store their data according to the same schema.
1 code implementation • 29 Sep 2022 • Yuan Yin, Matthieu Kirchmeyer, Jean-Yves Franceschi, Alain Rakotomamonjy, Patrick Gallinari
Effective data-driven PDE forecasting methods often rely on fixed spatial and/or temporal discretizations.
3 code implementations • 27 Jun 2022 • Thomas Moreau, Mathurin Massias, Alexandre Gramfort, Pierre Ablin, Pierre-Antoine Bannier, Benjamin Charlier, Mathieu Dagréou, Tom Dupré La Tour, Ghislain Durif, Cassio F. Dantas, Quentin Klopfenstein, Johan Larsson, En Lai, Tanguy Lefort, Benoit Malézieux, Badr Moufad, Binh T. Nguyen, Alain Rakotomamonjy, Zaccharie Ramzi, Joseph Salmon, Samuel Vaiter
Numerical validation is at the core of machine learning research, as it allows one to assess the actual impact of new methods and to confirm the agreement between theory and practice.
1 code implementation • 7 Jun 2022 • Ruben Ohana, Kimia Nadjahi, Alain Rakotomamonjy, Liva Ralaivola
The Sliced-Wasserstein distance (SW) is a computationally efficient and theoretically grounded alternative to the Wasserstein distance.
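For reference, SW replaces the $d$-dimensional transport problem by an average of one-dimensional ones over random projection directions:
$$\mathrm{SW}_p^p(\mu, \nu) = \int_{\mathbb{S}^{d-1}} W_p^p\big(\theta^\star_{\#}\mu,\ \theta^\star_{\#}\nu\big)\, \mathrm{d}\sigma(\theta),$$
where $\theta^\star(x) = \langle \theta, x \rangle$ projects onto direction $\theta$ and $\sigma$ is the uniform distribution on the unit sphere; each one-dimensional $W_p$ has a closed form via quantile functions, which is the source of the computational efficiency.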
2 code implementations • 19 May 2022 • Alexandre Ramé, Matthieu Kirchmeyer, Thibaud Rahier, Alain Rakotomamonjy, Patrick Gallinari, Matthieu Cord
Standard neural networks struggle to generalize under distribution shifts in computer vision.
1 code implementation • 1 Feb 2022 • Matthieu Kirchmeyer, Yuan Yin, Jérémie Donà, Nicolas Baskiotis, Alain Rakotomamonjy, Patrick Gallinari
Data-driven approaches to modeling physical systems fail to generalize to unseen systems that share the same general dynamics as the learning domain but correspond to different physical contexts.
1 code implementation • ICLR 2022 • Matthieu Kirchmeyer, Alain Rakotomamonjy, Emmanuel de Bezenac, Patrick Gallinari
We consider the problem of unsupervised domain adaptation (UDA) between a source and a target domain under conditional and label shift, a.k.a. Generalized Target Shift (GeTarS).
no code implementations • 20 Oct 2021 • Alain Rakotomamonjy, Mokhtar Z. Alaya, Maxime Berar, Gilles Gasso
In this paper, we analyze the theoretical properties of this distance as well as those of generalized versions denoted as Gaussian smoothed sliced divergences.
1 code implementation • 16 Sep 2021 • Matthieu Kirchmeyer, Patrick Gallinari, Alain Rakotomamonjy, Amin Mantrach
Moreover, we compare the target error of our Adaptation-imputation framework and the "ideal" target error of a UDA classifier without missing target components.
1 code implementation • 5 Jul 2021 • Alain Rakotomamonjy, Liva Ralaivola
Developing machine learning methods that are privacy preserving is today a central topic of research, with huge practical impacts.
no code implementations • NeurIPS 2021 • Ruben Ohana, Hamlet J. Medina Ruiz, Julien Launay, Alessandro Cappelli, Iacopo Poli, Liva Ralaivola, Alain Rakotomamonjy
Optical Processing Units (OPUs) -- low-power photonic chips dedicated to large scale random projections -- have been used in previous work to train deep neural networks using Direct Feedback Alignment (DFA), an effective alternative to backpropagation.
no code implementations • 4 Jun 2021 • Mokhtar Z. Alaya, Gilles Gasso, Maxime Berar, Alain Rakotomamonjy
We provide a theoretical analysis of this new divergence, called the heterogeneous Wasserstein discrepancy (HWD), and we show that it preserves several interesting properties, including rotation-invariance.
no code implementations • NeurIPS Workshop LMCA 2020 • Lucas Anquetil, Mike Gartrell, Alain Rakotomamonjy, Ugo Tanielian, Clément Calauzènes
Through an evaluation on a real-world dataset, we show that our Wasserstein learning approach provides significantly improved predictive performance on a generative task compared to DPPs trained using MLE.
1 code implementation • ICML 2020 • Hachem Kadri, Stéphane Ayache, Riikka Huusari, Alain Rakotomamonjy, Liva Ralaivola
The trace regression model, a direct extension of the well-studied linear regression model, allows one to map matrices to real-valued outputs.
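Concretely, for matrix covariates $X_i$ and an unknown coefficient matrix $B^\star$, the model posits
$$y_i = \operatorname{tr}(X_i^\top B^\star) + \varepsilon_i = \langle X_i, B^\star \rangle + \varepsilon_i, \qquad i = 1, \dots, n,$$
recovering standard linear regression when the $X_i$ are vectors.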
no code implementations • 24 Jun 2020 • Alain Rakotomamonjy, Rémi Flamary, Gilles Gasso, Joseph Salmon
Owing to their statistical properties, non-convex sparse regularizers have attracted much interest for estimating a sparse linear model from high dimensional data.
1 code implementation • 23 Jun 2020 • Rosanna Turrisi, Rémi Flamary, Alain Rakotomamonjy, Massimiliano Pontil
The problem of domain adaptation on an unlabeled target dataset using knowledge from multiple labeled source datasets is becoming increasingly important.
1 code implementation • 15 Jun 2020 • Alain Rakotomamonjy, Rémi Flamary, Gilles Gasso, Mokhtar Z. Alaya, Maxime Berar, Nicolas Courty
We address the problem of unsupervised domain adaptation under the setting of generalized target shift (joint class-conditional and label shifts).
no code implementations • 19 Feb 2020 • Mokhtar Z. Alaya, Maxime Bérar, Gilles Gasso, Alain Rakotomamonjy
Unlike the Gromov-Wasserstein (GW) distance, which compares pairwise distances of elements from each distribution, we consider a method that embeds the metric measure spaces in a common Euclidean space and computes an optimal transport (OT) on the embedded distributions.
no code implementations • NeurIPS 2019 • Abraham Traore, Maxime Berar, Alain Rakotomamonjy
This paper introduces a new approach for the scalable Tucker decomposition problem.
no code implementations • 25 Sep 2019 • Matthieu Kirchmeyer, Patrick Gallinari, Alain Rakotomamonjy, Amin Mantrach
Motivated by practical applications, we consider unsupervised domain adaptation for classification problems, in the presence of missing data in the target domain.
1 code implementation • NeurIPS 2019 • Mokhtar Z. Alaya, Maxime Bérar, Gilles Gasso, Alain Rakotomamonjy
We introduce in this paper a novel strategy for efficiently approximating the Sinkhorn distance between two discrete measures.
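For context, the baseline being approximated is the textbook Sinkhorn fixed-point iteration; a minimal sketch (variable names and defaults are ours, and this is the plain scheme, not the paper's accelerated one):

```python
import numpy as np

def sinkhorn(a, b, C, reg=0.1, n_iter=500):
    """Entropy-regularized OT between discrete measures a and b with
    cost matrix C; returns the transport plan diag(u) K diag(v)."""
    K = np.exp(-C / reg)          # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)         # scale columns to match marginal b
        u = a / (K @ v)           # scale rows to match marginal a
    return u[:, None] * K * v[None, :]
```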
no code implementations • 16 Feb 2019 • Alain Rakotomamonjy, Gilles Gasso, Joseph Salmon
Leveraging the convexity of the Lasso problem, screening rules help accelerate solvers by discarding irrelevant variables during the optimization process.
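To make the idea concrete, here is one of the earliest such tests, the static SAFE rule of El Ghaoui et al., sketched for the Lasso $\min_b \frac{1}{2}\|y - Xb\|^2 + \lambda \|b\|_1$ (this is background, not the rules developed in this paper):

```python
import numpy as np

def safe_screening(X, y, lam):
    """SAFE rule: feature j is provably inactive at the optimum when
    |x_j^T y| < lam - ||x_j|| * ||y|| * (lam_max - lam) / lam_max,
    where lam_max is the smallest lam for which the solution is zero."""
    corr = np.abs(X.T @ y)
    lam_max = corr.max()
    radius = np.linalg.norm(y) * (lam_max - lam) / lam_max
    threshold = lam - np.linalg.norm(X, axis=0) * radius
    return corr >= threshold    # boolean mask of features that survive
```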
no code implementations • 1 Mar 2018 • Alain Rakotomamonjy, Abraham Traoré, Maxime Berar, Rémi Flamary, Nicolas Courty
This paper presents a distance-based discriminative framework for learning with probability distributions.
no code implementations • 2 Nov 2017 • Rafael Will M de Araujo, Roberto Hirata, Alain Rakotomamonjy
Traditional dictionary learning methods are based on a quadratic convex loss function and are thus sensitive to outliers.
2 code implementations • NeurIPS 2017 • Nicolas Courty, Rémi Flamary, Amaury Habrard, Alain Rakotomamonjy
This paper deals with the unsupervised domain adaptation problem, where one wants to estimate a prediction function $f$ in a given target domain without any labeled sample by exploiting the knowledge available from a source domain where labels are known.
1 code implementation • 29 Aug 2016 • Rémi Flamary, Marco Cuturi, Nicolas Courty, Alain Rakotomamonjy
Wasserstein Discriminant Analysis (WDA) is a new supervised method that can improve classification of high-dimensional data by computing a suitable linear map onto a lower dimensional subspace.
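Up to notation, WDA seeks an orthonormal projection $P$ that spreads classes apart while keeping them compact, both measured with regularized Wasserstein distances $W_\lambda$:
$$\max_{P^\top P = I}\ \frac{\sum_{c < c'} W_\lambda(P X^c, P X^{c'})}{\sum_{c} W_\lambda(P X^c, P X^c)},$$
where $X^c$ collects the samples of class $c$.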
no code implementations • 23 Jun 2016 • Rémi Flamary, Alain Rakotomamonjy, Gilles Gasso
As the number of samples and the dimensionality of optimization problems in statistics and machine learning explode, block coordinate descent algorithms have gained popularity, since they reduce the original problem to several smaller ones.
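As background, the simplest instance (blocks of size one) applied to the Lasso, where each one-dimensional subproblem has a soft-thresholding closed form; a minimal sketch assuming non-zero columns:

```python
import numpy as np

def cd_lasso(X, y, lam, n_iter=100):
    """Cyclic coordinate descent for min_b 0.5*||y - Xb||^2 + lam*||b||_1."""
    n, d = X.shape
    b = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0)
    r = y.copy()                          # residual y - X b
    for _ in range(n_iter):
        for j in range(d):
            r += X[:, j] * b[j]           # remove coordinate j from residual
            rho = X[:, j] @ r
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            r -= X[:, j] * b[j]           # put updated coordinate back
    return b
```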
no code implementations • 28 Oct 2015 • Hachem Kadri, Emmanuel Duflos, Philippe Preux, Stéphane Canu, Alain Rakotomamonjy, Julien Audiffren
In this paper we consider the problems of supervised classification and regression in the case where attributes and labels are functions: each datum is represented by a set of functions, and its label is also a function.
no code implementations • 22 Oct 2015 • Alain Rakotomamonjy, Rémi Flamary, Nicolas Courty
The objective of this technical report is to provide additional results on the generalized conditional gradient methods introduced by Bredies et al. [BLM05].
no code implementations • 20 Aug 2015 • Alain Rakotomamonjy, Gilles Gasso
This paper addresses the problem of audio scene classification and contributes to the state of the art by proposing a novel feature.
no code implementations • 2 Jul 2015 • Alain Rakotomamonjy, Rémi Flamary, Gilles Gasso
We introduce a novel algorithm for solving learning problems where both the loss function and the regularizer are non-convex but belong to the class of difference of convex (DC) functions.
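The classical DC algorithm (DCA) handles such objectives by linearizing the concave part: writing the objective as $f = g - h$ with $g$ and $h$ convex, it iterates
$$x_{k+1} \in \arg\min_x\ g(x) - \langle \nabla h(x_k), x \rangle,$$
so each step solves a convex surrogate that majorizes $f$ at $x_k$, and the objective value decreases monotonically.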
no code implementations • 2 Jul 2015 • Nicolas Courty, Rémi Flamary, Devis Tuia, Alain Rakotomamonjy
Domain adaptation from one data space (or domain) to another is one of the most challenging tasks of modern data analytics.
no code implementations • 14 Mar 2014 • Rémi Flamary, Nisrine Jrad, Ronald Phlypo, Marco Congedo, Alain Rakotomamonjy
This framework is extended to the multi-task learning situation where several similar classification tasks related to different subjects are learned simultaneously.
no code implementations • NeurIPS 2012 • Hachem Kadri, Alain Rakotomamonjy, Philippe Preux, Francis R. Bach
We study this problem in the case of kernel ridge regression for functional responses with an $\ell_r$-norm constraint on the combination coefficients.
no code implementations • Front. Neurosci., Sec. Neuroprosthetics 2012 • Rémi Flamary, Alain Rakotomamonjy
Reflecting the BCI community's growing interest in this problem, the fourth BCI Competition provides a dataset whose goal is to predict individual finger movements from ECoG signals.
no code implementations • NeurIPS 2008 • Yves Grandvalet, Alain Rakotomamonjy, Joseph Keshet, Stéphane Canu
We consider the problem of binary classification where the classifier may abstain instead of classifying each observation.
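A minimal sketch of the simplest abstention mechanism, a symmetric threshold on a real-valued classifier score (the threshold value here is an arbitrary choice of ours):

```python
import numpy as np

def classify_with_reject(scores, threshold=0.5):
    """Return +1 / -1 predictions, or 0 ('reject') when the score lies
    too close to the decision boundary to classify confidently."""
    scores = np.asarray(scores)
    labels = np.sign(scores)
    labels[np.abs(scores) < threshold] = 0   # abstain in the uncertainty band
    return labels
```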