no code implementations • 4 Apr 2024 • Mokhtar Z. Alaya, Alain Rakotomamonjy, Maxime Berar, Gilles Gasso
We particularly focus on the Gaussian smoothed sliced Wasserstein distance and prove that it converges with a rate $O(n^{-1/2})$.
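The sliced construction behind this distance can be sketched in a few lines: project both samples onto random directions, smooth each one-dimensional projection by adding Gaussian noise (convolution with a Gaussian kernel at the sample level), and average the closed-form 1D Wasserstein costs. This is a minimal Monte Carlo illustration assuming equal sample sizes, not the paper's estimator; the function name and defaults are ours.

```python
import numpy as np

def gaussian_smoothed_sliced_w2(X, Y, n_proj=50, sigma=0.5, seed=None):
    """Monte Carlo sketch of a Gaussian-smoothed sliced 2-Wasserstein
    distance between two samples of EQUAL size: random projections,
    Gaussian smoothing of each 1D projection, closed-form 1D W2."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    total = 0.0
    for _ in range(n_proj):
        theta = rng.normal(size=d)
        theta /= np.linalg.norm(theta)          # uniform direction on the sphere
        # 1D projections, smoothed by adding N(0, sigma^2) noise
        px = X @ theta + sigma * rng.normal(size=len(X))
        py = Y @ theta + sigma * rng.normal(size=len(Y))
        # 1D W2 between equal-size empirical measures: match order statistics
        total += np.mean((np.sort(px) - np.sort(py)) ** 2)
    return np.sqrt(total / n_proj)
```

The $O(n^{-1/2})$ rate in the abstract concerns this kind of empirical estimate: the error between the sample-based value and the population distance shrinks at the parametric rate.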
no code implementations • 12 Dec 2023 • Marwa Kechaou, Mokhtar Z. Alaya, Romain Hérault, Gilles Gasso
Adversarial learning baselines for domain adaptation (DA) in semantic segmentation remain underexplored in the semi-supervised framework.
no code implementations • 20 Oct 2021 • Alain Rakotomamonjy, Mokhtar Z. Alaya, Maxime Berar, Gilles Gasso
In this paper, we analyze the theoretical properties of this distance as well as those of generalized versions denoted as Gaussian smoothed sliced divergences.
no code implementations • 4 Jun 2021 • Mokhtar Z. Alaya, Gilles Gasso, Maxime Berar, Alain Rakotomamonjy
We provide a theoretical analysis of this new divergence, called $\textit{heterogeneous Wasserstein discrepancy (HWD)}$, and we show that it preserves several interesting properties including rotation-invariance.
no code implementations • 2 Oct 2020 • Marwa Kechaou, Romain Hérault, Mokhtar Z. Alaya, Gilles Gasso
We present a 2-step optimal transport approach that performs a mapping from a source distribution to a target distribution.
1 code implementation • 15 Jun 2020 • Alain Rakotomamonjy, Rémi Flamary, Gilles Gasso, Mokhtar Z. Alaya, Maxime Berar, Nicolas Courty
We address the problem of unsupervised domain adaptation under the setting of generalized target shift (joint class-conditional and label shifts).
3 code implementations • 19 Feb 2020 • Laetitia Chapel, Mokhtar Z. Alaya, Gilles Gasso
In this paper, we address the partial Wasserstein and Gromov-Wasserstein problems and propose exact algorithms to solve them.
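One classical way to solve partial OT exactly is to reduce it to a balanced problem by adding a dummy point on each side that absorbs the untransported mass, then solving the extended problem as a linear program. The sketch below illustrates that dummy-point reduction with `scipy.optimize.linprog`; it is our generic illustration, not the paper's algorithm, and all names are ours.

```python
import numpy as np
from scipy.optimize import linprog

def partial_wasserstein(a, b, M, m):
    """Exact partial OT transporting total mass m <= min(a.sum(), b.sum()):
    add a dummy source and a dummy target with zero cost (and a large cost
    on the dummy-dummy cell so no mass is parked there), then solve the
    balanced problem as an LP."""
    n, p = M.shape
    big = M.max() + 1.0                      # any value above the largest cost works
    Me = np.zeros((n + 1, p + 1))
    Me[:n, :p] = M
    Me[n, p] = big
    ae = np.append(a, b.sum() - m)           # dummy source absorbs target slack
    be = np.append(b, a.sum() - m)           # dummy target absorbs source slack
    # balanced OT as an LP: min <T, Me> s.t. row sums = ae, column sums = be
    N = (n + 1) * (p + 1)
    A_eq = np.zeros((n + p + 2, N))
    for i in range(n + 1):
        A_eq[i, i * (p + 1):(i + 1) * (p + 1)] = 1.0
    for j in range(p + 1):
        A_eq[n + 1 + j, j::(p + 1)] = 1.0
    res = linprog(Me.ravel(), A_eq=A_eq, b_eq=np.append(ae, be),
                  bounds=(0, None), method="highs")
    T = res.x.reshape(n + 1, p + 1)[:n, :p]  # drop the dummy row and column
    return (T * M).sum(), T
```

The returned plan `T` moves exactly mass `m` between the real points; everything routed through the dummy row or column is mass the partial problem is allowed to leave behind.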
no code implementations • 19 Feb 2020 • Mokhtar Z. Alaya, Maxime Bérar, Gilles Gasso, Alain Rakotomamonjy
Unlike the Gromov-Wasserstein (GW) distance, which compares pairwise distances of elements from each distribution, we consider a method that embeds the metric measure spaces in a common Euclidean space and computes an optimal transport (OT) problem on the embedded distributions.
1 code implementation • NeurIPS 2019 • Mokhtar Z. Alaya, Maxime Bérar, Gilles Gasso, Alain Rakotomamonjy
We introduce in this paper a novel strategy for efficiently approximating the Sinkhorn distance between two discrete measures.
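For context, the baseline being approximated here is the plain Sinkhorn scheme: alternate diagonal scalings of a Gibbs kernel until the transport plan matches both marginals. The sketch below shows that vanilla iteration, not the paper's acceleration strategy; the function name and defaults are ours.

```python
import numpy as np

def sinkhorn(a, b, M, reg=0.05, n_iter=500):
    """Vanilla Sinkhorn iterations for entropy-regularized OT between
    discrete measures a and b with cost matrix M (this is the standard
    scheme that faster approximations build on)."""
    K = np.exp(-M / reg)                 # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)                # scale columns to match b
        u = a / (K @ v)                  # scale rows to match a
    T = u[:, None] * K * v[None, :]      # regularized transport plan
    return (T * M).sum(), T
```

Each iteration costs one matrix-vector product per marginal, which is precisely why reducing the effective size of `K` (the kind of idea pursued in this line of work) pays off.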
1 code implementation • 25 Jul 2018 • Simon Bussy, Mokhtar Z. Alaya, Anne-Sophie Jannot, Agathe Guilloux
We introduce the binacox, a prognostic method for detecting multiple cut-points per feature in a multivariate setting where a large number of continuous features are available.
1 code implementation • 24 Jul 2018 • Mokhtar Z. Alaya, Olga Klopp
Matrix completion usually considers a single matrix, which can be, for example, a rating matrix in a recommender system.
no code implementations • 24 Mar 2017 • Mokhtar Z. Alaya, Simon Bussy, Stéphane Gaïffas, Agathe Guilloux
In each group of binary features coming from the one-hot encoding of a single raw continuous feature, this penalization uses total-variation regularization together with an extra linear constraint.
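The binarization step this penalization operates on can be illustrated directly: cut each raw continuous feature at its quantiles and encode which bin each observation falls into as a one-hot block; the total-variation penalty then sums the absolute differences between successive weights inside that block. A minimal sketch with hypothetical names:

```python
import numpy as np

def one_hot_binarize(x, n_bins=5):
    """One-hot encoding of a single raw continuous feature: cut x at its
    quantiles and emit a binary indicator per bin (rows sum to one)."""
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    idx = np.searchsorted(edges, x, side="right")
    B = np.zeros((len(x), n_bins))
    B[np.arange(len(x)), idx] = 1.0
    return B

def tv_penalty(w):
    """Total-variation regularizer on the weights of one binarized block:
    sum of absolute successive differences, which encourages consecutive
    bins to share the same weight (i.e., to merge)."""
    return np.abs(np.diff(w)).sum()
```

Because adjacent bins with equal weights incur no penalty, the regularizer effectively selects a small number of cut-points per raw feature, which is the behavior the abstract describes.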