Search Results for author: Alain Rakotomamonjy

Found 40 papers, 16 papers with code

Adversarial Sample Detection Through Neural Network Transport Dynamics

no code implementations • 7 Jun 2023 • Skander Karkar, Patrick Gallinari, Alain Rakotomamonjy

We propose a detector of adversarial samples that is based on the view of neural networks as discrete dynamic systems.
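The dynamical-systems view behind this detector treats a residual network as a discrete trajectory, where each block applies an Euler-like step x_{t+1} = x_t + f_t(x_t). A minimal numpy sketch of that viewpoint (toy dimensions and random maps for illustration, not the paper's actual detector):

```python
import numpy as np

rng = np.random.default_rng(0)

def residual_block(x, W):
    # One residual step: x_{t+1} = x_t + f(x_t), with f a small nonlinear map.
    return x + 0.1 * np.tanh(W @ x)

# A stack of residual blocks acts like a forward-Euler discretization of an
# ODE dx/dt = f(x, t): the hidden state traces a discrete trajectory.
x = rng.standard_normal(4)
trajectory = [x]
for _ in range(10):
    W = rng.standard_normal((4, 4)) / np.sqrt(4)
    x = residual_block(x, W)
    trajectory.append(x)

# The per-step displacement ||x_{t+1} - x_t|| is one quantity a
# transport-dynamics view can monitor along the trajectory.
displacements = [np.linalg.norm(b - a) for a, b in zip(trajectory, trajectory[1:])]
```

Statistics of such trajectories (e.g. how far and how regularly inputs move through the blocks) are the kind of signal a detector in this spirit can threshold.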

Unifying GANs and Score-Based Diffusion as Generative Particle Models

1 code implementation • 25 May 2023 • Jean-Yves Franceschi, Mike Gartrell, Ludovic Dos Santos, Thibaut Issenhuth, Emmanuel de Bézenac, Mickaël Chen, Alain Rakotomamonjy

In this paper, we challenge this interpretation and propose a novel framework that unifies particle and adversarial generative models by framing generator training as a generalization of particle models.

Sliced-Wasserstein on Symmetric Positive Definite Matrices for M/EEG Signals

2 code implementations • 10 Mar 2023 • Clément Bonet, Benoît Malézieux, Alain Rakotomamonjy, Lucas Drumetz, Thomas Moreau, Matthieu Kowalski, Nicolas Courty

When dealing with electro or magnetoencephalography records, many supervised prediction tasks are solved by working with covariance matrices to summarize the signals.

Domain Adaptation • EEG • +3
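Summarizing a multichannel M/EEG epoch by its channel covariance matrix (a symmetric positive definite matrix when the recording is long enough) can be sketched as follows; the dimensions and data here are toy stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy M/EEG-like data: n_epochs recordings, each n_channels x n_times.
n_epochs, n_channels, n_times = 5, 8, 200
epochs = rng.standard_normal((n_epochs, n_channels, n_times))

def epoch_covariance(x):
    # Channel-by-channel sample covariance of one epoch; SPD whenever
    # n_times exceeds n_channels (otherwise it needs regularization).
    xc = x - x.mean(axis=1, keepdims=True)
    return xc @ xc.T / (x.shape[1] - 1)

covs = np.stack([epoch_covariance(e) for e in epochs])
```

Supervised prediction then operates on these SPD summaries rather than on the raw signals, which is what makes geometry-aware tools such as sliced-Wasserstein distances on SPD matrices relevant.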

Personalised Federated Learning On Heterogeneous Feature Spaces

no code implementations • 26 Jan 2023 • Alain Rakotomamonjy, Maxime Vono, Hamlet Jesse Medina Ruiz, Liva Ralaivola

Most personalised federated learning (FL) approaches assume that the raw data of all clients are defined in a common subspace, i.e., all clients store their data according to the same schema.

Federated Learning

Continuous PDE Dynamics Forecasting with Implicit Neural Representations

1 code implementation • 29 Sep 2022 • Yuan Yin, Matthieu Kirchmeyer, Jean-Yves Franceschi, Alain Rakotomamonjy, Patrick Gallinari

Effective data-driven PDE forecasting methods often rely on fixed spatial and/or temporal discretizations.

Shedding a PAC-Bayesian Light on Adaptive Sliced-Wasserstein Distances

1 code implementation • 7 Jun 2022 • Ruben Ohana, Kimia Nadjahi, Alain Rakotomamonjy, Liva Ralaivola

The Sliced-Wasserstein distance (SW) is a computationally efficient and theoretically grounded alternative to the Wasserstein distance.

Generalization Bounds
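The sliced-Wasserstein distance averages 1D Wasserstein distances over random projection directions, which is what makes it cheap: in one dimension, optimal transport between equal-size samples reduces to sorting. A minimal Monte Carlo sketch (assuming equal-size samples and the 2-Wasserstein cost):

```python
import numpy as np

def sliced_wasserstein(X, Y, n_proj=200, seed=0):
    """Monte Carlo estimate of the sliced 2-Wasserstein distance between
    two equal-size samples X, Y of shape (n, d)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Random directions on the unit sphere.
    theta = rng.standard_normal((n_proj, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project both samples onto each direction.
    pX, pY = X @ theta.T, Y @ theta.T          # shape (n, n_proj)
    # In 1D, optimal transport between equal-size samples matches sorted
    # points, so W2^2 is a mean of squared sorted differences.
    pX.sort(axis=0)
    pY.sort(axis=0)
    return np.sqrt(np.mean((pX - pY) ** 2))

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 3))
Y = rng.standard_normal((500, 3)) + 2.0        # translated copy
sw_same = sliced_wasserstein(X, X)             # should be zero
sw_shift = sliced_wasserstein(X, Y)            # grows with the shift
```

The "adaptive" variants studied in this paper reweight or optimize over the projection directions rather than sampling them uniformly, which this uniform sketch does not attempt.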

Generalizing to New Physical Systems via Context-Informed Dynamics Model

1 code implementation • 1 Feb 2022 • Matthieu Kirchmeyer, Yuan Yin, Jérémie Donà, Nicolas Baskiotis, Alain Rakotomamonjy, Patrick Gallinari

Data-driven approaches to modeling physical systems fail to generalize to unseen systems that share the same general dynamics with the learning domain, but correspond to different physical contexts.

Mapping conditional distributions for domain adaptation under generalized target shift

1 code implementation • ICLR 2022 • Matthieu Kirchmeyer, Alain Rakotomamonjy, Emmanuel de Bezenac, Patrick Gallinari

We consider the problem of unsupervised domain adaptation (UDA) between a source and a target domain under conditional and label shift, a.k.a. Generalized Target Shift (GeTarS).

Unsupervised Domain Adaptation

Statistical and Topological Properties of Gaussian Smoothed Sliced Probability Divergences

no code implementations • 20 Oct 2021 • Alain Rakotomamonjy, Mokhtar Z. Alaya, Maxime Berar, Gilles Gasso

In this paper, we analyze the theoretical properties of this distance as well as those of generalized versions denoted as Gaussian smoothed sliced divergences.

Domain Adaptation • Privacy Preserving

Unsupervised domain adaptation with non-stochastic missing data

1 code implementation • 16 Sep 2021 • Matthieu Kirchmeyer, Patrick Gallinari, Alain Rakotomamonjy, Amin Mantrach

Moreover, we compare the target error of our Adaptation-imputation framework and the "ideal" target error of a UDA classifier without missing target components.

Classification • Imputation • +1

Differentially Private Sliced Wasserstein Distance

1 code implementation • 5 Jul 2021 • Alain Rakotomamonjy, Liva Ralaivola

Developing machine learning methods that are privacy preserving is today a central topic of research, with huge practical impacts.

Domain Adaptation • Privacy Preserving

Photonic Differential Privacy with Direct Feedback Alignment

no code implementations • NeurIPS 2021 • Ruben Ohana, Hamlet J. Medina Ruiz, Julien Launay, Alessandro Cappelli, Iacopo Poli, Liva Ralaivola, Alain Rakotomamonjy

Optical Processing Units (OPUs) -- low-power photonic chips dedicated to large scale random projections -- have been used in previous work to train deep neural networks using Direct Feedback Alignment (DFA), an effective alternative to backpropagation.
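Direct Feedback Alignment replaces backpropagation's transposed-weight error path with a fixed random feedback matrix that projects the output error straight to each hidden layer. A toy single-hidden-layer sketch of the generic DFA update (MSE loss and tanh units are my assumptions for illustration; the paper's photonic and privacy aspects are not modeled here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task.
X = rng.standard_normal((256, 10))
w_true = rng.standard_normal(10)
y = (X @ w_true)[:, None]

# One hidden layer; B is DFA's fixed random feedback matrix.
W1 = rng.standard_normal((10, 32)) * 0.1
W2 = rng.standard_normal((32, 1)) * 0.1
B = rng.standard_normal((1, 32))          # fixed, never trained

def mse():
    return float(np.mean((np.tanh(X @ W1) @ W2 - y) ** 2))

init_mse = mse()
lr = 0.01
for _ in range(200):
    h = np.tanh(X @ W1)                   # forward pass
    out = h @ W2
    e = out - y                           # output error
    # DFA: the hidden error signal is the B-projected output error,
    # not the W2^T-backpropagated one.
    dh = (e @ B) * (1 - h ** 2)
    W2 -= lr * h.T @ e / len(X)
    W1 -= lr * X.T @ dh / len(X)

final_mse = mse()
```

Because the backward path is a random projection rather than the network's own weights, DFA is a natural fit for analog random-projection hardware such as OPUs.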

Heterogeneous Wasserstein Discrepancy for Incomparable Distributions

no code implementations • 4 Jun 2021 • Mokhtar Z. Alaya, Gilles Gasso, Maxime Berar, Alain Rakotomamonjy

We provide a theoretical analysis of this new divergence, called the heterogeneous Wasserstein discrepancy (HWD), and we show that it preserves several interesting properties, including rotation invariance.

Wasserstein Learning of Determinantal Point Processes

no code implementations • NeurIPS Workshop LMCA 2020 • Lucas Anquetil, Mike Gartrell, Alain Rakotomamonjy, Ugo Tanielian, Clément Calauzènes

Through an evaluation on a real-world dataset, we show that our Wasserstein learning approach provides significantly improved predictive performance on a generative task compared to DPPs trained using MLE.

Point Processes

Partial Trace Regression and Low-Rank Kraus Decomposition

1 code implementation • ICML 2020 • Hachem Kadri, Stéphane Ayache, Riikka Huusari, Alain Rakotomamonjy, Liva Ralaivola

The trace regression model, a direct extension of the well-studied linear regression model, allows one to map matrices to real-valued outputs.

Matrix Completion • regression
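In the trace regression model, a matrix input X is mapped to a scalar y = tr(WᵀX) + noise, which is linear in the entries of X. A minimal sketch recovering W by flattening to ordinary least squares (toy sizes; the paper's partial-trace and Kraus-decomposition structure is not modeled):

```python
import numpy as np

rng = np.random.default_rng(0)

p, q, n = 4, 5, 300
W_true = rng.standard_normal((p, q))

# Trace regression: y_i = tr(W^T X_i) + noise, a linear model on
# matrix-valued inputs.
Xs = rng.standard_normal((n, p, q))
y = np.einsum('ij,nij->n', W_true, Xs) + 0.01 * rng.standard_normal(n)

# vec() turns it into ordinary least squares on flattened matrices.
A = Xs.reshape(n, p * q)
w_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
W_hat = w_hat.reshape(p, q)

rel_err = np.linalg.norm(W_hat - W_true) / np.linalg.norm(W_true)
```

Structured variants (low rank, positive semidefinite maps) constrain W rather than solving this unconstrained least-squares problem.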

Provably Convergent Working Set Algorithm for Non-Convex Regularized Regression

no code implementations • 24 Jun 2020 • Alain Rakotomamonjy, Rémi Flamary, Gilles Gasso, Joseph Salmon

Owing to their statistical properties, non-convex sparse regularizers have attracted much interest for estimating a sparse linear model from high dimensional data.


Multi-source Domain Adaptation via Weighted Joint Distributions Optimal Transport

1 code implementation • 23 Jun 2020 • Rosanna Turrisi, Rémi Flamary, Alain Rakotomamonjy, Massimiliano Pontil

The problem of domain adaptation on an unlabeled target dataset using knowledge from multiple labelled source datasets is becoming increasingly important.

Domain Adaptation

Optimal Transport for Conditional Domain Matching and Label Shift

1 code implementation • 15 Jun 2020 • Alain Rakotomamonjy, Rémi Flamary, Gilles Gasso, Mokhtar Z. Alaya, Maxime Berar, Nicolas Courty

We address the problem of unsupervised domain adaptation under the setting of generalized target shift (joint class-conditional and label shifts).

Unsupervised Domain Adaptation

Theoretical Guarantees for Bridging Metric Measure Embedding and Optimal Transport

no code implementations • 19 Feb 2020 • Mokhtar Z. Alaya, Maxime Bérar, Gilles Gasso, Alain Rakotomamonjy

Unlike Gromov-Wasserstein (GW) distance which compares pairwise distances of elements from each distribution, we consider a method allowing to embed the metric measure spaces in a common Euclidean space and compute an optimal transport (OT) on the embedded distributions.

Unsupervised domain adaptation with imputation

no code implementations • 25 Sep 2019 • Matthieu Kirchmeyer, Patrick Gallinari, Alain Rakotomamonjy, Amin Mantrach

Motivated by practical applications, we consider unsupervised domain adaptation for classification problems, in the presence of missing data in the target domain.

Classification • Imputation • +1

Screening Rules for Lasso with Non-Convex Sparse Regularizers

no code implementations • 16 Feb 2019 • Alain Rakotomamonjy, Gilles Gasso, Joseph Salmon

Leveraging the convexity of the Lasso problem, screening rules accelerate solvers by discarding irrelevant variables during the optimization process.
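Screening rules use convex optimality conditions to certify that some coefficients are zero before or during optimization. A minimal sketch of the classical static SAFE test for the Lasso, as I recall it (the formula and the toy setup are my assumptions; the paper's contribution is extending screening to non-convex regularizers, which this does not cover):

```python
import numpy as np

rng = np.random.default_rng(0)

n, d = 100, 50
X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=0)             # unit-norm columns
beta = np.zeros(d)
beta[:3] = [2.0, -1.5, 1.0]                # a few truly active features
y = X @ beta + 0.01 * rng.standard_normal(n)

corr = np.abs(X.T @ y)
lam_max = corr.max()                       # above this, the Lasso solution is 0
lam = 0.9 * lam_max

# SAFE test (El Ghaoui et al. style): feature j can be discarded when
# |x_j^T y| < lam - ||x_j|| * ||y|| * (lam_max - lam) / lam_max,
# since its coefficient is then provably zero at the optimum.
threshold = lam - np.linalg.norm(y) * (lam_max - lam) / lam_max
kept = corr >= threshold
```

The solver then runs only on the `kept` columns; dynamic rules re-apply such tests as the iterate improves, shrinking the working set further.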

Distance Measure Machines

no code implementations • 1 Mar 2018 • Alain Rakotomamonjy, Abraham Traoré, Maxime Berar, Rémi Flamary, Nicolas Courty

This paper presents a distance-based discriminative framework for learning with probability distributions.

Concave losses for robust dictionary learning

no code implementations • 2 Nov 2017 • Rafael Will M de Araujo, Roberto Hirata, Alain Rakotomamonjy

Traditional dictionary learning methods are based on quadratic convex loss function and thus are sensitive to outliers.

Dictionary Learning

Joint Distribution Optimal Transportation for Domain Adaptation

2 code implementations • NeurIPS 2017 • Nicolas Courty, Rémi Flamary, Amaury Habrard, Alain Rakotomamonjy

This paper deals with the unsupervised domain adaptation problem, where one wants to estimate a prediction function $f$ in a given target domain without any labeled sample by exploiting the knowledge available from a source domain where labels are known.

Unsupervised Domain Adaptation

Wasserstein Discriminant Analysis

1 code implementation • 29 Aug 2016 • Rémi Flamary, Marco Cuturi, Nicolas Courty, Alain Rakotomamonjy

Wasserstein Discriminant Analysis (WDA) is a new supervised method that can improve classification of high-dimensional data by computing a suitable linear map onto a lower dimensional subspace.

Importance sampling strategy for non-convex randomized block-coordinate descent

no code implementations • 23 Jun 2016 • Rémi Flamary, Alain Rakotomamonjy, Gilles Gasso

As the number of samples and the dimensionality of optimization problems related to statistics and machine learning explode, block coordinate descent algorithms have gained popularity since they reduce the original problem to several smaller ones.

Operator-valued Kernels for Learning from Functional Response Data

no code implementations • 28 Oct 2015 • Hachem Kadri, Emmanuel Duflos, Philippe Preux, Stéphane Canu, Alain Rakotomamonjy, Julien Audiffren

In this paper we consider the problems of supervised classification and regression in the case where attributes and labels are functions: a data is represented by a set of functions, and the label is also a function.

Audio Signal Processing • General Classification

Generalized conditional gradient: analysis of convergence and applications

no code implementations • 22 Oct 2015 • Alain Rakotomamonjy, Rémi Flamary, Nicolas Courty

The objective of this technical report is to provide additional results on the generalized conditional gradient methods introduced by Bredies et al. [BLM05].

Histogram of gradients of Time-Frequency Representations for Audio scene detection

no code implementations • 20 Aug 2015 • Alain Rakotomamonjy, Gilles Gasso

This paper addresses the problem of audio scenes classification and contributes to the state of the art by proposing a novel feature.

General Classification • Scene Classification
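A histogram-of-gradients feature on a time-frequency representation first turns the audio into a spectrogram image, then summarizes the orientations of its local gradients. A minimal numpy sketch of that pipeline (window, hop, and bin counts are illustrative choices, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy audio: a slowly sweeping tone plus noise, 1 s at 16 kHz.
t = np.arange(16000) / 16000.0
x = np.sin(2 * np.pi * (200 + 400 * t) * t) + 0.1 * rng.standard_normal(t.size)

# Log-magnitude spectrogram from framed FFTs (window 256, hop 128).
win, hop = 256, 128
frames = np.stack([x[i:i + win] * np.hanning(win)
                   for i in range(0, x.size - win, hop)])
S = np.log1p(np.abs(np.fft.rfft(frames, axis=1)))    # time x frequency

# Gradients of the time-frequency image, then a magnitude-weighted
# histogram of their orientations (the HOG-like descriptor).
gt, gf = np.gradient(S)
angles = np.arctan2(gf, gt).ravel()
mags = np.hypot(gf, gt).ravel()
hist, _ = np.histogram(angles, bins=8, range=(-np.pi, np.pi), weights=mags)
feature = hist / hist.sum()               # normalized orientation histogram
```

A standard classifier on top of such fixed-length descriptors is then enough to compare audio scenes.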

DC Proximal Newton for Non-Convex Optimization Problems

no code implementations • 2 Jul 2015 • Alain Rakotomamonjy, Remi Flamary, Gilles Gasso

We introduce a novel algorithm for solving learning problems where both the loss function and the regularizer are non-convex but belong to the class of difference of convex (DC) functions.

Transductive Learning

Optimal Transport for Domain Adaptation

no code implementations • 2 Jul 2015 • Nicolas Courty, Rémi Flamary, Devis Tuia, Alain Rakotomamonjy

Domain adaptation from one data space (or domain) to another is one of the most challenging tasks of modern data analytics.

Domain Adaptation

Mixed-norm Regularization for Brain Decoding

no code implementations • 14 Mar 2014 • Rémi Flamary, Nisrine Jrad, Ronald Phlypo, Marco Congedo, Alain Rakotomamonjy

This framework is extended to the multi-task learning situation where several similar classification tasks related to different subjects are learned simultaneously.

Brain Decoding • General Classification • +1

Multiple Operator-valued Kernel Learning

no code implementations • NeurIPS 2012 • Hachem Kadri, Alain Rakotomamonjy, Philippe Preux, Francis R. Bach

We study this problem in the case of kernel ridge regression for functional responses with an lr-norm constraint on the combination coefficients.


Decoding finger movements from ECoG signals using switching linear models

no code implementations • Front. Neurosci., Sec. Neuroprosthetics 2012 • Rémi Flamary, Alain Rakotomamonjy

As a witness of the BCI community's increasing interest in such problems, the fourth BCI Competition provides a dataset whose aim is to predict individual finger movements from ECoG signals.

Brain Decoding

Support Vector Machines with a Reject Option

no code implementations • NeurIPS 2008 • Yves Grandvalet, Alain Rakotomamonjy, Joseph Keshet, Stéphane Canu

We consider the problem of binary classification where the classifier may abstain instead of classifying each observation.

Binary Classification • General Classification
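The generic idea of a reject option is to abstain when the classifier's score is too close to the decision boundary. A minimal numpy sketch thresholding a linear score (a plain least-squares scorer stands in for the SVM here; the paper itself derives a dedicated double-hinge loss, which this does not reproduce):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two overlapping Gaussian classes in 2D.
n = 200
X = np.vstack([rng.normal(-1, 1, (n, 2)), rng.normal(+1, 1, (n, 2))])
y = np.hstack([-np.ones(n), np.ones(n)])

# A plain linear scorer f(x) = w . x + b (stand-in for an SVM decision
# function), fit by least squares for simplicity.
A = np.hstack([X, np.ones((2 * n, 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)
scores = A @ w

# Reject option: abstain when |f(x)| < tau, predict sign(f(x)) otherwise.
tau = 0.5
rejected = np.abs(scores) < tau
accepted = ~rejected
acc_all = np.mean(np.sign(scores) == y)
acc_kept = np.mean(np.sign(scores[accepted]) == y[accepted])
```

Abstaining on the low-margin points trades coverage for accuracy on the predictions the classifier does make; the paper's contribution is a loss that learns the classifier and the reject region jointly rather than thresholding after the fact.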
