1 code implementation • ICML 2020 • Sofien Dhouib, Ievgen Redko, Tanguy Kerdoncuff, Rémi Emonet, Marc Sebban
The optimal transport (OT) problem and its associated Wasserstein distance have recently become a topic of great interest in the machine learning community.
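As a small illustration of the Wasserstein distance mentioned above (a hedged sketch, not the general OT setting the paper studies): for two 1-D empirical distributions with the same number of equally weighted samples, the p-Wasserstein distance has a closed form obtained by sorting the samples. The function name `wasserstein_1d` is my own.

```python
import numpy as np

def wasserstein_1d(u, v, p=1):
    # For equally weighted 1-D samples of the same size, the optimal
    # transport plan simply matches sorted samples in order.
    u, v = np.sort(u), np.sort(v)
    return float(np.mean(np.abs(u - v) ** p) ** (1.0 / p))

a = np.array([0.0, 1.0, 2.0])
b = np.array([1.0, 2.0, 3.0])
print(wasserstein_1d(a, b))  # 1.0: every unit of mass moves one unit to the right
```

The general OT problem over arbitrary distributions requires solving a linear program; this closed-form special case is only meant to convey the intuition of "mass moving cost".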
1 code implementation • 29 Apr 2021 • Sibylle Marcotte, Amélie Barbe, Rémi Gribonval, Titouan Vayer, Marc Sebban, Pierre Borgnat, Paulo Gonçalves
Diffusing a graph signal at multiple scales requires computing the action of the exponential of several multiples of the Laplacian matrix.
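The computation described above can be sketched directly (a minimal example using a dense matrix exponential via `scipy.linalg.expm`; the paper is concerned with approximating this action efficiently, which this naive version does not do). The toy graph and scale values are my own choices.

```python
import numpy as np
from scipy.linalg import expm

# Toy path graph on 4 nodes: combinatorial Laplacian L = D - A
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

x = np.array([1.0, 0.0, 0.0, 0.0])  # impulse signal on node 0

# Diffuse at several scales: y_tau = exp(-tau * L) x
for tau in [0.1, 1.0, 10.0]:
    y = expm(-tau * L) @ x
    print(tau, np.round(y, 3))
```

Because the rows and columns of `L` sum to zero, the total signal mass is conserved at every scale, and as `tau` grows the signal spreads toward the uniform vector.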
no code implementations • 24 Apr 2020 • Ievgen Redko, Emilie Morvant, Amaury Habrard, Marc Sebban, Younès Bennani
Despite the large number of different transfer learning scenarios, the main objective of this survey is to provide an overview of the state-of-the-art theoretical results in a specific, and arguably the most popular, sub-field of transfer learning called domain adaptation.
no code implementations • 4 Sep 2019 • Léo Gautheron, Emilie Morvant, Amaury Habrard, Marc Sebban
A key element of any machine learning algorithm is the use of a function that measures the dis/similarity between data points.
no code implementations • 2 Sep 2019 • Rémi Viola, Rémi Emonet, Amaury Habrard, Guillaume Metzler, Sébastien Riou, Marc Sebban
In this paper, we address the challenging problem of learning from imbalanced data using a Nearest-Neighbor (NN) algorithm.
no code implementations • 14 Jun 2019 • Léo Gautheron, Pascal Germain, Amaury Habrard, Emilie Morvant, Marc Sebban, Valentina Zantedeschi
Unlike state-of-the-art Multiple Kernel Learning techniques that make use of a pre-computed dictionary of kernel functions to select from, at each iteration we fit a kernel by approximating it as a weighted sum of Random Fourier Features (RFF) and by optimizing their barycenter.
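The Random Fourier Features (RFF) machinery the method above builds on can be sketched as follows (a hedged illustration of the standard RFF approximation of an RBF kernel, not the paper's greedy barycenter optimization; the variable names and parameter values are my own).

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = 0.5  # RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)

# Sample D random Fourier features: omega ~ N(0, 2*gamma*I), b ~ U[0, 2*pi]
d, D = 3, 5000
omega = rng.normal(0.0, np.sqrt(2.0 * gamma), size=(D, d))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

def phi(x):
    # Explicit feature map such that phi(x) @ phi(y) approximates k(x, y)
    return np.sqrt(2.0 / D) * np.cos(omega @ x + b)

x, y = rng.normal(size=d), rng.normal(size=d)
exact = np.exp(-gamma * np.sum((x - y) ** 2))
approx = phi(x) @ phi(y)
print(exact, approx)  # the two values should be close for large D
```

The approximation error shrinks as O(1/sqrt(D)); the appeal of RFF is that kernel methods can then run on the explicit finite-dimensional features.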
2 code implementations • Pattern Recognition Letters 2019 • Tien-Nam Le, Amaury Habrard, Marc Sebban
In unsupervised domain adaptation (DA), one aims at learning, from labeled source data and fully unlabeled target examples, a model with a low error on the target domain.
no code implementations • 1 Mar 2017 • Valentina Zantedeschi, Rémi Emonet, Marc Sebban
For their ability to capture non-linearities in the data and to scale to large training sets, local Support Vector Machines (SVMs) have received special attention during the past decade.
no code implementations • NeurIPS 2016 • Valentina Zantedeschi, Rémi Emonet, Marc Sebban
During the past few years, the machine learning community has devoted considerable attention to developing new methods for learning from weakly labeled data.
no code implementations • 15 Oct 2016 • Maria-Irina Nicolae, Éric Gaussier, Amaury Habrard, Marc Sebban
In this paper, we propose a novel method for learning similarities based on DTW, in order to improve time series classification.
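For readers unfamiliar with the DTW baseline the paper improves upon, here is a minimal sketch of classic Dynamic Time Warping (vanilla DTW, not the learned similarity proposed above; the example series are my own).

```python
import numpy as np

def dtw(a, b):
    """Classic dynamic-programming DTW distance between two 1-D series."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # step in a only
                                 cost[i, j - 1],      # step in b only
                                 cost[i - 1, j - 1])  # step in both
    return cost[n, m]

# A time-shifted copy of a series stays close under DTW,
# while the pointwise (Euclidean-style) distance is large.
a = [0.0, 1.0, 2.0, 1.0, 0.0]
b = [0.0, 0.0, 1.0, 2.0, 1.0]
print(dtw(a, b))
```

The tolerance to temporal shifts illustrated here is exactly why DTW-based similarities are attractive for time series classification.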
no code implementations • 14 Oct 2016 • Ievgen Redko, Amaury Habrard, Marc Sebban
Domain adaptation (DA) is an important and emerging field of machine learning that tackles the problem occurring when the distributions of training (source domain) and test (target domain) data are related but different.
no code implementations • CVPR 2016 • Valentina Zantedeschi, Remi Emonet, Marc Sebban
Over the past ten years, metric learning has improved the numerous machine learning approaches that manipulate distances or similarities.
no code implementations • 4 Apr 2016 • Valentina Zantedeschi, Rémi Emonet, Marc Sebban
Many theoretical results in the machine learning domain stand only for functions that are Lipschitz continuous.
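To make the Lipschitz condition above concrete: a function f is L-Lipschitz if |f(x) - f(x')| <= L |x - x'| for all pairs. A simple empirical lower bound on L can be computed from pairwise slopes over a sample grid (an illustrative sketch; the helper `empirical_lipschitz` and the test functions are my own).

```python
import numpy as np

def empirical_lipschitz(f, xs):
    """Lower bound on the Lipschitz constant of f from pairwise slopes."""
    xs = np.asarray(xs, dtype=float)
    ys = f(xs)
    # max |f(x) - f(x')| / |x - x'| over all sampled pairs
    num = np.abs(ys[:, None] - ys[None, :])
    den = np.abs(xs[:, None] - xs[None, :])
    mask = den > 0
    return float((num[mask] / den[mask]).max())

xs = np.linspace(-3.0, 3.0, 601)
print(empirical_lipschitz(np.sin, xs))     # close to 1, since |cos| <= 1
print(empirical_lipschitz(np.square, xs))  # close to 6 on [-3, 3]
```

The second call shows why x^2 is not globally Lipschitz: the bound grows with the width of the interval, whereas for sin it stays below 1 everywhere.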
no code implementations • CVPR 2015 • Rahaf Aljundi, Remi Emonet, Damien Muselet, Marc Sebban
Domain adaptation (DA) has achieved considerable success in computer vision in recent years, dealing with situations where the learning process has to transfer knowledge from a source to a target domain.
no code implementations • 19 Dec 2014 • Maria-Irina Nicolae, Marc Sebban, Amaury Habrard, Éric Gaussier, Massih-Reza Amini
The notion of metric plays a key role in machine learning problems such as classification, clustering or ranking.
no code implementations • 18 Sep 2014 • Basura Fernando, Amaury Habrard, Marc Sebban, Tinne Tuytelaars
We present two approaches to determine the only hyper-parameter of our method, namely the size of the subspaces.
no code implementations • 28 Jun 2013 • Aurélien Bellet, Amaury Habrard, Marc Sebban
The need for appropriate ways to measure the distance or similarity between data is ubiquitous in machine learning, pattern recognition and data mining, but handcrafting such good metrics for specific problems is generally difficult.
no code implementations • 27 Jun 2012 • Aurelien Bellet, Amaury Habrard, Marc Sebban
In recent years, the crucial importance of metrics in machine learning algorithms has led to an increasing interest for optimizing distance and similarity functions.