Search Results for author: Amaury Habrard

Found 35 papers, 13 papers with code

Leveraging PAC-Bayes Theory and Gibbs Distributions for Generalization Bounds with Complexity Measures

1 code implementation • 19 Feb 2024 • Paul Viallard, Rémi Emonet, Amaury Habrard, Emilie Morvant, Valentina Zantedeschi

In statistical learning theory, a generalization bound usually involves a complexity measure imposed by the considered theoretical framework.

Generalization Bounds • Learning Theory
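
For context, a classical PAC-Bayesian bound of this flavor, where the Kullback-Leibler term plays the role of the complexity measure (the paper's contribution is to allow arbitrary complexity measures via Gibbs distributions, which this sketch does not capture):

```latex
% With probability at least 1 - delta over an i.i.d. sample S of size m,
% simultaneously for all posteriors rho over the hypothesis set:
\mathbb{E}_{h \sim \rho} R_{\mathcal{D}}(h)
  \;\le\;
  \mathbb{E}_{h \sim \rho} R_{S}(h)
  + \sqrt{\frac{\mathrm{KL}(\rho \,\|\, \pi) + \ln\frac{2\sqrt{m}}{\delta}}{2m}}
```

Here $\pi$ is a data-independent prior, $R_S$ the empirical risk, and $R_{\mathcal{D}}$ the true risk.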

Towards Few-Annotation Learning for Object Detection: Are Transformer-based Models More Efficient?

1 code implementation • 30 Oct 2023 • Quentin Bouniot, Angélique Loesch, Romaric Audigier, Amaury Habrard

For specialized and dense downstream tasks such as object detection, labeling data requires expertise and can be very expensive, making few-shot and semi-supervised models much more attractive alternatives.

Object • object-detection +2

Proposal-Contrastive Pretraining for Object Detection from Fewer Data

no code implementations • 25 Oct 2023 • Quentin Bouniot, Romaric Audigier, Angélique Loesch, Amaury Habrard

However, for unsupervised pretraining, the popular contrastive learning approach requires a large batch size and, therefore, a lot of resources.

Contrastive Learning • Object +2
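
The batch-size dependence stems from objectives of the InfoNCE family, where all other examples in the batch act as negatives; a generic form (the paper's proposal-level contrastive pretraining objective differs in its details):

```latex
\mathcal{L}_i = -\log
  \frac{\exp\!\big(\mathrm{sim}(z_i, z_i^{+}) / \tau\big)}
       {\sum_{k=1}^{N} \mathbb{1}[k \neq i] \, \exp\!\big(\mathrm{sim}(z_i, z_k) / \tau\big)}
```

with $\mathrm{sim}$ a cosine similarity, $\tau$ a temperature, and $N$ the batch size: the more negatives per batch, the better the estimate, hence the resource cost mentioned above.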

A Simple Way to Learn Metrics Between Attributed Graphs

no code implementations • 26 Sep 2022 • Yacouba Kaloga, Pierre Borgnat, Amaury Habrard

Therefore, many metric learning algorithms have been developed in recent years, mainly for Euclidean data, in order to improve the performance of classification or clustering methods.

Metric Learning

Self-Bounding Majority Vote Learning Algorithms by the Direct Minimization of a Tight PAC-Bayesian C-Bound

1 code implementation • 28 Apr 2021 • Paul Viallard, Pascal Germain, Amaury Habrard, Emilie Morvant

In the PAC-Bayesian literature, the C-Bound refers to an insightful relation between the risk of a majority vote classifier (under the zero-one loss) and the first two moments of its margin (i.e., the expected margin and the voters' diversity).

Generalization Bounds
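
For reference, the C-Bound in question: writing $M_\rho(x, y)$ for the margin of the $\rho$-weighted majority vote $B_\rho$ on example $(x, y)$, and assuming $\mathbb{E}[M_\rho] > 0$,

```latex
R_{\mathcal{D}}(B_\rho)
  \;\le\;
  1 - \frac{\big(\mathbb{E}_{(x,y) \sim \mathcal{D}}\, M_\rho(x, y)\big)^{2}}
           {\mathbb{E}_{(x,y) \sim \mathcal{D}}\, M_\rho(x, y)^{2}}
```

so the bound tightens as the expected margin (first moment) grows and the second moment, which reflects the voters' correlation, shrinks.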

A PAC-Bayes Analysis of Adversarial Robustness

1 code implementation • NeurIPS 2021 • Paul Viallard, Guillaume Vidot, Amaury Habrard, Emilie Morvant

We propose the first general PAC-Bayesian generalization bounds for adversarial robustness, which estimate, at test time, how invariant a model will be to imperceptible perturbations of the input.

Adversarial Robustness • Generalization Bounds +1

A General Framework for the Practical Disintegration of PAC-Bayesian Bounds

1 code implementation • 17 Feb 2021 • Paul Viallard, Pascal Germain, Amaury Habrard, Emilie Morvant

PAC-Bayesian bounds are known to be tight and informative when studying the generalization ability of randomized classifiers.

Generalization Bounds

Improving Few-Shot Learning through Multi-task Representation Learning Theory

1 code implementation • 5 Oct 2020 • Quentin Bouniot, Ievgen Redko, Romaric Audigier, Angélique Loesch, Amaury Habrard

In this paper, we consider the framework of multi-task representation (MTR) learning where the goal is to use source tasks to learn a representation that reduces the sample complexity of solving a target task.

Continual Learning • Few-Shot Learning +2

Putting Theory to Work: From Learning Bounds to Meta-Learning Algorithms

no code implementations • 28 Sep 2020 • Quentin Bouniot, Ievgen Redko, Romaric Audigier, Angélique Loesch, Amaury Habrard

To the best of our knowledge, this is the first contribution that puts the most recent learning bounds of meta-learning theory into practice for the popular task of few-shot classification.

Few-Shot Learning • Learning Theory

A survey on domain adaptation theory: learning bounds and theoretical guarantees

no code implementations • 24 Apr 2020 • Ievgen Redko, Emilie Morvant, Amaury Habrard, Marc Sebban, Younès Bennani

Despite the large number of different transfer learning scenarios, the main objective of this survey is to provide an overview of the state-of-the-art theoretical results in a specific, and arguably the most popular, sub-field of transfer learning, called domain adaptation.

BIG-bench Machine Learning • Domain Adaptation +1

Metric Learning from Imbalanced Data

no code implementations • 4 Sep 2019 • Léo Gautheron, Emilie Morvant, Amaury Habrard, Marc Sebban

A key element of any machine learning algorithm is the use of a function that measures the dis/similarity between data points.

BIG-bench Machine Learning • Metric Learning
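
The canonical instance of such a function is the Mahalanobis distance, parameterized by a positive semi-definite matrix $M$ learned from data:

```latex
d_M(x, x') = \sqrt{(x - x')^{\top} M \, (x - x')}, \qquad M \succeq 0
```

which reduces to the Euclidean distance when $M = I$; metric learning algorithms, including the imbalance-aware one studied here, optimize $M$ under task-specific constraints.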

An Adjusted Nearest Neighbor Algorithm Maximizing the F-Measure from Imbalanced Data

no code implementations • 2 Sep 2019 • Rémi Viola, Rémi Emonet, Amaury Habrard, Guillaume Metzler, Sébastien Riou, Marc Sebban

In this paper, we address the challenging problem of learning from imbalanced data using a Nearest-Neighbor (NN) algorithm.

Fraud Detection
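
A minimal sketch of the general idea, assuming a simple scheme that rescales distances to minority-class training points by a factor `gamma` to favor recall on the rare class (the paper's actual adjustment and its F-measure-driven tuning are more involved):

```python
import numpy as np

def adjusted_nn_predict(X_train, y_train, X_test, gamma=0.5, minority_label=1):
    """1-NN rule where distances to minority-class points are shrunk by
    `gamma`, biasing predictions towards the rare class.
    Illustrative sketch only, not the paper's exact algorithm."""
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)                # Euclidean distances
        d = np.where(y_train == minority_label, gamma * d, d)  # favor minority class
        preds.append(y_train[np.argmin(d)])
    return np.array(preds)
```

Shrinking `gamma` trades precision for recall, which is the lever an F-measure-oriented method can tune.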

Learning Landmark-Based Ensembles with Random Fourier Features and Gradient Boosting

no code implementations • 14 Jun 2019 • Léo Gautheron, Pascal Germain, Amaury Habrard, Emilie Morvant, Marc Sebban, Valentina Zantedeschi

Unlike state-of-the-art Multiple Kernel Learning techniques that make use of a pre-computed dictionary of kernel functions to select from, at each iteration we fit a kernel by approximating it as a weighted sum of Random Fourier Features (RFF) and by optimizing their barycenter.
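
For intuition, Random Fourier Features give an explicit finite-dimensional map whose inner products approximate a shift-invariant kernel; a minimal sketch for the RBF kernel (the paper goes further and learns weighted combinations of such features by gradient boosting):

```python
import numpy as np

def rff_features(X, n_features=100, gamma=1.0, seed=0):
    """Random Fourier Features approximating the RBF kernel
    k(x, y) = exp(-gamma * ||x - y||^2)  (Rahimi & Recht, 2007)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # The RBF kernel's spectral distribution is N(0, 2 * gamma * I).
    omega = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ omega + b)

# phi(x) . phi(y) ~ k(x, y), so linear models on these features
# approximate kernel machines at a fraction of the cost.
```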

Deep multi-Wasserstein unsupervised domain adaptation

2 code implementations • Pattern Recognition Letters 2019 • Tien-Nam Le, Amaury Habrard, Marc Sebban

In unsupervised domain adaptation (DA), one aims at learning, from labeled source data and fully unlabeled target examples, a model with a low error on the target domain.

Generalization Bounds • Unsupervised Domain Adaptation

Near-lossless Binarization of Word Embeddings

1 code implementation • 24 Mar 2018 • Julien Tissier, Christophe Gravier, Amaury Habrard

Word embeddings are commonly used as a starting point in many NLP models to achieve state-of-the-art performance.

Binarization • Semantic Similarity +5

Dict2vec: Learning Word Embeddings using Lexical Dictionaries

1 code implementation • EMNLP 2017 • Julien Tissier, Christophe Gravier, Amaury Habrard

Learning word embeddings on large unlabeled corpus has been shown to be successful in improving many natural language tasks.

General Classification • Knowledge Graphs +8

PAC-Bayes and Domain Adaptation

no code implementations • 17 Jul 2017 • Pascal Germain, Amaury Habrard, François Laviolette, Emilie Morvant

Firstly, we propose an improvement of the approach we previously introduced in Germain et al. (2013), which relies on a novel distribution pseudodistance based on disagreement averaging, allowing us to derive a new, tighter domain adaptation bound for the target risk.

Domain Adaptation • Generalization Bounds
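
The disagreement-based pseudodistance at the heart of this bound can be written as follows (notation lightly adapted):

```latex
% Expected disagreement of two voters h, h' on a domain D:
R_{\mathcal{D}}(h, h') = \mathbb{E}_{x \sim \mathcal{D}} \, \mathbb{1}\big[h(x) \neq h'(x)\big]
% Domain disagreement between source D_S and target D_T, averaged over
% pairs of voters drawn from the posterior rho:
\mathrm{dis}_{\rho}(\mathcal{D}_S, \mathcal{D}_T)
  = \Big| \, \mathbb{E}_{(h, h') \sim \rho^2}
      \big[ R_{\mathcal{D}_T}(h, h') - R_{\mathcal{D}_S}(h, h') \big] \Big|
```

Intuitively, two domains are close for this pseudodistance when pairs of voters disagree on them to a similar extent.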

Joint Distribution Optimal Transportation for Domain Adaptation

2 code implementations • NeurIPS 2017 • Nicolas Courty, Rémi Flamary, Amaury Habrard, Alain Rakotomamonjy

This paper deals with the unsupervised domain adaptation problem, where one wants to estimate a prediction function $f$ in a given target domain without any labeled sample by exploiting the knowledge available from a source domain where labels are known.

Unsupervised Domain Adaptation
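
A sketch of the optimal-transport machinery this work builds on, using the POT library; note this is plain feature-space OT with a barycentric mapping, not the full JDOT objective, which couples the transport cost with the loss of the learned predictor $f$:

```python
import numpy as np
import ot  # POT: Python Optimal Transport (pip install pot)

def transport_source_to_target(Xs, Xt):
    """Compute an optimal coupling between source and target samples and
    map source points onto the target domain by barycentric projection.
    Sketch only: JDOT also puts f's prediction loss in the ground cost
    and alternates between solving the OT problem and updating f."""
    ns, nt = len(Xs), len(Xt)
    a = np.full(ns, 1.0 / ns)   # uniform weights on source samples
    b = np.full(nt, 1.0 / nt)   # uniform weights on target samples
    M = ot.dist(Xs, Xt)         # squared Euclidean ground cost
    G = ot.emd(a, b, M)         # exact Kantorovich coupling
    # Barycentric mapping: each source point moves to the weighted mean
    # of the target points it is coupled with.
    return (G / G.sum(axis=1, keepdims=True)) @ Xt
```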

Mapping Estimation for Discrete Optimal Transport

no code implementations • NeurIPS 2016 • Michaël Perrot, Nicolas Courty, Rémi Flamary, Amaury Habrard

Most computational approaches to Optimal Transport use the Kantorovich relaxation of the problem to learn a probabilistic coupling $\gamma$, but do not address the problem of learning the underlying transport map $T$ linked to the original Monge problem.

Domain Adaptation
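
For readers less familiar with the distinction, the two problems referenced here are:

```latex
% Monge problem: find a deterministic map T pushing mu forward onto nu
\inf_{T \,:\, T_{\#}\mu = \nu} \int c\big(x, T(x)\big) \, d\mu(x)
% Kantorovich relaxation: optimize over probabilistic couplings instead
\inf_{\gamma \in \Pi(\mu, \nu)} \int c(x, y) \, d\gamma(x, y)
```

where $\Pi(\mu, \nu)$ is the set of joint distributions with marginals $\mu$ and $\nu$; the paper learns an explicit map $T$ consistent with a coupling $\gamma$ solving the relaxed problem.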

Similarity Learning for Time Series Classification

no code implementations • 15 Oct 2016 • Maria-Irina Nicolae, Éric Gaussier, Amaury Habrard, Marc Sebban

In this paper, we propose a novel method for learning similarities based on DTW, in order to improve time series classification.

Classification • Dynamic Time Warping +4
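
As background, the DTW cost underlying those similarities is computed by dynamic programming over all monotone alignments of the two series; a minimal sketch for 1-D series:

```python
import numpy as np

def dtw(s, t):
    """Classic dynamic time warping cost between 1-D series s and t.
    Background only: the paper learns similarities on top of DTW
    alignments rather than using this raw cost directly."""
    n, m = len(s), len(t)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(s[i - 1] - t[j - 1])
            # Extend the cheapest of the three allowed alignment moves.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```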

Theoretical Analysis of Domain Adaptation with Optimal Transport

no code implementations • 14 Oct 2016 • Ievgen Redko, Amaury Habrard, Marc Sebban

Domain adaptation (DA) is an important and emerging field of machine learning that tackles the problem occurring when the distributions of training (source domain) and test (target domain) data are similar but different.

Domain Adaptation

Regressive Virtual Metric Learning

no code implementations • NeurIPS 2015 • Michaël Perrot, Amaury Habrard

In this paper, instead of bringing examples of the same class closer and pushing examples of different classes far away, we propose to move the examples with respect to virtual points.

Metric Learning

A New PAC-Bayesian Perspective on Domain Adaptation

1 code implementation • 15 Jun 2015 • Pascal Germain, Amaury Habrard, François Laviolette, Emilie Morvant

We study the issue of PAC-Bayesian domain adaptation: We want to learn, from a source domain, a majority vote model dedicated to a target one.

Domain Adaptation

PAC-Bayesian Theorems for Domain Adaptation with Specialization to Linear Classifiers

no code implementations • 24 Mar 2015 • Pascal Germain, Amaury Habrard, François Laviolette, Emilie Morvant

In this paper, we provide two main contributions in PAC-Bayesian theory for domain adaptation where the objective is to learn, from a source distribution, a well-performing majority vote on a different target distribution.

Domain Adaptation

Subspace Alignment For Domain Adaptation

no code implementations • 18 Sep 2014 • Basura Fernando, Amaury Habrard, Marc Sebban, Tinne Tuytelaars

We present two approaches to determine the only hyper-parameter of our method, which corresponds to the size of the subspaces.

Domain Adaptation
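
A compact sketch of the subspace alignment idea, assuming PCA bases of dimension d for each domain (the selection of d, the hyper-parameter discussed in the quoted sentence, is omitted):

```python
import numpy as np

def subspace_alignment(Xs, Xt, d):
    """Align the source's d-dimensional PCA subspace onto the target's and
    return both domains' coordinates in the aligned space.
    Sketch of the method; d is its single hyper-parameter."""
    def pca_basis(X, d):
        Xc = X - X.mean(axis=0)
        # Rows of Vt are principal directions (right singular vectors).
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        return Vt[:d].T                 # shape: (n_features, d)
    Ps, Pt = pca_basis(Xs, d), pca_basis(Xt, d)
    M = Ps.T @ Pt                       # closed-form alignment matrix
    return Xs @ (Ps @ M), Xt @ Pt       # aligned source, target coordinates
```

A classifier trained on the aligned source coordinates can then be applied directly to the target coordinates.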

Majority Vote of Diverse Classifiers for Late Fusion

no code implementations • 30 Apr 2014 • Emilie Morvant, Amaury Habrard, Stéphane Ayache

Our method is based on an order-preserving pairwise loss adapted to ranking, which allows us to improve the Mean Average Precision measure while taking into account the diversity of the voters we want to fuse.
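
An order-preserving pairwise loss of this kind typically takes a hinge form over mis-ordered pairs (a generic version, not necessarily the paper's exact formulation):

```latex
\mathcal{L} = \sum_{(i, j) \,:\, y_i > y_j} \max\big(0, \; 1 - (s_i - s_j)\big)
```

penalizing every pair in which an item $i$ that should rank higher than $j$ is not scored at least a unit margin above it.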

Dimension-free Concentration Bounds on Hankel Matrices for Spectral Learning

no code implementations • 21 Dec 2013 • François Denis, Mattias Gybels, Amaury Habrard

Existing concentration bounds seem to indicate that the concentration over $H_r$ gets looser as $H_r$ grows, suggesting a trade-off between the quantity of information used and the size of $H_r$.
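
For context, the Hankel matrix whose empirical estimate spectral learning methods decompose is indexed by prefixes $u$ and suffixes $v$ of strings:

```latex
H_f[u, v] = f(uv)
```

and $H_r$ denotes its restriction to a finite basis of prefixes and suffixes; the paper's dimension-free bounds show that this apparent loosening with the size of $H_r$ can be avoided.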

A Survey on Metric Learning for Feature Vectors and Structured Data

no code implementations • 28 Jun 2013 • Aurélien Bellet, Amaury Habrard, Marc Sebban

The need for appropriate ways to measure the distance or similarity between data is ubiquitous in machine learning, pattern recognition and data mining, but handcrafting such good metrics for specific problems is generally difficult.

BIG-bench Machine Learning • Metric Learning

Robustness and Generalization for Metric Learning

no code implementations • 5 Sep 2012 • Aurélien Bellet, Amaury Habrard

Metric learning has attracted a lot of interest over the last decade, but the generalization ability of such methods has not been thoroughly studied.

Generalization Bounds • Metric Learning

Similarity Learning for Provably Accurate Sparse Linear Classification

no code implementations • 27 Jun 2012 • Aurélien Bellet, Amaury Habrard, Marc Sebban

In recent years, the crucial importance of metrics in machine learning algorithms has led to an increasing interest for optimizing distance and similarity functions.

Classification • General Classification
