Search Results for author: Gilles Blanchard

Found 34 papers, 6 papers with code

Estimation of multiple mean vectors in high dimension

no code implementations22 Mar 2024 Gilles Blanchard, Jean-Baptiste Fermanian, Hannah Marienwald

We endeavour to estimate numerous multi-dimensional means of various probability distributions on a common space based on independent samples.

Transductive conformal inference with adaptive scores

1 code implementation27 Oct 2023 Ulysse Gazin, Gilles Blanchard, Etienne Roquain

Conformal inference is a fundamental and versatile tool that provides distribution-free guarantees for many machine learning tasks.
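
The paper studies transductive conformal inference with adaptive scores; as a point of reference only, here is a minimal split-conformal regression sketch (the predictor, score function, and level are illustrative choices, not the paper's procedure):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data.
x = rng.normal(size=200)
y = 2.0 * x + rng.normal(scale=0.5, size=200)

# Split the data: one half fits the predictor, the other calibrates.
x_fit, y_fit, x_cal, y_cal = x[:100], y[:100], x[100:], y[100:]

# Illustrative predictor: least-squares slope through the origin.
slope = np.sum(x_fit * y_fit) / np.sum(x_fit ** 2)

# Nonconformity scores on the calibration set (absolute residuals).
scores = np.abs(y_cal - slope * x_cal)

# Split-conformal quantile at miscoverage level alpha: with n calibration
# points, take the ceil((n + 1)(1 - alpha))-th smallest score.
alpha = 0.1
n = len(scores)
q = np.sort(scores)[int(np.ceil((n + 1) * (1 - alpha))) - 1]

# Under exchangeability, [slope * x0 - q, slope * x0 + q] covers the new
# response with probability at least 1 - alpha, with no distributional assumption.
lower, upper = slope * 1.0 - q, slope * 1.0 + q
```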

Novelty Detection Transfer Learning

Label Shift Quantification with Robustness Guarantees via Distribution Feature Matching

no code implementations7 Jun 2023 Bastien Dussap, Gilles Blanchard, Badr-Eddine Chérief-Abdellatif

Quantification learning deals with the task of estimating the target label distribution under label shift.
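
The matching idea can be illustrated with a toy sketch: match a feature statistic of the unlabeled target sample against a mixture of class-conditional source statistics. The identity feature map and crude simplex projection below are placeholder choices; the paper's estimator uses kernel-based distribution feature matching with robustness guarantees, which this does not reproduce:

```python
import numpy as np

rng = np.random.default_rng(1)

# Source: two Gaussian classes in 2-D with known labels.
n = 500
mu = np.array([[1.0, 0.0], [0.0, 2.0]])  # class-conditional means
y_src = rng.integers(0, 2, size=n)
x_src = mu[y_src] + 0.3 * rng.normal(size=(n, 2))

# Target: same class-conditionals, shifted label distribution (70% class 1).
y_tgt = (rng.random(2000) < 0.7).astype(int)
x_tgt = mu[y_tgt] + 0.3 * rng.normal(size=(2000, 2))

# Feature matching with the identity feature map: solve
#   mean(x_tgt) ~ pi_0 * mean(x_src | y=0) + pi_1 * mean(x_src | y=1).
M = np.stack([x_src[y_src == c].mean(axis=0) for c in (0, 1)], axis=1)
b = x_tgt.mean(axis=0)
pi, *_ = np.linalg.lstsq(M, b, rcond=None)
pi = np.clip(pi, 0, None)
pi /= pi.sum()  # crude projection back to the probability simplex
```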

Covariance Adaptive Best Arm Identification

no code implementations5 Jun 2023 El Mehdi Saad, Gilles Blanchard, Nicolas Verzelen

This framework allows the learner to estimate the covariance among the arms distributions, enabling a more efficient identification of the best arm.

Statistical learning on measures: an application to persistence diagrams

1 code implementation15 Mar 2023 Olympio Hacquard, Gilles Blanchard, Clément Levrard

We consider a binary supervised learning classification problem where instead of having data in a finite-dimensional Euclidean space, we observe measures on a compact space $\mathcal{X}$.

Fast rates for prediction with limited expert advice

no code implementations NeurIPS 2021 El Mehdi Saad, Gilles Blanchard

We investigate the problem of minimizing the excess generalization error with respect to the best expert prediction in a finite family in the stochastic setting, under limited access to information.

Topologically penalized regression on manifolds

no code implementations26 Oct 2021 Olympio Hacquard, Krishnakumar Balasubramanian, Gilles Blanchard, Clément Levrard, Wolfgang Polonik

We study a regression problem on a compact manifold M. In order to take advantage of the underlying geometry and topology of the data, the regression task is performed on the basis of the first several eigenfunctions of the Laplace-Beltrami operator of the manifold, that are regularized with topological penalties.
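
On the circle, the Laplace-Beltrami eigenfunctions are the Fourier basis, so the eigenfunction-regression part is easy to sketch; a plain ridge penalty stands in below for the paper's topological penalties:

```python
import numpy as np

rng = np.random.default_rng(7)

# Points on the circle S^1, where the Laplace-Beltrami eigenfunctions
# are cos(k t) and sin(k t).
n = 300
t = rng.uniform(0, 2 * np.pi, size=n)
y = np.sin(2 * t) + 0.5 * np.cos(3 * t) + 0.1 * rng.normal(size=n)

# Design matrix of the constant plus the first K eigenfunction pairs.
K = 5
Phi = np.column_stack(
    [np.ones(n)] + [f(k * t) for k in range(1, K + 1) for f in (np.cos, np.sin)]
)

# Regularized least squares in the eigenbasis; the quadratic penalty here
# is a placeholder for the topological penalties used in the paper.
lam = 1e-3
coef = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)
```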

regression

Error rate control for classification rules in multiclass mixture models

no code implementations29 Sep 2021 Tristan Mary-Huard, Vittorio Perduca, Gilles Blanchard, Marie-Laure Martin-Magniette

In the context of finite mixture models one considers the problem of classifying as many observations as possible in the classes of interest while controlling the classification error rate in these same classes.

Classification

Nonasymptotic one- and two-sample tests in high dimension with unknown covariance structure

no code implementations1 Sep 2021 Gilles Blanchard, Jean-Baptiste Fermanian

Particular attention is given to the dependence on the pseudo-dimension $d_*$ of the distribution, defined as $d_* := \|\Sigma\|_2^2/\|\Sigma\|_\infty^2$.
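
Reading $\|\Sigma\|_2$ as the Frobenius norm and $\|\Sigma\|_\infty$ as the operator norm (an assumption about the notation), $d_*$ is the stable rank of $\Sigma$ and is simple to compute:

```python
import numpy as np

# Hypothetical covariance with eigenvalues (4, 1, 1, 1).
Sigma = np.diag([4.0, 1.0, 1.0, 1.0])

# Assuming ||.||_2 is the Frobenius norm and ||.||_inf the operator norm,
# d_* = ||Sigma||_F^2 / ||Sigma||_op^2, i.e. the stable rank of Sigma.
d_star = np.linalg.norm(Sigma, "fro") ** 2 / np.linalg.norm(Sigma, 2) ** 2

# d_* lies between 1 (one dominant direction) and the rank of Sigma;
# here it is (16 + 1 + 1 + 1) / 16 = 1.1875.
```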

Online Orthogonal Matching Pursuit

no code implementations22 Nov 2020 El Mehdi Saad, Gilles Blanchard, Sylvain Arlot

Greedy algorithms for feature selection are widely used for recovering sparse high-dimensional vectors in linear models.
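
The classical batch form of orthogonal matching pursuit is easy to sketch (the paper's online variant differs; this only shows the greedy select-then-refit loop on synthetic data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Sparse linear model: y = X w + noise, with 2 active features out of 20.
n, d = 100, 20
X = rng.normal(size=(n, d))
w = np.zeros(d)
w[3], w[7] = 2.0, -1.5
y = X @ w + 0.01 * rng.normal(size=n)

# Orthogonal Matching Pursuit: greedily pick the feature most correlated
# with the current residual, then refit least squares on the support.
support, residual = [], y.copy()
for _ in range(2):
    corr = np.abs(X.T @ residual)
    corr[support] = -np.inf  # do not reselect chosen features
    support.append(int(np.argmax(corr)))
    coef, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
    residual = y - X[:, support] @ coef

# With this low noise level, greedy selection recovers the active set {3, 7}.
```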

feature selection regression

High-Dimensional Multi-Task Averaging and Application to Kernel Mean Embedding

no code implementations13 Nov 2020 Hannah Marienwald, Jean-Baptiste Fermanian, Gilles Blanchard

We propose an improved estimator for the multi-task averaging problem, whose goal is the joint estimation of the means of multiple distributions using separate, independent data sets.
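
The paper aggregates information across tasks adaptively; the sketch below only shows the underlying effect with a fixed shrinkage toward the grand mean (the weight and data model are arbitrary, not the paper's estimator): when the task means are close, sharing information beats the naive per-task means.

```python
import numpy as np

rng = np.random.default_rng(3)

# B tasks with nearby true means in dimension d; N samples per task.
B, d, N = 50, 20, 10
true_means = rng.normal(scale=0.2, size=(B, d))  # tasks cluster near 0
samples = true_means[:, None, :] + rng.normal(size=(B, N, d))

naive = samples.mean(axis=1)   # per-task empirical means
pooled = naive.mean(axis=0)    # grand mean over all tasks

# Shrink each naive mean toward the pooled mean (fixed weight for the sketch).
lam = 0.8
shrunk = (1 - lam) * naive + lam * pooled

mse_naive = np.mean((naive - true_means) ** 2)
mse_shrunk = np.mean((shrunk - true_means) ** 2)
```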

Statistical Learning Guarantees for Compressive Clustering and Compressive Mixture Modeling

no code implementations17 Apr 2020 Rémi Gribonval, Gilles Blanchard, Nicolas Keriven, Yann Traonmilin

We provide statistical learning guarantees for two unsupervised learning tasks in the context of compressive statistical learning, a general framework for resource-efficient large-scale learning that we introduced in a companion paper. The principle of compressive statistical learning is to compress a training collection, in one pass, into a low-dimensional sketch (a vector of random empirical generalized moments) that captures the information relevant to the considered learning task.

Clustering

Volume Doubling Condition and a Local Poincaré Inequality on Unweighted Random Geometric Graphs

no code implementations6 Jul 2019 Franziska Göbel, Gilles Blanchard

The aim of this paper is to establish two fundamental measure-metric properties of particular random geometric graphs.

Efficient Regularized Piecewise-Linear Regression Trees

no code implementations29 Jun 2019 Leonidas Lefakis, Oleksandr Zadorozhnyi, Gilles Blanchard

We present a detailed analysis of the class of regression decision tree algorithms which employ a regularized piecewise-linear node-splitting criterion and have regularized linear models at the leaves.

regression Variable Selection

Restless dependent bandits with fading memory

no code implementations25 Jun 2019 Oleksandr Zadorozhnyi, Gilles Blanchard, Alexandra Carpentier

The analysis of the slow-mixing scenario is supported by a minimax lower bound, which (up to a $\log(T)$ factor) matches the obtained upper bound.

Convergence analysis of Tikhonov regularization for non-linear statistical inverse learning problems

no code implementations14 Feb 2019 Abhishake Rastogi, Gilles Blanchard, Peter Mathé

We study a non-linear statistical inverse learning problem, where we observe the noisy image of a quantity through a non-linear operator at some random design points.

Concentration of weakly dependent Banach-valued sums and applications to statistical learning methods

no code implementations5 Dec 2017 Gilles Blanchard, Oleksandr Zadorozhnyi

We obtain a Bernstein-type inequality for sums of Banach-valued random variables satisfying a weak dependence assumption of general type and under certain smoothness assumptions of the underlying Banach norm.

Domain Generalization by Marginal Transfer Learning

2 code implementations21 Nov 2017 Gilles Blanchard, Aniket Anand Deshmukh, Urun Dogan, Gyemin Lee, Clayton Scott

In the problem of domain generalization (DG), there are labeled training data sets from several related prediction problems, and the goal is to make accurate predictions on future unlabeled data sets that are not known to the learner.

Domain Generalization General Classification +1

Early stopping for statistical inverse problems via truncated SVD estimation

1 code implementation19 Oct 2017 Gilles Blanchard, Marc Hoffmann, Markus Reiß

We consider truncated SVD (or spectral cut-off, projection) estimators for a prototypical statistical inverse problem in dimension $D$.
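
Working directly in the singular basis of a diagonal toy operator (an illustrative setup, not the paper's), the spectral cut-off estimator and its bias-variance tradeoff look like this:

```python
import numpy as np

rng = np.random.default_rng(4)

# Prototypical inverse problem in the singular basis: A = diag(s), Y = s*mu + noise.
D = 50
s = 1.0 / np.arange(1, D + 1) ** 2     # polynomially decaying singular values
mu = 1.0 / np.arange(1, D + 1) ** 1.5  # smooth unknown signal
y = s * mu + 1e-4 * rng.normal(size=D)

# Truncated SVD (spectral cut-off) estimator: invert only the first m components.
def tsvd(m):
    est = np.zeros(D)
    est[:m] = y[:m] / s[:m]
    return est

# Squared error trades off bias (truncated signal components) against
# variance (noise amplified by the small singular values).
errors = [np.sum((tsvd(m) - mu) ** 2) for m in range(1, D + 1)]
m_best = 1 + int(np.argmin(errors))
```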

Statistics Theory (65J20, 62G07)

Compressive Statistical Learning with Random Feature Moments

no code implementations22 Jun 2017 Rémi Gribonval, Gilles Blanchard, Nicolas Keriven, Yann Traonmilin

We describe a general framework -- compressive statistical learning -- for resource-efficient large-scale learning: the training collection is compressed in one pass into a low-dimensional sketch (a vector of random empirical generalized moments) that captures the information relevant to the considered learning task.
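
The sketching step itself is easy to illustrate with random Fourier features: averaging cos/sin projections approximates the characteristic function at random frequencies, so two samples from the same distribution give nearly identical sketches. Frequencies and sample sizes below are arbitrary, and the decoding step that actually learns from the sketch is omitted:

```python
import numpy as np

rng = np.random.default_rng(6)

def sketch(X, W):
    # Random Fourier feature moments: empirical average of e^{i w^T x},
    # stored as concatenated cosine and sine parts.
    Z = X @ W.T
    return np.concatenate([np.cos(Z).mean(axis=0), np.sin(Z).mean(axis=0)])

d, m = 2, 100
W = rng.normal(size=(m, d))  # random frequencies

A = rng.normal(size=(5000, d))        # sample from distribution P
B = rng.normal(size=(5000, d))        # another sample from P
C = rng.normal(size=(5000, d)) + 3.0  # sample from a shifted distribution Q

sA, sB, sC = sketch(A, W), sketch(B, W), sketch(C, W)
# Sketches of the two P-samples nearly coincide; the Q-sketch differs.
```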

Clustering

Kernel regression, minimax rates and effective dimensionality: beyond the regular case

no code implementations12 Nov 2016 Gilles Blanchard, Nicole Mücke

These questions have been considered in past literature, but only under specific assumptions about the decay, typically polynomial, of the spectrum of the kernel mapping covariance operator.

regression

Parallelizing Spectral Algorithms for Kernel Learning

no code implementations24 Oct 2016 Gilles Blanchard, Nicole Mücke

We consider a distributed learning approach in supervised learning for a large class of spectral regularization methods in an RKHS framework.

regression

Optimal adaptation for early stopping in statistical inverse problems

1 code implementation24 Jun 2016 Gilles Blanchard, Marc Hoffmann, Markus Reiß

For linear inverse problems $Y=\mathsf{A}\mu+\xi$, it is classical to recover the unknown signal $\mu$ by iterative regularisation methods $(\widehat \mu^{(m)}, m=0, 1,\ldots)$ and halt at a data-dependent iteration $\tau$ using some stopping rule, typically based on a discrepancy principle, so that the weak (or prediction) squared-error $\|\mathsf{A}(\widehat \mu^{(\tau)}-\mu)\|^2$ is controlled.
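
A minimal discrepancy-principle sketch in this spirit (diagonal toy operator, Landweber iteration; the constants are illustrative): iterate and halt the first time the residual norm drops to the noise level.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy problem Y = A mu + delta * xi in the singular basis, A = diag(s).
D = 100
s = 1.0 / np.arange(1, D + 1)
mu = 1.0 / np.arange(1, D + 1) ** 1.5
delta = 1e-2
y = s * mu + delta * rng.normal(size=D)

# Landweber iteration mu <- mu + A^T (Y - A mu), halted by a discrepancy
# principle: stop once ||Y - A mu|| <= kappa * sqrt(D) * delta, i.e. once
# the residual is at the level of the noise.
kappa = 1.1
mu_hat = np.zeros(D)
for m in range(100_000):
    residual = y - s * mu_hat
    if np.linalg.norm(residual) <= kappa * np.sqrt(D) * delta:
        break
    mu_hat += s * residual
```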

Statistics Theory (65J20, 62G07)

Optimal Rates For Regularization Of Statistical Inverse Learning Problems

no code implementations14 Apr 2016 Gilles Blanchard, Nicole Mücke

We consider a statistical inverse learning problem, where we observe the image of a function $f$ through a linear operator $A$ at i.i.d.

Permutational Rademacher Complexity: a New Complexity Measure for Transductive Learning

no code implementations12 May 2015 Ilya Tolstikhin, Nikita Zhivotovskiy, Gilles Blanchard

This paper introduces a new complexity measure for transductive learning called Permutational Rademacher Complexity (PRC) and studies its properties.

Learning Theory Transductive Learning

Localized Complexities for Transductive Learning

no code implementations26 Nov 2014 Ilya Tolstikhin, Gilles Blanchard, Marius Kloft

We show two novel concentration inequalities for suprema of empirical processes when sampling without replacement, which both take the variance of the functions into account.

Learning Theory Transductive Learning

Extensions of stability selection using subsamples of observations and covariates

no code implementations18 Jul 2014 Andre Beinrucker, Ürün Dogan, Gilles Blanchard

We introduce extensions of stability selection, a method to stabilise variable selection methods introduced by Meinshausen and Bühlmann (J R Stat Soc 72:417-473, 2010).

Variable Selection

Classification with Asymmetric Label Noise: Consistency and Maximal Denoising

no code implementations5 Mar 2013 Gilles Blanchard, Marek Flaska, Gregory Handy, Sara Pozzi, Clayton Scott

For any label noise problem, there is a unique pair of true class-conditional distributions satisfying the proposed conditions, and we argue that this pair corresponds in a certain sense to maximal denoising of the observed distributions.

Classification Denoising +1

The Local Rademacher Complexity of Lp-Norm Multiple Kernel Learning

no code implementations NeurIPS 2011 Marius Kloft, Gilles Blanchard

We derive an upper bound on the local Rademacher complexity of Lp-norm multiple kernel learning, which yields a tighter excess risk bound than global approaches.

Optimal learning rates for Kernel Conjugate Gradient regression

no code implementations NeurIPS 2010 Gilles Blanchard, Nicole Krämer

Lower bounds on attainable rates depending on these two quantities were established in earlier literature, and we obtain upper bounds for the considered method that match these lower bounds (up to a log factor) if the true regression function belongs to the reproducing kernel Hilbert space.

regression Supervised dimensionality reduction
