Search Results for author: Sylvain Arlot

Found 14 papers, 2 papers with code

One-Shot Federated Conformal Prediction

1 code implementation • 13 Feb 2023 • Pierre Humbert, Batiste Le Bars, Aurélien Bellet, Sylvain Arlot

In this paper, we introduce a conformal prediction method to construct prediction sets in a one-shot federated learning setting.

Conformal Prediction • Federated Learning
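As background for this entry, the following is a minimal sketch of the standard centralized split conformal step (the building block behind conformal prediction sets), not the paper's one-shot federated aggregation; the function name and the simulated residuals are illustrative assumptions.

```python
import numpy as np

def split_conformal_halfwidth(cal_residuals, alpha=0.1):
    """Half-width q of a split conformal prediction interval: given
    calibration residuals |y_i - f(x_i)| from a held-out set, the set
    [f(x) - q, f(x) + q] covers a new point with probability >= 1 - alpha,
    assuming exchangeability."""
    n = len(cal_residuals)
    # Finite-sample corrected quantile level ceil((n+1)(1-alpha))/n.
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(cal_residuals, level, method="higher")

rng = np.random.default_rng(0)
cal_residuals = np.abs(rng.normal(size=200))  # hypothetical residuals
q = split_conformal_halfwidth(cal_residuals, alpha=0.1)
print(f"predict with f(x) +/- {q:.3f}")
```

Shrinking `alpha` widens the set, since a larger fraction of calibration residuals must fall inside it.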

A Conditional Randomization Test for Sparse Logistic Regression in High-Dimension

no code implementations • 29 May 2022 • Binh T. Nguyen, Bertrand Thirion, Sylvain Arlot

Identifying the relevant variables for a classification model with correct confidence levels is a central but difficult task in high-dimension.

regression • Vocal Bursts Intensity Prediction

Online Orthogonal Matching Pursuit

no code implementations • 22 Nov 2020 • El Mehdi Saad, Gilles Blanchard, Sylvain Arlot

Greedy algorithms for feature selection are widely used for recovering sparse high-dimensional vectors in linear models.

feature selection • regression
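For context, here is a sketch of plain batch orthogonal matching pursuit, the greedy sparse-recovery algorithm this entry builds on; it is not the online variant the paper studies, and the synthetic data are illustrative.

```python
import numpy as np

def omp(X, y, k):
    """Orthogonal Matching Pursuit: greedily add the column most
    correlated with the current residual, then refit by least squares
    on the selected support."""
    d = X.shape[1]
    support, residual = [], y.copy()
    coef = np.zeros(d)
    for _ in range(k):
        j = int(np.argmax(np.abs(X.T @ residual)))  # most correlated column
        if j not in support:
            support.append(j)
        beta, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
        residual = y - X[:, support] @ beta          # re-orthogonalized residual
    coef[support] = beta
    return coef, support

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 20))
true = np.zeros(20)
true[[3, 7]] = [2.0, -1.5]                 # sparse ground truth
y = X @ true + 0.01 * rng.normal(size=100)
coef, support = omp(X, y, k=2)
print(sorted(support))
```

With low noise and a well-conditioned design, the selected support should match the nonzero coordinates of the ground truth.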

Aggregation of Multiple Knockoffs

2 code implementations • ICML 2020 • Tuan-Binh Nguyen, Jérôme-Alexis Chevalier, Bertrand Thirion, Sylvain Arlot

We develop an extension of the knockoff inference procedure introduced by Barber and Candès (2015).

Rejoinder on: Minimal penalties and the slope heuristics: a survey

no code implementations • 30 Sep 2019 • Sylvain Arlot

This text is the rejoinder following the discussion of a survey paper about minimal penalties and the slope heuristics (Arlot, 2019).

Model Selection • regression • +1

Aggregated Hold-Out

no code implementations • 11 Sep 2019 • Guillaume Maillard, Sylvain Arlot, Matthieu Lerasle

Aggregated hold-out (Agghoo) is a method which averages learning rules selected by hold-out (that is, cross-validation with a single split).

Binary Classification • General Classification
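The averaging idea behind Agghoo can be sketched as follows; this is an illustrative regression toy (the `make_ridge` learner family and all parameter choices are assumptions, not the paper's setup): on each random split, hold-out selects the learner with the smallest validation risk, and the selected predictors are then averaged.

```python
import numpy as np

def agghoo_predict(X, y, X_test, learners, n_splits=5, train_frac=0.8, seed=0):
    """Aggregated hold-out (Agghoo) sketch: for each random split, train
    every learner on the training part, select the one with the smallest
    hold-out risk, and average the selected predictors."""
    rng = np.random.default_rng(seed)
    n, preds = len(y), []
    for _ in range(n_splits):
        idx = rng.permutation(n)
        n_tr = int(train_frac * n)
        tr, va = idx[:n_tr], idx[n_tr:]
        best, best_risk = None, np.inf
        for make in learners:
            model = make(X[tr], y[tr])                   # train on the split
            risk = np.mean((model(X[va]) - y[va]) ** 2)  # hold-out risk
            if risk < best_risk:
                best, best_risk = model, risk
        preds.append(best(X_test))       # keep the selected predictor
    return np.mean(preds, axis=0)        # average across splits

def make_ridge(lam):
    """Hypothetical learner factory: ridge regression with penalty lam."""
    def fit(Xtr, ytr):
        d = Xtr.shape[1]
        w = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(d), Xtr.T @ ytr)
        return lambda Xq: Xq @ w
    return fit

rng = np.random.default_rng(2)
X = rng.normal(size=(120, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=120)
yhat = agghoo_predict(X[:100], y[:100], X[100:],
                      [make_ridge(lam) for lam in (0.01, 1.0, 100.0)])
print(f"test MSE: {np.mean((yhat - y[100:]) ** 2):.4f}")
```

Compared with plain hold-out, the averaging step reduces the variance caused by the arbitrary choice of a single split.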

Minimal penalties and the slope heuristics: a survey

no code implementations • 22 Jan 2019 • Sylvain Arlot

Explicit connections are made with residual-variance estimators (with an original contribution on this topic, showing that for this task the slope heuristics performs almost as well as a residual-based estimator with the best model choice) and with some classical algorithms such as the L-curve and elbow heuristics, Mallows' Cp, and Akaike's FPE.

Cross-validation

no code implementations • 9 Mar 2017 • Sylvain Arlot

This text is a survey on cross-validation.

Comments on: "A Random Forest Guided Tour" by G. Biau and E. Scornet

no code implementations • 6 Apr 2016 • Sylvain Arlot, Robin Genuer

This paper is a comment on the survey paper by Biau and Scornet (2016) about random forests.

Analysis of purely random forests bias

no code implementations • 15 Jul 2014 • Sylvain Arlot, Robin Genuer

Under some regularity assumptions on the regression function, we show that the bias of an infinite forest decreases at a faster rate (with respect to the size of each tree) than a single tree.

regression

Choice of V for V-Fold Cross-Validation in Least-Squares Density Estimation

no code implementations • 22 Oct 2012 • Sylvain Arlot, Matthieu Lerasle

Then, we compute the variance of V-fold cross-validation and related criteria, as well as the variance of key quantities for model selection performance.

Density Estimation • Model Selection
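A generic V-fold cross-validation risk estimate can be sketched as below; this is a regression illustration, not the paper's least-squares density-estimation setting, and the empirical variance across folds is a naive stand-in for the exact variance computations in the paper. The `ls_fit_predict` helper and all data are assumptions.

```python
import numpy as np

def vfold_cv_risk(X, y, fit_predict, V=5, seed=0):
    """V-fold cross-validation estimate of the prediction risk under
    squared loss: train on V-1 folds, evaluate on the held-out fold,
    and average the V hold-out risks."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), V)
    risks = []
    for v in range(V):
        train = np.concatenate([folds[u] for u in range(V) if u != v])
        yhat = fit_predict(X[train], y[train], X[folds[v]])
        risks.append(np.mean((yhat - y[folds[v]]) ** 2))
    # Mean risk estimate and naive empirical variance across folds.
    return np.mean(risks), np.var(risks)

def ls_fit_predict(Xtr, ytr, Xte):
    """Hypothetical base learner: ordinary least squares."""
    w, *_ = np.linalg.lstsq(Xtr, ytr, rcond=None)
    return Xte @ w

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)
mean_risk, var_risk = vfold_cv_risk(X, y, ls_fit_predict, V=5)
print(f"5-fold CV risk estimate: {mean_risk:.4f}")
```

With this low-noise synthetic data the risk estimate should land near the noise variance of 0.01; the choice of V trades off bias (small V, small training sets) against computation and variance, which is exactly the trade-off the paper quantifies.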

Data-driven calibration of linear estimators with minimal penalties

no code implementations • NeurIPS 2009 • Sylvain Arlot, Francis R. Bach

This paper tackles the problem of selecting among several linear estimators in non-parametric regression; this includes model selection for linear regression, the choice of a regularization parameter in kernel ridge regression or spline smoothing, and the choice of a kernel in multiple kernel learning.

Model Selection • regression
