Search Results for author: Mohamed Hebiri

Found 10 papers, 1 paper with code

EERO: Early Exit with Reject Option for Efficient Classification with limited budget

no code implementations · 6 Feb 2024 · Florian Valade, Mohamed Hebiri, Paul Gay

The increasing complexity of advanced machine learning models requires innovative approaches to manage computational resources effectively.

Regression with reject option and application to kNN

no code implementations · NeurIPS 2020 · Christophe Denis, Mohamed Hebiri, Ahmed Zaoui

We provide a semi-supervised estimation procedure of the optimal rule involving two datasets: a labeled dataset is used to estimate both the regression function and the conditional variance function, while a second, unlabeled dataset is exploited to calibrate the desired rejection rate.

regression
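The two-dataset calibration described in the abstract can be sketched in a few lines: a plug-in kNN rule rejects points whose estimated conditional variance is too high, and the rejection threshold is chosen as a quantile on the unlabeled sample so that roughly an eps fraction of points is rejected. This is an illustrative reconstruction, not the authors' code; the variance estimator (kNN on squared residuals) and all function names are assumptions.

```python
import numpy as np

def knn_predict(X_train, y_train, X, k=5):
    """Plain kNN regression: mean of the k nearest neighbours' targets."""
    d = np.linalg.norm(X[:, None, :] - X_train[None, :, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]
    return y_train[idx].mean(axis=1)

def fit_reject_rule(X_lab, y_lab, X_unlab, eps=0.1, k=5):
    # Labeled data: estimate the regression function and, via squared
    # kNN residuals, the conditional variance (one possible plug-in choice).
    mu = knn_predict(X_lab, y_lab, X_lab, k)
    resid2 = (y_lab - mu) ** 2

    # Unlabeled data: score points by predicted variance and place the
    # threshold at the (1 - eps) quantile, targeting a rejection rate of eps.
    var_unlab = knn_predict(X_lab, resid2, X_unlab, k)
    tau = np.quantile(var_unlab, 1 - eps)

    def predict_with_reject(X):
        pred = knn_predict(X_lab, y_lab, X, k)
        var = knn_predict(X_lab, resid2, X, k)
        return pred, var > tau  # reject where estimated variance is high
    return predict_with_reject
```

By construction, on data drawn from the same distribution as the unlabeled sample, the fraction of rejected points is approximately eps.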

Layer Sparsity in Neural Networks

no code implementations · 28 Jun 2020 · Mohamed Hebiri, Johannes Lederer

Sparsity has become popular in machine learning because it can save computational resources, facilitate interpretation, and prevent overfitting.

BIG-bench Machine Learning

On the benefits of output sparsity for multi-label classification

no code implementations · 14 Mar 2017 · Evgenii Chzhen, Christophe Denis, Mohamed Hebiri, Joseph Salmon

Modern multi-label problems are typically large-scale in the number of observations, features, and labels, and the number of labels can even be comparable to the number of observations.

Classification · General Classification · +2

On the Prediction Performance of the Lasso

no code implementations · 7 Feb 2014 · Arnak S. Dalalyan, Mohamed Hebiri, Johannes Lederer

Although the Lasso has been extensively studied, the relationship between its prediction performance and the correlations of the covariates is not fully understood.

regression

Learning Heteroscedastic Models by Convex Programming under Group Sparsity

no code implementations · 16 Apr 2013 · Arnak S. Dalalyan, Mohamed Hebiri, Katia Méziani, Joseph Salmon

Popular sparse estimation methods based on $\ell_1$-relaxation, such as the Lasso and the Dantzig selector, require knowledge of the variance of the noise in order to tune the regularization parameter properly.

Time Series · Time Series Analysis
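The dependence on the noise level mentioned in the abstract can be made explicit with the classical theory-driven tuning $\lambda = \sigma\sqrt{2\log p / n}$ for the Lasso: computing $\lambda$ requires the noise standard deviation $\sigma$, which is rarely known in practice. The sketch below (a minimal coordinate-descent Lasso in plain NumPy; `lasso_cd` and the synthetic data are illustrative assumptions, not from the paper) shows how $\lambda$ scales linearly with $\sigma$.

```python
import numpy as np

def lasso_cd(X, y, lam, n_sweeps=200):
    """Minimal coordinate-descent Lasso for (1/(2n))||y - Xb||^2 + lam*||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n           # per-coordinate curvature
    for _ in range(n_sweeps):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]      # residual with coordinate j removed
            rho = X[:, j] @ r / n
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return b

rng = np.random.default_rng(0)
n, p, sigma = 200, 50, 0.5
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = 1.0                                   # 3-sparse ground truth
y = X @ beta + sigma * rng.normal(size=n)

# The classical choice scales linearly with sigma -- exactly the
# knowledge of the noise variance that the paper seeks to dispense with.
lam = sigma * np.sqrt(2 * np.log(p) / n)
b_hat = lasso_cd(X, y, lam)
```

If $\sigma$ is misspecified, $\lambda$ is misspecified by the same factor, which is why variance-free alternatives are attractive.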
