no code implementations • 6 Feb 2024 • Florian Valade, Mohamed Hebiri, Paul Gay
The increasing complexity of advanced machine learning models requires innovative approaches to manage computational resources effectively.
no code implementations • 24 Feb 2021 • Evgenii Chzhen, Christophe Denis, Mohamed Hebiri, Titouan Lorieul
The multi-class classification problem is among the most popular and well-studied statistical frameworks.
no code implementations • NeurIPS 2020 • Evgenii Chzhen, Christophe Denis, Mohamed Hebiri, Luca Oneto, Massimiliano Pontil
We study the problem of learning an optimal regression function subject to a fairness constraint.
no code implementations • NeurIPS 2020 • Christophe Denis, Mohamed Hebiri, Ahmed Zaoui
We provide a semi-supervised estimation procedure of the optimal rule involving two datasets: a labeled dataset is used to estimate both the regression function and the conditional variance function, while an unlabeled dataset is exploited to calibrate the desired rejection rate.
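The two-dataset procedure described above can be sketched in a few lines. This is a minimal illustration, not the authors' estimator: the least-squares fits, the synthetic data, and the names `f_hat`, `var_hat`, `tau`, and `eps` are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Labeled set: used to estimate the regression function and the
# conditional variance function (plain least-squares fits here).
n = 500
X_lab = rng.uniform(-1, 1, size=(n, 1))
noise_sd = 0.3 + 0.5 * (X_lab[:, 0] + 1)          # heteroscedastic noise
y_lab = 2.0 * X_lab[:, 0] + noise_sd * rng.normal(size=n)

A = np.c_[np.ones(n), X_lab]
beta, *_ = np.linalg.lstsq(A, y_lab, rcond=None)

def f_hat(X):
    """Estimated regression function (linear least squares)."""
    return np.c_[np.ones(len(X)), X] @ beta

# Crude conditional-variance estimate: regress squared residuals on X.
res2 = (y_lab - f_hat(X_lab)) ** 2
gamma, *_ = np.linalg.lstsq(A, res2, rcond=None)

def var_hat(X):
    """Estimated conditional variance, clipped away from zero."""
    return np.clip(np.c_[np.ones(len(X)), X] @ gamma, 1e-6, None)

# Unlabeled set: calibrate the threshold so that a desired fraction
# eps of points is rejected (eps = 0.10 is a hypothetical choice).
X_unl = rng.uniform(-1, 1, size=(2000, 1))
eps = 0.10
tau = np.quantile(var_hat(X_unl), 1 - eps)

def predict_with_reject(X):
    """Predict, abstaining (None) where the estimated variance exceeds tau."""
    return [p if v <= tau else None
            for p, v in zip(f_hat(X), var_hat(X))]

preds = predict_with_reject(X_unl)
rate = sum(p is None for p in preds) / len(preds)   # close to eps by construction
```

Because the threshold is the empirical `1 - eps` quantile of the estimated variances on the unlabeled set, the realized rejection rate matches the desired one on that set without needing any labels.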
no code implementations • 28 Jun 2020 • Mohamed Hebiri, Johannes Lederer
Sparsity has become popular in machine learning, because it can save computational resources, facilitate interpretations, and prevent overfitting.
no code implementations • NeurIPS 2020 • Evgenii Chzhen, Christophe Denis, Mohamed Hebiri, Luca Oneto, Massimiliano Pontil
This fairness constraint demands that the distribution of the predicted output be independent of the sensitive attribute.
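Distributional independence of predictions from the sensitive attribute can be checked empirically by comparing the per-group prediction distributions. A minimal sketch on synthetic data, using a two-sample Kolmogorov–Smirnov distance; the data, the shift of 0.8, and the name `dp_gap` are illustrative assumptions, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical predictions and a binary sensitive attribute.
s = rng.integers(0, 2, size=10000)        # sensitive attribute
y_fair = rng.normal(0, 1, size=10000)     # predictions independent of s
y_biased = y_fair + 0.8 * s               # predictions shifted for one group

def dp_gap(pred, sens):
    """Kolmogorov-Smirnov distance between the per-group prediction
    distributions: near zero when the constraint holds."""
    a = np.sort(pred[sens == 0])
    b = np.sort(pred[sens == 1])
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return np.max(np.abs(cdf_a - cdf_b))

gap_fair = dp_gap(y_fair, s)      # small: distributions coincide
gap_biased = dp_gap(y_biased, s)  # large: one group is shifted
```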
1 code implementation • NeurIPS 2019 • Evgenii Chzhen, Christophe Denis, Mohamed Hebiri, Luca Oneto, Massimiliano Pontil
We study the problem of fair binary classification using the notion of Equal Opportunity.
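Equal Opportunity requires the true-positive rate to be the same across sensitive groups; the gap between group-wise TPRs is a standard way to measure the violation. A minimal sketch on synthetic scores, where the classifier, the 0.5 group shift, and `equal_opportunity_gap` are hypothetical names introduced for the example:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical scores, binary labels, and a binary sensitive attribute.
n = 20000
s = rng.integers(0, 2, size=n)
y = rng.integers(0, 2, size=n)
score = y + rng.normal(0, 1, size=n) + 0.5 * s   # scores favour group s=1
y_hat = (score > 0.75).astype(int)

def equal_opportunity_gap(y_true, y_pred, sens):
    """Absolute difference in true-positive rates between the two groups."""
    tpr = []
    for g in (0, 1):
        mask = (sens == g) & (y_true == 1)
        tpr.append(y_pred[mask].mean())
    return abs(tpr[0] - tpr[1])

gap = equal_opportunity_gap(y, y_hat, s)  # nonzero: the classifier is unfair
```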
no code implementations • 14 Mar 2017 • Evgenii Chzhen, Christophe Denis, Mohamed Hebiri, Joseph Salmon
Modern multi-label problems are typically large-scale in the number of observations, features, and labels; the number of labels can even be comparable to the number of observations.
no code implementations • 7 Feb 2014 • Arnak S. Dalalyan, Mohamed Hebiri, Johannes Lederer
Although the Lasso has been extensively studied, the relationship between its prediction performance and the correlations of the covariates is not fully understood.
no code implementations • 16 Apr 2013 • Arnak S. Dalalyan, Mohamed Hebiri, Katia Méziani, Joseph Salmon
Popular sparse estimation methods based on $\ell_1$-relaxation, such as the Lasso and the Dantzig selector, require the knowledge of the variance of the noise in order to properly tune the regularization parameter.
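To see why the regularization level depends on the noise variance, consider an orthonormal design, where the Lasso reduces to componentwise soft-thresholding and the usual universal choice of the parameter scales linearly with the noise level $\sigma$. This is only an illustrative sketch of that dependence, not the estimators studied in the paper; the constants and signal strength are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def soft_threshold(z, lam):
    """Lasso solution for an orthonormal design: soft-thresholding."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

n, p, sigma = 400, 50, 2.0
beta = np.zeros(p)
beta[:5] = 15.0                               # sparse ground truth

X, _ = np.linalg.qr(rng.normal(size=(n, p)))  # orthonormal columns
y = X @ beta + sigma * rng.normal(size=n)
z = X.T @ y                                   # sufficient statistic: beta + noise

# The "universal" regularization level scales with the (unknown) sigma:
# without knowing sigma, this threshold cannot be set properly.
lam = sigma * np.sqrt(2 * np.log(p))
beta_hat = soft_threshold(z, lam)

support = set(np.flatnonzero(beta_hat))       # recovers the true support
```

With the threshold tied to `sigma`, the noise coordinates are zeroed out while the strong true coefficients survive; an over- or under-estimated noise level would shift `lam` and break this balance, which is exactly the tuning problem the snippet above alludes to.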