
no code implementations • 30 Sep 2022 • Tanguy Lefort, Benjamin Charlier, Alexis Joly, Joseph Salmon

We adapt the AUM to identify ambiguous tasks in crowdsourced learning scenarios, introducing the Weighted AUM (WAUM).

no code implementations • 4 Jul 2022 • Paul Mangold, Aurélien Bellet, Joseph Salmon, Marc Tommasi

In this paper, we study differentially private empirical risk minimization (DP-ERM).

1 code implementation • 27 Jun 2022 • Thomas Moreau, Mathurin Massias, Alexandre Gramfort, Pierre Ablin, Pierre-Antoine Bannier, Benjamin Charlier, Mathieu Dagréou, Tom Dupré La Tour, Ghislain Durif, Cassio F. Dantas, Quentin Klopfenstein, Johan Larsson, En Lai, Tanguy Lefort, Benoit Malézieux, Badr Moufad, Binh T. Nguyen, Alain Rakotomamonjy, Zaccharie Ramzi, Joseph Salmon, Samuel Vaiter

Numerical validation is at the core of machine learning research, as it allows researchers to assess the actual impact of new methods and to confirm the agreement between theory and practice.

1 code implementation • 4 Feb 2022 • Camille Garcin, Maximilien Servajean, Alexis Joly, Joseph Salmon

In modern classification tasks, the number of labels is getting larger and larger, as is the size of the datasets encountered in practice.

no code implementations • 15 Dec 2021 • Hashem Ghanem, Joseph Salmon, Nicolas Keriven, Samuel Vaiter

In most situations, this dictionary is not known, and is to be recovered from pairs of ground-truth signals and measurements, by minimizing the reconstruction error.

1 code implementation • 4 Nov 2021 • Kenan Šehić, Alexandre Gramfort, Joseph Salmon, Luigi Nardi

While Weighted Lasso sparse regression has appealing statistical guarantees that would entail a major real-world impact in finance, genomics, and brain imaging applications, it is typically scarcely adopted due to its complex high-dimensional search space composed of thousands of hyperparameters.

no code implementations • 22 Oct 2021 • Paul Mangold, Aurélien Bellet, Joseph Salmon, Marc Tommasi

In this paper, we propose Differentially Private proximal Coordinate Descent (DP-CD), a new method to solve composite DP-ERM problems.
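The flavor of a differentially private coordinate descent method can be sketched as follows, here on ridge regression rather than a general composite problem, and with a free noise scale `sigma`; a real DP guarantee would require calibrating the noise to the per-coordinate gradient sensitivity and a privacy budget, which is omitted. All names are illustrative, not the paper's implementation.

```python
import numpy as np

def dp_cd_ridge(X, y, lam, sigma, n_iter=500, seed=0):
    """Sketch of noisy coordinate descent on ridge regression:
    0.5 * ||X w - y||^2 + 0.5 * lam * ||w||^2.
    Each step takes an exact coordinate minimization direction and
    perturbs the coordinate gradient with Gaussian noise of scale sigma
    (privacy accounting deliberately omitted in this sketch)."""
    n, d = X.shape
    rng = np.random.default_rng(seed)
    col_sq = np.sum(X ** 2, axis=0)            # per-coordinate curvature
    w = np.zeros(d)
    for _ in range(n_iter):
        j = rng.integers(d)                     # pick a random coordinate
        grad_j = X[:, j] @ (X @ w - y) + lam * w[j]
        grad_j += sigma * rng.standard_normal() # noise for privacy
        w[j] -= grad_j / (col_sq[j] + lam)      # exact coordinate step
    return w
```

With small `sigma`, the iterates approach the ridge solution; larger noise trades accuracy for privacy.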

1 code implementation • 27 Jun 2021 • Lang Liu, Joseph Salmon, Zaid Harchaoui

The widespread use of machine learning algorithms calls for automatic change detection algorithms to monitor their behavior over time.

1 code implementation • 4 Jun 2021 • Jérôme-Alexis Chevalier, Tuan-Binh Nguyen, Bertrand Thirion, Joseph Salmon

This calls for a reformulation of the statistical inference problem, that takes into account the underlying spatial structure: if covariates are locally correlated, it is acceptable to detect them up to a given spatial uncertainty.

1 code implementation • 4 May 2021 • Quentin Bertrand, Quentin Klopfenstein, Mathurin Massias, Mathieu Blondel, Samuel Vaiter, Alexandre Gramfort, Joseph Salmon

Finding the optimal hyperparameters of a model can be cast as a bilevel optimization problem, typically solved using zero-order techniques.
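The bilevel structure mentioned above can be made concrete with a minimal sketch. Below, grid search plays the role of the zero-order technique (it only evaluates the outer objective, never differentiates it), and ridge regression stands in for the inner problem because it has a closed-form solver; the data and all function names are illustrative.

```python
import numpy as np

def inner_solve(X_tr, y_tr, alpha):
    """Inner problem: fit the model on the training split (ridge, closed form)."""
    d = X_tr.shape[1]
    return np.linalg.solve(X_tr.T @ X_tr + alpha * np.eye(d), X_tr.T @ y_tr)

def outer_objective(X_val, y_val, w):
    """Outer problem: held-out validation loss of the fitted model."""
    return np.mean((X_val @ w - y_val) ** 2)

def grid_search(X_tr, y_tr, X_val, y_val, alphas):
    """Zero-order bilevel optimization: evaluate each alpha, keep the best."""
    losses = [outer_objective(X_val, y_val, inner_solve(X_tr, y_tr, a))
              for a in alphas]
    return alphas[int(np.argmin(losses))], losses

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
w_true = np.zeros(10); w_true[:3] = 1.0
y = X @ w_true + 0.1 * rng.standard_normal(100)
X_tr, X_val, y_tr, y_val = X[:70], X[70:], y[:70], y[70:]
best_alpha, _ = grid_search(X_tr, y_tr, X_val, y_val, np.logspace(-3, 2, 20))
```

First-order approaches, as in the paper, replace the exhaustive grid evaluation with gradients of the outer objective with respect to the hyperparameter.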

no code implementations • NeurIPS 2020 • Jerome-Alexis Chevalier, Joseph Salmon, Alexandre Gramfort, Bertrand Thirion

To deal with this, we adapt the desparsified Lasso estimator -- an estimator tailored for high-dimensional linear models that asymptotically follows a Gaussian distribution under sparsity and moderate feature correlation assumptions -- to temporal data corrupted with autocorrelated noise.

no code implementations • 22 Oct 2020 • Quentin Klopfenstein, Quentin Bertrand, Alexandre Gramfort, Joseph Salmon, Samuel Vaiter

For composite nonsmooth optimization problems, the Forward-Backward algorithm achieves model identification (e.g., support identification for the Lasso) after a finite number of iterations, provided the objective function is regular enough.

1 code implementation • 29 Sep 2020 • Jérôme-Alexis Chevalier, Alexandre Gramfort, Joseph Salmon, Bertrand Thirion

To deal with this, we adapt the desparsified Lasso estimator -- an estimator tailored for high-dimensional linear models that asymptotically follows a Gaussian distribution under sparsity and moderate feature correlation assumptions -- to temporal data corrupted with autocorrelated noise.

no code implementations • 6 Sep 2020 • Eugene Ndiaye, Olivier Fercoq, Joseph Salmon

Screening rules were recently introduced as a technique for explicitly identifying active structures, such as sparsity, in optimization problems arising in machine learning.

no code implementations • 24 Jun 2020 • Alain Rakotomamonjy, Rémi Flamary, Gilles Gasso, Joseph Salmon

Owing to their statistical properties, non-convex sparse regularizers have attracted much interest for estimating a sparse linear model from high dimensional data.

1 code implementation • ICML 2020 • Quentin Bertrand, Quentin Klopfenstein, Mathieu Blondel, Samuel Vaiter, Alexandre Gramfort, Joseph Salmon

Our approach scales to high-dimensional data by leveraging the sparsity of the solutions.

no code implementations • 15 Jan 2020 • Mathurin Massias, Quentin Bertrand, Alexandre Gramfort, Joseph Salmon

In high dimensional sparse regression, pivotal estimators are estimators for which the optimal regularization parameter is independent of the noise level.

no code implementations • 22 Oct 2019 • Charles-Alban Deledalle, Nicolas Papadakis, Joseph Salmon, Samuel Vaiter

This is done through the use of refitting block penalties that only act on the support of the estimated solution.

1 code implementation • 12 Jul 2019 • Mathurin Massias, Samuel Vaiter, Alexandre Gramfort, Joseph Salmon

Generalized Linear Models (GLMs) form a wide class of regression and classification models, where the prediction is a function of a linear combination of the input variables.

no code implementations • 16 Feb 2019 • Alain Rakotomamonjy, Gilles Gasso, Joseph Salmon

Leveraging the convexity of the Lasso problem, screening rules help accelerate solvers by discarding irrelevant variables during the optimization process.

1 code implementation • NeurIPS 2019 • Quentin Bertrand, Mathurin Massias, Alexandre Gramfort, Joseph Salmon

Sparsity promoting norms are frequently used in high dimensional regression.

2 code implementations • 31 Jan 2019 • Nidham Gazagnadou, Robert M. Gower, Joseph Salmon

Using these bounds, and since the SAGA algorithm is part of this JacSketch family, we suggest a new standard practice for setting the step sizes and mini-batch size for SAGA that are competitive with a numerical grid search.
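A bare-bones SAGA implementation for least squares illustrates where such step-size rules enter. The `1/(3 * L_max)` step used below is a classical fallback based on the largest per-sample smoothness constant, not the sharper expected-smoothness rule the paper derives; the problem and all names are illustrative.

```python
import numpy as np

def saga_least_squares(X, y, n_epochs=100, seed=0):
    """SAGA on the finite sum (1/n) * sum_i 0.5 * (x_i^T w - y_i)^2.
    Step size is the classical 1/(3 * L_max), where L_max = max_i ||x_i||^2
    is the largest per-sample smoothness constant."""
    n, d = X.shape
    L_max = np.max(np.sum(X ** 2, axis=1))
    step = 1.0 / (3.0 * L_max)
    w = np.zeros(d)
    grads = X * (X @ w - y)[:, None]        # table of stored per-sample gradients
    grad_avg = grads.mean(axis=0)
    rng = np.random.default_rng(seed)
    for _ in range(n_epochs * n):
        i = rng.integers(n)
        g_new = X[i] * (X[i] @ w - y[i])
        # SAGA update uses the fresh gradient, the stored one, and the average.
        w -= step * (g_new - grads[i] + grad_avg)
        grad_avg += (g_new - grads[i]) / n  # keep the running average in sync
        grads[i] = g_new
    return w
```

Mini-batching and the bound-based step sizes from the paper modify `step` (and the sampling) but leave this update structure unchanged.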

1 code implementation • 12 Oct 2018 • Eugene Ndiaye, Tam Le, Olivier Fercoq, Joseph Salmon, Ichiro Takeuchi

Popular machine learning estimators involve regularization parameters that can be challenging to tune, and standard strategies rely on grid search for this task.

1 code implementation • ICML 2018 • Mathurin Massias, Alexandre Gramfort, Joseph Salmon

Here, we propose an extrapolation technique starting from a sequence of iterates in the dual that leads to the construction of improved dual points.

no code implementations • 27 May 2017 • Mathurin Massias, Olivier Fercoq, Alexandre Gramfort, Joseph Salmon

Results on multimodal neuroimaging problems with M/EEG data are also reported.

1 code implementation • 21 Mar 2017 • Mathurin Massias, Alexandre Gramfort, Joseph Salmon

For the Lasso estimator, a WS is a set of features, while for the Group Lasso it refers to a set of groups.
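A small sketch of the working-set idea for the Lasso, assuming a plain ISTA sub-solver and a fixed growth schedule (the actual prioritization and solver in the paper differ; names and sizes here are illustrative):

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=1000):
    """Plain proximal-gradient (ISTA) sub-solver for the Lasso."""
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the gradient
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        w -= X.T @ (X @ w - y) / L
        w = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)
    return w

def working_set_lasso(X, y, lam, grow_by=2, n_rounds=10):
    """Working-set sketch: solve the Lasso restricted to a small feature set,
    then grow the set with the strongest KKT violators outside it."""
    d = X.shape[1]
    w = np.zeros(d)
    ws = np.zeros(d, dtype=bool)           # current working set (WS)
    for _ in range(n_rounds):
        # Optimality requires |x_j^T (y - X w)| <= lam for every feature j.
        viol = np.abs(X.T @ (y - X @ w)) - lam
        viol[ws] = -np.inf                 # features already in the WS
        if viol.max() <= 1e-10:
            break                          # KKT conditions hold everywhere
        ws[np.argsort(viol)[-grow_by:]] = True
        w = np.zeros(d)
        w[ws] = lasso_ista(X[:, ws], y, lam)
    return w
```

Because each sub-problem touches only a few columns of `X`, the inner solves stay cheap even when the full design is wide.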

no code implementations • 14 Mar 2017 • Evgenii Chzhen, Christophe Denis, Mohamed Hebiri, Joseph Salmon

Modern multi-label problems are typically large-scale in terms of the number of observations, features, and labels, and the number of labels can even be comparable to the number of observations.

no code implementations • 8 Dec 2016 • Charles-Alban Deledalle, Nicolas Papadakis, Joseph Salmon, Samuel Vaiter

It is, however, important when tuning the regularization parameter, as it allows fixing an upper bound on the grid over which the optimal parameter is sought.

1 code implementation • NeurIPS 2016 • Eugene Ndiaye, Olivier Fercoq, Alexandre Gramfort, Joseph Salmon

For statistical learning in high dimension, sparse regularizations have proven useful to boost both computational and statistical efficiency.

1 code implementation • 17 Nov 2016 • Eugene Ndiaye, Olivier Fercoq, Alexandre Gramfort, Joseph Salmon

In high dimensional regression settings, sparsity enforcing penalties have proved useful to regularize the data-fitting term.

no code implementations • 8 Jun 2016 • Igor Colin, Aurélien Bellet, Joseph Salmon, Stéphan Clémençon

In decentralized networks (of sensors, connected objects, etc.).

1 code implementation • 8 Jun 2016 • Eugene Ndiaye, Olivier Fercoq, Alexandre Gramfort, Vincent Leclère, Joseph Salmon

In high dimensional settings, sparse structures are crucial for efficiency, in terms of memory, computation, and performance.

1 code implementation • 19 Feb 2016 • Eugene Ndiaye, Olivier Fercoq, Alexandre Gramfort, Joseph Salmon

We adapt recent safe screening rules, which discard irrelevant features/groups early in the solver, to the case of the Sparse-Group Lasso.

no code implementations • NeurIPS 2015 • Igor Colin, Aurélien Bellet, Joseph Salmon, Stéphan Clémençon

Efficient and robust algorithms for decentralized estimation in networks are essential to many distributed systems.

no code implementations • NeurIPS 2015 • Eugene Ndiaye, Olivier Fercoq, Alexandre Gramfort, Joseph Salmon

The GAP Safe rule can cope with any iterative solver and we illustrate its performance on coordinate descent for multi-task Lasso, binary and multinomial logistic regression, demonstrating significant speed ups on all tested datasets with respect to previous safe rules.
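For the plain Lasso, the GAP Safe sphere test can be sketched in a few lines: a dual-feasible point is built by rescaling the residual, the duality gap gives a safe radius, and any feature whose correlation stays strictly below one over the whole sphere is provably inactive. This is a simplified single-task sketch, not the paper's multi-task or multinomial machinery.

```python
import numpy as np

def gap_safe_screen(X, y, w, lam):
    """Gap Safe sphere test for the Lasso
    min_w 0.5 * ||y - X w||^2 + lam * ||w||_1.
    Returns a boolean mask of features that are provably inactive at the
    optimum, given the current (possibly inexact) primal point w."""
    r = y - X @ w
    # Dual-feasible point by residual rescaling: ||X^T theta||_inf <= 1.
    theta = r / max(lam, np.max(np.abs(X.T @ r)))
    primal = 0.5 * r @ r + lam * np.sum(np.abs(w))
    dual = 0.5 * y @ y - 0.5 * lam ** 2 * np.sum((theta - y / lam) ** 2)
    gap = max(primal - dual, 0.0)          # duality gap (clamped to >= 0)
    radius = np.sqrt(2.0 * gap) / lam      # Gap Safe sphere radius
    norms = np.linalg.norm(X, axis=0)
    return np.abs(X.T @ theta) + norms * radius < 1.0
```

Calling this inside any iterative solver lets it drop the screened columns of `X` and keep iterating on a smaller problem, which is where the reported speed-ups come from.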

no code implementations • 13 May 2015 • Olivier Fercoq, Alexandre Gramfort, Joseph Salmon

In this paper, we propose new versions of the so-called safe rules for the Lasso.

no code implementations • 26 Aug 2014 • Olga Klopp, Jean Lafond, Eric Moulines, Joseph Salmon

The task of estimating a matrix given a sample of observed entries is known as the \emph{matrix completion problem}.
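A standard baseline for this problem is nuclear-norm penalized completion via the Soft-Impute iteration, sketched below with numpy only; this is contextual background, not the estimator analyzed in the paper, and the threshold `tau` is an illustrative choice.

```python
import numpy as np

def soft_impute(M_obs, mask, tau, n_iter=200):
    """Soft-Impute sketch for matrix completion: alternate between filling
    the missing entries with the current estimate and soft-thresholding
    the singular values (the proximal step of the nuclear norm)."""
    Z = np.where(mask, M_obs, 0.0)
    for _ in range(n_iter):
        filled = np.where(mask, M_obs, Z)   # observed entries stay fixed
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        Z = (U * np.maximum(s - tau, 0.0)) @ Vt
    return Z
```

On a low-rank matrix with most entries observed, a few hundred iterations typically suffice to fill in the missing entries accurately.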

no code implementations • 16 Apr 2013 • Arnak S. Dalalyan, Mohamed Hebiri, Katia Méziani, Joseph Salmon

Popular sparse estimation methods based on $\ell_1$-relaxation, such as the Lasso and the Dantzig selector, require the knowledge of the variance of the noise in order to properly tune the regularization parameter.

no code implementations • 2 Jun 2012 • Joseph Salmon, Zachary Harmany, Charles-Alban Deledalle, Rebecca Willett

Photon-limited imaging arises when the number of photons collected by a sensor array is small relative to the number of detector elements.
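In this regime the data are well modeled as Poisson counts, whose variance scales with the mean. A common preprocessing idea (offered here as standard background, not as the paper's specific method) is the Anscombe transform, which approximately stabilizes the variance to one so that Gaussian-noise tools can be applied:

```python
import numpy as np

def anscombe(x):
    """Anscombe transform: maps Poisson(mu) counts to data with variance
    approximately 1, accurate when mu is not too small."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(z):
    """Simple algebraic inverse (unbiased inverses are more involved)."""
    return (z / 2.0) ** 2 - 3.0 / 8.0

rng = np.random.default_rng(0)
photons = rng.poisson(lam=20.0, size=100_000)   # simulated photon counts
stabilized = anscombe(photons.astype(float))     # variance is close to 1
```

For very low photon counts the approximation degrades, which is precisely the regime where dedicated Poisson methods such as the one in this paper are needed.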

Papers With Code is a free resource with all data licensed under CC-BY-SA.