Search Results for author: Joseph Salmon

Found 40 papers, 19 papers with code

Celer: a Fast Solver for the Lasso with Dual Extrapolation

1 code implementation • ICML 2018 • Mathurin Massias, Alexandre Gramfort, Joseph Salmon

Here, we propose an extrapolation technique starting from a sequence of iterates in the dual that leads to the construction of improved dual points.
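For intuition, here is a minimal numpy sketch of the dual extrapolation idea, assuming the Lasso objective 0.5*||y - X @ beta||^2 + lam*||beta||_1 and dual points obtained by rescaling residuals; the function name and the small regularization of the linear system are illustrative, and this is not the celer package API.

    import numpy as np

    def extrapolate_dual(residuals, X, lam, K=5):
        # Anderson-style combination of the last K residuals r_t = y - X @ beta_t.
        R = np.column_stack(residuals[-K:])              # (n, K)
        U = np.diff(R, axis=1)                           # successive differences
        z = np.linalg.solve(U.T @ U + 1e-12 * np.eye(K - 1), np.ones(K - 1))
        c = z / z.sum()                                  # weights summing to one
        r_acc = R[:, 1:] @ c                             # extrapolated residual
        # Rescale so that ||X.T @ theta||_inf <= 1, i.e. theta is dual feasible.
        return r_acc / max(lam, np.abs(X.T @ r_acc).max())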

Dual Extrapolation for Sparse Generalized Linear Models

1 code implementation • 12 Jul 2019 • Mathurin Massias, Samuel Vaiter, Alexandre Gramfort, Joseph Salmon

Generalized Linear Models (GLM) form a wide class of regression and classification models, where prediction is a function of a linear combination of the input variables.
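As a reminder of the structure the snippet refers to, a sparse GLM can be written as follows (notation is mine, not necessarily the paper's):

    % Prediction through a link function g, and an l1-penalized fit:
    \hat{y} = g^{-1}(x^\top \beta), \qquad
    \hat{\beta} \in \operatorname*{arg\,min}_{\beta \in \mathbb{R}^p}
      \sum_{i=1}^{n} f_i(x_i^\top \beta) + \lambda \|\beta\|_1

where each f_i is the per-sample loss (squared loss for regression, logistic loss for classification).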

Optimal mini-batch and step sizes for SAGA

2 code implementations • 31 Jan 2019 • Nidham Gazagnadou, Robert M. Gower, Joseph Salmon

Using these bounds, and since the SAGA algorithm is part of this JacSketch family, we suggest a new standard practice for setting the step sizes and mini-batch size for SAGA that is competitive with a numerical grid search.
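For context, the SAGA update itself is standard; below is a minimal numpy sketch on a least-squares objective, with the step size gamma left as a placeholder (the paper's contribution is precisely a principled choice of gamma and of the mini-batch size, which this sketch does not reproduce).

    import numpy as np

    def saga(X, y, gamma, n_iter=1000, seed=0):
        n, p = X.shape
        rng = np.random.default_rng(seed)
        w = np.zeros(p)
        grads = np.zeros((n, p))               # table of last stored gradients
        g_avg = grads.mean(axis=0)
        for _ in range(n_iter):
            j = rng.integers(n)
            g_new = X[j] * (X[j] @ w - y[j])   # gradient of sample j
            w -= gamma * (g_new - grads[j] + g_avg)
            g_avg += (g_new - grads[j]) / n    # maintain the running average
            grads[j] = g_new
        return w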

Statistical control for spatio-temporal MEG/EEG source imaging with desparsified multi-task Lasso

1 code implementation • 29 Sep 2020 • Jérôme-Alexis Chevalier, Alexandre Gramfort, Joseph Salmon, Bertrand Thirion

To deal with this, we adapt the desparsified Lasso estimator (an estimator tailored for high-dimensional linear models that asymptotically follows a Gaussian distribution under sparsity and moderate feature correlation assumptions) to temporal data corrupted with autocorrelated noise.

Tasks: Constrained Clustering, EEG, +2
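For reference, the desparsified Lasso in its standard i.i.d. form reads as below; the paper's contribution is its adaptation to autocorrelated temporal noise, which this display does not capture.

    % beta_hat: Lasso estimate; Theta_hat: approximate inverse of X^T X / n
    \hat{\beta}^{\mathrm{d}} = \hat{\beta}^{\mathrm{lasso}}
      + \tfrac{1}{n}\, \hat{\Theta} X^\top \bigl( y - X \hat{\beta}^{\mathrm{lasso}} \bigr)

Each coordinate of this corrected estimate is asymptotically Gaussian, which is what enables p-values and statistical control.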

Spatially relaxed inference on high-dimensional linear models

1 code implementation • 4 Jun 2021 • Jérôme-Alexis Chevalier, Tuan-Binh Nguyen, Bertrand Thirion, Joseph Salmon

This calls for a reformulation of the statistical inference problem that takes into account the underlying spatial structure: if covariates are locally correlated, it is acceptable to detect them up to a given spatial uncertainty.

Tasks: Constrained Clustering, Vocal Bursts Intensity Prediction

GAP Safe Screening Rules for Sparse-Group Lasso

1 code implementation • 19 Feb 2016 • Eugene Ndiaye, Olivier Fercoq, Alexandre Gramfort, Joseph Salmon

We adapt recent safe screening rules, which discard irrelevant features and groups early in the solver, to the case of the Sparse-Group Lasso.
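To make the screening mechanism concrete, here is a sketch of the Gap Safe sphere test written for the plain Lasso (0.5*||y - X @ beta||^2 + lam*||beta||_1); the Sparse-Group Lasso version in the paper uses the same duality-gap radius with group-level tests, and all names here are illustrative.

    import numpy as np

    def gap_safe_screen(X, y, beta, lam):
        r = y - X @ beta
        theta = r / max(lam, np.abs(X.T @ r).max())   # feasible dual point
        p_obj = 0.5 * r @ r + lam * np.abs(beta).sum()
        d_obj = 0.5 * y @ y - 0.5 * lam ** 2 * ((y / lam - theta) ** 2).sum()
        radius = np.sqrt(2 * max(p_obj - d_obj, 0.0)) / lam
        # Feature j is provably inactive if |x_j^T theta| + radius * ||x_j|| < 1.
        scores = np.abs(X.T @ theta) + radius * np.linalg.norm(X, axis=0)
        return scores < 1.0                           # mask of discardable features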

GAP Safe Screening Rules for Sparse-Group Lasso

1 code implementation • NeurIPS 2016 • Eugene Ndiaye, Olivier Fercoq, Alexandre Gramfort, Joseph Salmon

For statistical learning in high dimension, sparse regularizations have proven useful to boost both computational and statistical efficiency.

Stochastic smoothing of the top-K calibrated hinge loss for deep imbalanced classification

1 code implementation • 4 Feb 2022 • Camille Garcin, Maximilien Servajean, Alexis Joly, Joseph Salmon

In modern classification tasks, the number of labels keeps growing, as does the size of the datasets encountered in practice.

Tasks: imbalanced classification

LassoBench: A High-Dimensional Hyperparameter Optimization Benchmark Suite for Lasso

1 code implementation • 4 Nov 2021 • Kenan Šehić, Alexandre Gramfort, Joseph Salmon, Luigi Nardi

While Weighted Lasso sparse regression has appealing statistical guarantees that would entail a major real-world impact in finance, genomics, and brain imaging applications, it is rarely adopted in practice because of its complex, high-dimensional search space composed of thousands of hyperparameters.

Tasks: Bayesian Optimization, Hyperparameter Optimization, +2

Safe Grid Search with Optimal Complexity

1 code implementation • 12 Oct 2018 • Eugene Ndiaye, Tam Le, Olivier Fercoq, Joseph Salmon, Ichiro Takeuchi

Popular machine learning estimators involve regularization parameters that can be challenging to tune, and standard strategies rely on grid search for this task.

Gap Safe screening rules for sparsity enforcing penalties

1 code implementation • 17 Nov 2016 • Eugene Ndiaye, Olivier Fercoq, Alexandre Gramfort, Joseph Salmon

In high-dimensional regression settings, sparsity-enforcing penalties have proved useful to regularize the data-fitting term.

Tasks: regression

Score-Based Change Detection for Gradient-Based Learning Machines

1 code implementation • 27 Jun 2021 • Lang Liu, Joseph Salmon, Zaid Harchaoui

The widespread use of machine learning algorithms calls for automatic change detection algorithms to monitor their behavior over time.

Tasks: BIG-bench Machine Learning, Change Detection

Efficient Smoothed Concomitant Lasso Estimation for High Dimensional Regression

2 code implementations • 8 Jun 2016 • Eugene Ndiaye, Olivier Fercoq, Alexandre Gramfort, Vincent Leclère, Joseph Salmon

In high-dimensional settings, sparse structures are crucial for efficiency, in terms of memory, computation, and performance.

Tasks: regression, Uncertainty Quantification, +1
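For reference, the Smoothed Concomitant Lasso jointly estimates the coefficients and the noise level by solving the problem below (sigma_0 > 0 is the smoothing lower bound; notation assumed from the usual statement of the problem):

    (\hat{\beta}, \hat{\sigma}) \in
      \operatorname*{arg\,min}_{\beta \in \mathbb{R}^p,\ \sigma \ge \sigma_0}
      \frac{\|y - X\beta\|_2^2}{2 n \sigma} + \frac{\sigma}{2} + \lambda \|\beta\|_1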

From safe screening rules to working sets for faster Lasso-type solvers

1 code implementation • 21 Mar 2017 • Mathurin Massias, Alexandre Gramfort, Joseph Salmon

For the Lasso estimator, a working set (WS) is a set of features, while for the Group Lasso it refers to a set of groups.

Tasks: Sparse Learning
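The working-set principle can be summarized in a few lines; the outline below is illustrative (inner_solver is a placeholder for any Lasso subproblem solver) and is not the algorithm of the paper verbatim.

    import numpy as np

    def working_set_lasso(X, y, lam, inner_solver, ws_size=10, n_outer=20, tol=1e-7):
        beta = np.zeros(X.shape[1])
        for _ in range(n_outer):
            r = y - X @ beta
            if np.abs(X.T @ r).max() <= lam * (1 + tol):
                break                                    # KKT conditions hold
            scores = np.abs(X.T @ r)                     # correlation with residual
            candidates = np.argsort(scores)[-ws_size:]   # most suspicious features
            ws = np.union1d(np.flatnonzero(beta), candidates)
            beta[ws] = inner_solver(X[:, ws], y, lam, beta[ws])
            ws_size *= 2                                 # grow on the next pass
        return beta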

On the benefits of output sparsity for multi-label classification

no code implementations • 14 Mar 2017 • Evgenii Chzhen, Christophe Denis, Mohamed Hebiri, Joseph Salmon

Modern multi-label problems are typically large-scale in terms of the number of observations, features, and labels, and the number of labels can even be comparable to the number of observations.

Tasks: Classification, General Classification, +2

Characterizing the maximum parameter of the total-variation denoising through the pseudo-inverse of the divergence

no code implementations • 8 Dec 2016 • Charles-Alban Deledalle, Nicolas Papadakis, Joseph Salmon, Samuel Vaiter

It matters, however, when tuning the regularization parameter, as it fixes an upper bound on the grid over which the optimal parameter is sought.

Tasks: Denoising

Mind the duality gap: safer rules for the Lasso

no code implementations • 13 May 2015 • Olivier Fercoq, Alexandre Gramfort, Joseph Salmon

In this paper, we propose new versions of the so-called $\textit{safe rules}$ for the Lasso.

GAP Safe screening rules for sparse multi-task and multi-class models

no code implementations • NeurIPS 2015 • Eugene Ndiaye, Olivier Fercoq, Alexandre Gramfort, Joseph Salmon

The GAP Safe rule can cope with any iterative solver, and we illustrate its performance on coordinate descent for the multi-task Lasso and for binary and multinomial logistic regression, demonstrating significant speed-ups on all tested datasets with respect to previous safe rules.

Tasks: regression

Extending Gossip Algorithms to Distributed Estimation of U-Statistics

no code implementations • NeurIPS 2015 • Igor Colin, Aurélien Bellet, Joseph Salmon, Stéphan Clémençon

Efficient and robust algorithms for decentralized estimation in networks are essential to many distributed systems.

Adaptive Multinomial Matrix Completion

no code implementations • 26 Aug 2014 • Olga Klopp, Jean Lafond, Eric Moulines, Joseph Salmon

The task of estimating a matrix given a sample of observed entries is known as the \emph{matrix completion problem}.

Tasks: Matrix Completion, Multi-class Classification, +1

Poisson noise reduction with non-local PCA

no code implementations • 2 Jun 2012 • Joseph Salmon, Zachary Harmany, Charles-Alban Deledalle, Rebecca Willett

Photon-limited imaging arises when the number of photons collected by a sensor array is small relative to the number of detector elements.

Tasks: Astronomy, Denoising, +1

Learning Heteroscedastic Models by Convex Programming under Group Sparsity

no code implementations • 16 Apr 2013 • Arnak S. Dalalyan, Mohamed Hebiri, Katia Méziani, Joseph Salmon

Popular sparse estimation methods based on $\ell_1$-relaxation, such as the Lasso and the Dantzig selector, require the knowledge of the variance of the noise in order to properly tune the regularization parameter.

Tasks: Time Series, Time Series Analysis

Screening Rules for Lasso with Non-Convex Sparse Regularizers

no code implementations • 16 Feb 2019 • Alain Rakotomamonjy, Gilles Gasso, Joseph Salmon

Leveraging the convexity of the Lasso problem, screening rules help accelerate solvers by discarding irrelevant variables during the optimization process.

Support recovery and sup-norm convergence rates for sparse pivotal estimation

no code implementations • 15 Jan 2020 • Mathurin Massias, Quentin Bertrand, Alexandre Gramfort, Joseph Salmon

In high dimensional sparse regression, pivotal estimators are estimators for which the optimal regularization parameter is independent of the noise level.

Tasks: regression
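A canonical pivotal estimator is the square-root Lasso, whose optimal regularization parameter does not depend on the noise level:

    \hat{\beta} \in \operatorname*{arg\,min}_{\beta \in \mathbb{R}^p}
      \frac{\|y - X\beta\|_2}{\sqrt{n}} + \lambda \|\beta\|_1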

Provably Convergent Working Set Algorithm for Non-Convex Regularized Regression

no code implementations • 24 Jun 2020 • Alain Rakotomamonjy, Rémi Flamary, Gilles Gasso, Joseph Salmon

Owing to their statistical properties, non-convex sparse regularizers have attracted much interest for estimating a sparse linear model from high dimensional data.

Tasks: regression

Screening Rules and their Complexity for Active Set Identification

no code implementations • 6 Sep 2020 • Eugene Ndiaye, Olivier Fercoq, Joseph Salmon

Screening rules were recently introduced as a technique for explicitly identifying active structures, such as sparsity, in optimization problems arising in machine learning.

Tasks: BIG-bench Machine Learning, Dimensionality Reduction

Model identification and local linear convergence of coordinate descent

no code implementations • 22 Oct 2020 • Quentin Klopfenstein, Quentin Bertrand, Alexandre Gramfort, Joseph Salmon, Samuel Vaiter

For composite nonsmooth optimization problems, the Forward-Backward algorithm achieves model identification (e.g., support identification for the Lasso) after a finite number of iterations, provided the objective function is regular enough.
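To illustrate the setting, here is plain cyclic coordinate descent for the Lasso: after finitely many epochs the sparsity pattern of beta typically freezes, which is the model identification behavior the paper analyzes. This sketch assumes the columns of X are nonzero.

    import numpy as np

    def soft_threshold(x, tau):
        return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

    def cd_lasso(X, y, lam, n_epochs=100):
        beta = np.zeros(X.shape[1])
        lips = (X ** 2).sum(axis=0)        # coordinate-wise Lipschitz constants
        r = y - X @ beta                   # running residual
        for _ in range(n_epochs):
            for j in range(X.shape[1]):
                old = beta[j]
                beta[j] = soft_threshold(old + X[:, j] @ r / lips[j], lam / lips[j])
                if beta[j] != old:
                    r -= X[:, j] * (beta[j] - old)
        return beta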

Statistical control for spatio-temporal MEG/EEG source imaging with desparsified multi-task Lasso

no code implementations • NeurIPS 2020 • Jérôme-Alexis Chevalier, Joseph Salmon, Alexandre Gramfort, Bertrand Thirion

To deal with this, we adapt the desparsified Lasso estimator (an estimator tailored for high-dimensional linear models that asymptotically follows a Gaussian distribution under sparsity and moderate feature correlation assumptions) to temporal data corrupted with autocorrelated noise.

Tasks: Constrained Clustering, EEG, +2

Block based refitting in $\ell_{12}$ sparse regularisation

no code implementations • 22 Oct 2019 • Charles-Alban Deledalle, Nicolas Papadakis, Joseph Salmon, Samuel Vaiter

This is done through the use of refitting block penalties that only act on the support of the estimated solution.

Tasks: Image Restoration

Differentially Private Coordinate Descent for Composite Empirical Risk Minimization

no code implementations • 22 Oct 2021 • Paul Mangold, Aurélien Bellet, Joseph Salmon, Marc Tommasi

In this paper, we propose Differentially Private proximal Coordinate Descent (DP-CD), a new method to solve composite DP-ERM problems.
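The general shape of one such update is sketched below: clip the coordinate gradient, add Gaussian noise, then take a proximal (soft-thresholding) step. The clipping threshold and noise scale are placeholders; the paper's DP-CD calibrates them per coordinate to meet a target (epsilon, delta) privacy budget, which this sketch does not do.

    import numpy as np

    def dp_cd_step(X, y, beta, j, step, clip, sigma, lam, rng):
        g = X[:, j] @ (X @ beta - y) / len(y)              # coordinate gradient
        g = np.clip(g, -clip, clip) + rng.normal(0.0, sigma * clip)
        z = beta[j] - step * g
        return np.sign(z) * max(abs(z) - step * lam, 0.0)  # prox of lam * |.|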

Supervised learning of analysis-sparsity priors with automatic differentiation

no code implementations • 15 Dec 2021 • Hashem Ghanem, Joseph Salmon, Nicolas Keriven, Samuel Vaiter

In most situations, this dictionary is not known and must be recovered from pairs of ground-truth signals and measurements by minimizing the reconstruction error.

Tasks: Denoising, Image Reconstruction

Identify ambiguous tasks combining crowdsourced labels by weighting Areas Under the Margin

no code implementations • 30 Sep 2022 • Tanguy Lefort, Benjamin Charlier, Alexis Joly, Joseph Salmon

We adapt the Area Under the Margin (AUM) to identify ambiguous tasks in crowdsourced learning scenarios, introducing the Weighted Areas Under the Margin (WAUM).

Tasks: Image Classification
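For one task, the underlying AUM statistic is the margin of the assigned label against the best other label, averaged over training epochs; the sketch below shows that quantity only (the paper's WAUM additionally weights it per worker, which is not reproduced here).

    import numpy as np

    def aum(logits_per_epoch, assigned_label):
        # logits_per_epoch: array of shape (n_epochs, n_classes)
        margins = [z[assigned_label] - np.delete(z, assigned_label).max()
                   for z in logits_per_epoch]
        return float(np.mean(margins))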

A two-head loss function for deep Average-K classification

no code implementations • 31 Mar 2023 • Camille Garcin, Maximilien Servajean, Alexis Joly, Joseph Salmon

Average-K classification is an alternative to top-K classification in which the number of labels returned varies with the ambiguity of the input image but must average to K over all the samples.

Tasks: Classification, Multi-Label Classification, +1
