Search Results for author: Joseph Salmon

Found 32 papers, 15 papers with code

Score-Based Change Detection for Gradient-Based Learning Machines

1 code implementation 27 Jun 2021 Lang Liu, Joseph Salmon, Zaid Harchaoui

The widespread use of machine learning algorithms calls for automatic change detection algorithms to monitor their behavior over time.

Spatially relaxed inference on high-dimensional linear models

1 code implementation 4 Jun 2021 Jérôme-Alexis Chevalier, Tuan-Binh Nguyen, Bertrand Thirion, Joseph Salmon

This calls for a reformulation of the statistical inference problem that takes into account the underlying spatial structure: if covariates are locally correlated, it is acceptable to detect them up to a given spatial uncertainty.

Statistical control for spatio-temporal MEG/EEG source imaging with desparsified multi-task Lasso

no code implementations NeurIPS 2020 Jerome-Alexis Chevalier, Joseph Salmon, Alexandre Gramfort, Bertrand Thirion

To deal with this, we adapt the desparsified Lasso estimator (an estimator tailored to high-dimensional linear models that asymptotically follows a Gaussian distribution under sparsity and moderate feature correlation assumptions) to temporal data corrupted with autocorrelated noise.

EEG

Model identification and local linear convergence of coordinate descent

no code implementations 22 Oct 2020 Quentin Klopfenstein, Quentin Bertrand, Alexandre Gramfort, Joseph Salmon, Samuel Vaiter

For composite nonsmooth optimization problems, the Forward-Backward algorithm achieves model identification (e.g., support identification for the Lasso) after a finite number of iterations, provided the objective function is regular enough.
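The Lasso case can be sketched with a minimal cyclic coordinate descent solver (a generic illustration of support identification, not the authors' code; once the iterates are close enough to the solution, the set of nonzero coordinates stops changing):

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    # Cyclic coordinate descent for the Lasso:
    #   min_w  0.5 * ||y - X w||^2 + lam * ||w||_1
    n, p = X.shape
    w = np.zeros(p)
    col_norms = (X ** 2).sum(axis=0)
    r = y - X @ w  # running residual
    for _ in range(n_iter):
        for j in range(p):
            if col_norms[j] == 0:
                continue
            w_old = w[j]
            # partial correlation of feature j with the residual
            z = X[:, j] @ r + col_norms[j] * w_old
            # soft-thresholding update
            w[j] = np.sign(z) * max(abs(z) - lam, 0.0) / col_norms[j]
            if w[j] != w_old:
                r += X[:, j] * (w_old - w[j])
    return w
```

For `lam` above `max |X^T y|`, the all-zero vector is optimal and the support is empty; for smaller `lam`, some coordinates activate and, after finitely many sweeps, the active set freezes.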

Statistical control for spatio-temporal MEG/EEG source imaging with desparsified multi-task Lasso

1 code implementation 29 Sep 2020 Jérôme-Alexis Chevalier, Alexandre Gramfort, Joseph Salmon, Bertrand Thirion

To deal with this, we adapt the desparsified Lasso estimator (an estimator tailored to high-dimensional linear models that asymptotically follows a Gaussian distribution under sparsity and moderate feature correlation assumptions) to temporal data corrupted with autocorrelated noise.

EEG

Screening Rules and its Complexity for Active Set Identification

no code implementations 6 Sep 2020 Eugene Ndiaye, Olivier Fercoq, Joseph Salmon

Screening rules were recently introduced as a technique for explicitly identifying active structures, such as sparsity, in optimization problems arising in machine learning.

Dimensionality Reduction

Provably Convergent Working Set Algorithm for Non-Convex Regularized Regression

no code implementations 24 Jun 2020 Alain Rakotomamonjy, Rémi Flamary, Gilles Gasso, Joseph Salmon

Owing to their statistical properties, non-convex sparse regularizers have attracted much interest for estimating a sparse linear model from high dimensional data.

Support recovery and sup-norm convergence rates for sparse pivotal estimation

no code implementations 15 Jan 2020 Mathurin Massias, Quentin Bertrand, Alexandre Gramfort, Joseph Salmon

In high dimensional sparse regression, pivotal estimators are estimators for which the optimal regularization parameter is independent of the noise level.
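A canonical pivotal estimator is the square-root Lasso; its objective can be sketched as follows (standard formulation from the sparse regression literature, not copied from the paper):

```latex
\hat{\beta} \in \operatorname*{arg\,min}_{\beta \in \mathbb{R}^p}
  \frac{\lVert y - X\beta \rVert_2}{\sqrt{n}} + \lambda \lVert \beta \rVert_1,
\qquad \lambda \asymp \sqrt{\frac{\log p}{n}}.
```

Because the data-fitting term is not squared, the theoretically supported choice of $\lambda$ does not involve the noise level $\sigma$, which is what makes the estimator pivotal.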

Block based refitting in $\ell_{12}$ sparse regularisation

no code implementations 22 Oct 2019 Charles-Alban Deledalle, Nicolas Papadakis, Joseph Salmon, Samuel Vaiter

This is done through the use of refitting block penalties that only act on the support of the estimated solution.

Image Restoration

Dual Extrapolation for Sparse Generalized Linear Models

1 code implementation 12 Jul 2019 Mathurin Massias, Samuel Vaiter, Alexandre Gramfort, Joseph Salmon

Generalized Linear Models (GLMs) form a wide class of regression and classification models, where the prediction is a function of a linear combination of the input variables.
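The GLM prediction structure can be sketched in a few lines (a generic illustration of the definition, not the paper's solver; the link names are the usual GLM conventions):

```python
import numpy as np

def glm_predict(X, w, b, link="identity"):
    # Prediction is g^{-1}(eta) applied to the linear predictor eta = X w + b.
    eta = X @ w + b
    if link == "identity":   # linear regression
        return eta
    if link == "logit":      # logistic regression
        return 1.0 / (1.0 + np.exp(-eta))
    if link == "log":        # Poisson regression
        return np.exp(eta)
    raise ValueError(f"unknown link: {link}")
```

The Lasso, sparse logistic regression, and sparse Poisson regression all fit this template with an added $\ell_1$ penalty on `w`.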

Screening Rules for Lasso with Non-Convex Sparse Regularizers

no code implementations 16 Feb 2019 Alain Rakotomamonjy, Gilles Gasso, Joseph Salmon

Leveraging the convexity of the Lasso problem, screening rules help accelerate solvers by discarding irrelevant variables during the optimization process.

Optimal mini-batch and step sizes for SAGA

2 code implementations 31 Jan 2019 Nidham Gazagnadou, Robert M. Gower, Joseph Salmon

Using these bounds, and since the SAGA algorithm belongs to this JacSketch family, we suggest a new standard practice for setting the step size and mini-batch size for SAGA that is competitive with a numerical grid search.
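A minimal SAGA loop for least squares makes the role of the step size concrete (an illustrative sketch with a standard safe step size of 1/(3 L_max) for batch size 1, not the step-size formula derived in the paper):

```python
import numpy as np

def saga_least_squares(X, y, n_epochs=100, seed=0):
    # SAGA on f(w) = (1/2n) * sum_i (x_i^T w - y_i)^2 with batch size 1.
    rng = np.random.default_rng(seed)
    n, p = X.shape
    w = np.zeros(p)
    L_max = (X ** 2).sum(axis=1).max()   # max per-sample smoothness constant
    step = 1.0 / (3.0 * L_max)           # conventional safe step size
    res_table = np.zeros(n)              # stored residuals x_i^T w - y_i
    grad_avg = np.zeros(p)               # average of stored gradients
    for _ in range(n_epochs * n):
        i = rng.integers(n)
        new_res = X[i] @ w - y[i]
        # unbiased SAGA direction: fresh grad - stored grad + table average
        direction = (new_res - res_table[i]) * X[i] + grad_avg
        w -= step * direction
        grad_avg += (new_res - res_table[i]) * X[i] / n
        res_table[i] = new_res
    return w
```

Larger (still safe) step sizes and well-chosen mini-batch sizes, as studied in the paper, directly translate into fewer iterations of this loop.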

Safe Grid Search with Optimal Complexity

1 code implementation 12 Oct 2018 Eugene Ndiaye, Tam Le, Olivier Fercoq, Joseph Salmon, Ichiro Takeuchi

Popular machine learning estimators involve regularization parameters that can be challenging to tune, and standard strategies rely on grid search for this task.
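The standard grid in question is usually geometric, starting from the smallest parameter that yields the all-zero Lasso solution (a common-practice helper shown for context, not the paper's adaptive path algorithm):

```python
import numpy as np

def lasso_lambda_grid(X, y, n_lambdas=100, eps=1e-3):
    # lam_max is the smallest lam for which the Lasso solution is all zeros;
    # the grid decreases geometrically down to eps * lam_max.
    lam_max = np.abs(X.T @ y).max()
    return np.geomspace(lam_max, eps * lam_max, n_lambdas)
```

The paper's contribution is to replace such a fixed grid with one whose resolution is chosen adaptively, with guarantees on the validation-error approximation.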

Celer: a Fast Solver for the Lasso with Dual Extrapolation

1 code implementation ICML 2018 Mathurin Massias, Alexandre Gramfort, Joseph Salmon

Here, we propose an extrapolation technique starting from a sequence of iterates in the dual that leads to the construction of improved dual points.

From safe screening rules to working sets for faster Lasso-type solvers

1 code implementation 21 Mar 2017 Mathurin Massias, Alexandre Gramfort, Joseph Salmon

For the Lasso estimator a working set (WS) is a set of features, while for a Group Lasso it refers to a set of groups.

Sparse Learning

On the benefits of output sparsity for multi-label classification

no code implementations 14 Mar 2017 Evgenii Chzhen, Christophe Denis, Mohamed Hebiri, Joseph Salmon

Modern multi-label problems are typically large-scale in terms of the number of observations, features, and labels, and the number of labels can even be comparable to the number of observations.

Classification, General Classification, +2

Characterizing the maximum parameter of the total-variation denoising through the pseudo-inverse of the divergence

no code implementations 8 Dec 2016 Charles-Alban Deledalle, Nicolas Papadakis, Joseph Salmon, Samuel Vaiter

However, it is important when tuning the regularization parameter, as it allows fixing an upper bound on the grid over which the optimal parameter is sought.

Denoising

GAP Safe Screening Rules for Sparse-Group Lasso

1 code implementation NeurIPS 2016 Eugene Ndiaye, Olivier Fercoq, Alexandre Gramfort, Joseph Salmon

For statistical learning in high dimension, sparse regularizations have proven useful to boost both computational and statistical efficiency.

Gap Safe screening rules for sparsity enforcing penalties

1 code implementation 17 Nov 2016 Eugene Ndiaye, Olivier Fercoq, Alexandre Gramfort, Joseph Salmon

In high dimensional regression settings, sparsity enforcing penalties have proved useful to regularize the data-fitting term.

Efficient Smoothed Concomitant Lasso Estimation for High Dimensional Regression

1 code implementation 8 Jun 2016 Eugene Ndiaye, Olivier Fercoq, Alexandre Gramfort, Vincent Leclère, Joseph Salmon

In high-dimensional settings, sparse structures are crucial for efficiency, in terms of memory, computation, and performance.

GAP Safe Screening Rules for Sparse-Group-Lasso

1 code implementation 19 Feb 2016 Eugene Ndiaye, Olivier Fercoq, Alexandre Gramfort, Joseph Salmon

We adapt to the case of the Sparse-Group Lasso recent safe screening rules that discard irrelevant features/groups early in the solver.

Extending Gossip Algorithms to Distributed Estimation of U-Statistics

no code implementations NeurIPS 2015 Igor Colin, Aurélien Bellet, Joseph Salmon, Stéphan Clémençon

Efficient and robust algorithms for decentralized estimation in networks are essential to many distributed systems.

GAP Safe screening rules for sparse multi-task and multi-class models

no code implementations NeurIPS 2015 Eugene Ndiaye, Olivier Fercoq, Alexandre Gramfort, Joseph Salmon

The GAP Safe rule can cope with any iterative solver and we illustrate its performance on coordinate descent for multi-task Lasso, binary and multinomial logistic regression, demonstrating significant speed ups on all tested datasets with respect to previous safe rules.
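For intuition, the Gap Safe test for the plain Lasso, $\min_\beta \tfrac{1}{2}\lVert y - X\beta\rVert_2^2 + \lambda\lVert\beta\rVert_1$, can be sketched as follows (standard form from the safe-screening literature, stated here as an illustration):

```latex
\text{discard feature } j \quad \text{if} \quad
\lvert x_j^\top \theta \rvert
+ \lVert x_j \rVert_2 \sqrt{\frac{2\,\mathrm{Gap}(\beta, \theta)}{\lambda^2}}
< 1,
```

where $\theta$ is any dual-feasible point and $\mathrm{Gap}(\beta, \theta)$ is the duality gap at the current primal/dual pair. Since the gap shrinks as any convergent solver progresses, the test discards more and more features over the iterations, which is why the rule works with arbitrary iterative solvers.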

Mind the duality gap: safer rules for the Lasso

no code implementations 13 May 2015 Olivier Fercoq, Alexandre Gramfort, Joseph Salmon

In this paper, we propose new versions of the so-called "safe rules" for the Lasso.

Adaptive Multinomial Matrix Completion

no code implementations 26 Aug 2014 Olga Klopp, Jean Lafond, Eric Moulines, Joseph Salmon

The task of estimating a matrix given a sample of observed entries is known as the matrix completion problem.

Matrix Completion, Multi-class Classification, +1

Learning Heteroscedastic Models by Convex Programming under Group Sparsity

no code implementations 16 Apr 2013 Arnak S. Dalalyan, Mohamed Hebiri, Katia Méziani, Joseph Salmon

Popular sparse estimation methods based on $\ell_1$-relaxation, such as the Lasso and the Dantzig selector, require the knowledge of the variance of the noise in order to properly tune the regularization parameter.
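Concretely, the theoretically supported Lasso regularization parameter scales with the noise level (a standard rate, included here only for context):

```latex
\lambda = c \, \sigma \sqrt{\frac{2 \log p}{n}},
```

so $\sigma$ must be known or estimated before tuning, which is exactly the requirement that heteroscedastic or pivotal formulations aim to remove.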

Time Series

Poisson noise reduction with non-local PCA

no code implementations 2 Jun 2012 Joseph Salmon, Zachary Harmany, Charles-Alban Deledalle, Rebecca Willett

Photon-limited imaging arises when the number of photons collected by a sensor array is small relative to the number of detector elements.
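A classical baseline for Poisson noise is the Anscombe variance-stabilizing transform, which maps Poisson counts to approximately unit-variance Gaussian data so that Gaussian denoisers apply (a standard tool in photon-limited imaging, shown for context; it is not the non-local PCA method itself):

```python
import numpy as np

def anscombe(x):
    # Poisson(lam) counts -> approximately N(2*sqrt(lam), 1) for lam not too small
    return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

def inverse_anscombe(z):
    # simple algebraic inverse (biased for very low counts)
    return (np.asarray(z, dtype=float) / 2.0) ** 2 - 3.0 / 8.0
```

The paper targets the very-low-count regime where this approximation degrades, which motivates working directly with the Poisson likelihood instead.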

Denoising, Dictionary Learning
