Search Results for author: Mathurin Massias

Found 16 papers, 11 papers with code

Implicit Differentiation for Hyperparameter Tuning the Weighted Graphical Lasso

no code implementations • 5 Jul 2023 • Can Pouliquen, Paulo Gonçalves, Mathurin Massias, Titouan Vayer

We provide a framework and algorithm for tuning the hyperparameters of the Graphical Lasso via a bilevel optimization problem solved with a first-order method.

Bilevel Optimization
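
The outer problem is scalar in the simplest case: pick the regularization level minimizing a held-out Gaussian negative log-likelihood. The sketch below is a hypothetical stand-in that replaces the paper's implicit differentiation with a finite-difference hypergradient, using scikit-learn's GraphicalLasso for the inner problem.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X_train = rng.standard_normal((100, 10))
X_val = rng.standard_normal((50, 10))

def val_loss(alpha):
    # inner problem: fit the Graphical Lasso at regularization level alpha
    prec = GraphicalLasso(alpha=alpha).fit(X_train).precision_
    # outer criterion: held-out Gaussian negative log-likelihood
    S_val = np.cov(X_val, rowvar=False)
    return np.trace(S_val @ prec) - np.linalg.slogdet(prec)[1]

alpha, lr, eps = 0.1, 0.05, 1e-4
for _ in range(20):
    # finite-difference hypergradient; the paper differentiates implicitly instead
    grad = (val_loss(alpha + eps) - val_loss(alpha - eps)) / (2 * eps)
    alpha = max(alpha - lr * grad, 1e-3)  # keep the regularization level positive
```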

Coordinate Descent for SLOPE

1 code implementation • 26 Oct 2022 • Johan Larsson, Quentin Klopfenstein, Mathurin Massias, Jonas Wallin

The lasso is the most famous sparse regression and feature selection method.

Feature Selection
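
SLOPE replaces the lasso's single level λ with a non-increasing sequence λ₁ ≥ … ≥ λ_p applied to the sorted magnitudes of the coefficients. Below is a minimal numpy sketch of its proximal operator, the building block any proximal or coordinate solver needs, computed with the textbook pool-adjacent-violators construction; it is not the paper's coordinate descent.

```python
import numpy as np

def prox_slope(v, lam):
    """Prox of the sorted-L1 norm b -> sum_i lam[i] * |b|_(i), lam non-increasing."""
    order = np.argsort(np.abs(v))[::-1]      # sort |v| in decreasing order
    w = np.abs(v)[order] - lam               # shifted sorted magnitudes
    # project w onto non-increasing sequences (pool adjacent violators)
    sums, counts = [], []
    for x in w:
        sums.append(x)
        counts.append(1)
        # merge blocks while a later block average exceeds an earlier one
        while len(sums) > 1 and sums[-2] * counts[-1] <= sums[-1] * counts[-2]:
            s, c = sums.pop(), counts.pop()
            sums[-1] += s
            counts[-1] += c
    z = np.concatenate([np.full(c, s / c) for s, c in zip(sums, counts)])
    res = np.zeros_like(v)
    res[order] = np.maximum(z, 0)            # clip at zero, undo the sort
    return np.sign(v) * res

beta = prox_slope(np.array([3.0, -1.5, 0.2]), lam=np.array([1.0, 0.5, 0.1]))
```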

Beyond L1: Faster and Better Sparse Models with skglm

2 code implementations • 16 Apr 2022 • Quentin Bertrand, Quentin Klopfenstein, Pierre-Antoine Bannier, Gauthier Gidel, Mathurin Massias

We propose a new fast algorithm to estimate any sparse generalized linear model with convex or non-convex separable penalties.
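
skglm is distributed on PyPI. Assuming its current public API, combining a least-squares datafit with a non-convex MCP penalty looks roughly like:

```python
import numpy as np
from skglm import GeneralizedLinearEstimator
from skglm.datafits import Quadratic
from skglm.penalties import MCPenalty

X, y = np.random.randn(50, 100), np.random.randn(50)
# any datafit can be paired with any separable penalty, convex or not
model = GeneralizedLinearEstimator(datafit=Quadratic(),
                                   penalty=MCPenalty(alpha=0.1, gamma=3.0))
model.fit(X, y)
print(model.coef_)
```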

Iterative regularization for low complexity regularizers

no code implementations • 1 Feb 2022 • Cesare Molinari, Mathurin Massias, Lorenzo Rosasco, Silvia Villa

Our approach is based on a primal-dual algorithm whose convergence and stability properties we analyze, even in the case where the original problem is infeasible.
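
For intuition, the sketch below runs a standard Chambolle-Pock primal-dual iteration on basis pursuit (min ‖x‖₁ s.t. Ax = y) and uses the iteration count itself as the regularization knob; it is a generic illustration of the primal-dual setting, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 80))
x_true = np.zeros(80)
x_true[:5] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(30)  # slightly inconsistent data

L = np.linalg.norm(A, 2)          # spectral norm of A
tau = sigma = 0.9 / L             # step sizes satisfying tau * sigma * L**2 < 1
x, x_bar, u = np.zeros(80), np.zeros(80), np.zeros(30)
for k in range(200):              # the stopping time acts as regularization
    u = u + sigma * (A @ x_bar - y)                       # dual ascent on Ax = y
    v = x - tau * A.T @ u
    x_new = np.sign(v) * np.maximum(np.abs(v) - tau, 0)   # prox of tau * ||.||_1
    x_bar = 2 * x_new - x                                 # primal extrapolation
    x = x_new
```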

Anderson acceleration of coordinate descent

no code implementations • 19 Nov 2020 • Quentin Bertrand, Mathurin Massias

Acceleration of first-order methods is mainly obtained via inertial techniques à la Nesterov, or via nonlinear extrapolation.

Regression
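
Anderson extrapolation combines the last few iterates of a fixed-point map, with weights computed from the successive residuals. A minimal numpy sketch on gradient descent for least squares (the paper applies the same construction to coordinate descent):

```python
import numpy as np

rng = np.random.default_rng(0)
A, b = rng.standard_normal((40, 20)), rng.standard_normal(40)
step = 1 / np.linalg.norm(A, 2) ** 2

def fixed_point(x):
    # one gradient-descent step on 0.5 * ||Ax - b||^2
    return x - step * A.T @ (A @ x - b)

K = 5
iterates = [np.zeros(20)]
for _ in range(K):
    iterates.append(fixed_point(iterates[-1]))
U = np.diff(np.array(iterates), axis=0).T        # (d, K) matrix of differences
# weights c minimizing ||U c||, subject to sum(c) = 1
z = np.linalg.solve(U.T @ U + 1e-10 * np.eye(K), np.ones(K))  # tiny ridge for stability
c = z / z.sum()
x_acc = np.array(iterates[1:]).T @ c             # extrapolated point
```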

Iterative regularization for convex regularizers

1 code implementation • 17 Jun 2020 • Cesare Molinari, Mathurin Massias, Lorenzo Rosasco, Silvia Villa

We study iterative regularization for linear models, when the bias is convex but not necessarily strongly convex.
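
Concretely, iterative regularization means the iteration counter plays the role that the penalty weight plays in explicit (Tikhonov-style) regularization: run a plain descent method on the unregularized data fit and stop early. A toy numpy illustration of this semiconvergence effect (not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 50))
x_true = rng.standard_normal(50)
y = A @ x_true + 0.5 * rng.standard_normal(50)  # noisy observations

x = np.zeros(50)
step = 1 / np.linalg.norm(A, 2) ** 2
errors = []
for k in range(500):                 # gradient descent on 0.5 * ||Ax - y||^2
    x -= step * A.T @ (A @ x - y)
    errors.append(np.linalg.norm(x - x_true))
# the error to the ground truth typically decreases, then increases:
# stopping at its minimum regularizes without any explicit penalty
best_k = int(np.argmin(errors))
```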

Dimension-free convergence rates for gradient Langevin dynamics in RKHS

no code implementations • 29 Feb 2020 • Boris Muzellec, Kanji Sato, Mathurin Massias, Taiji Suzuki

In this work, we provide a convergence analysis of GLD and SGLD when the optimization space is an infinite-dimensional Hilbert space.
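
Gradient Langevin dynamics is gradient descent plus isotropic Gaussian noise scaled by √(2η), which turns optimization into sampling from the Gibbs measure ∝ exp(−f). A finite-dimensional sketch of the update (the paper's contribution is showing the rates survive in an infinite-dimensional Hilbert space):

```python
import numpy as np

def grad_f(x):
    # toy potential: f(x) = ||x||^2 / 2, so the Gibbs measure is standard Gaussian
    return x

rng = np.random.default_rng(0)
x, eta = np.ones(10), 0.01
samples = []
for _ in range(5000):
    # GLD update: gradient step plus sqrt(2 * eta) Gaussian noise
    x = x - eta * grad_f(x) + np.sqrt(2 * eta) * rng.standard_normal(10)
    samples.append(x.copy())
# the samples approximate the Gibbs measure proportional to exp(-f)
```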

Support recovery and sup-norm convergence rates for sparse pivotal estimation

no code implementations • 15 Jan 2020 • Mathurin Massias, Quentin Bertrand, Alexandre Gramfort, Joseph Salmon

In high-dimensional sparse regression, pivotal estimators are estimators for which the optimal regularization parameter is independent of the noise level.

Regression
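
The canonical pivotal example is the square-root lasso: because its data-fit term is ‖y − Xβ‖₂ rather than its square, rescaling the noise rescales the data fit and the penalty equally, so the optimal λ does not depend on the noise level. Its objective in numpy (illustrative only):

```python
import numpy as np

def sqrt_lasso_objective(X, y, beta, lam):
    # un-squared data fit makes the optimal lam independent of the noise level
    n = X.shape[0]
    return np.linalg.norm(y - X @ beta) / np.sqrt(n) + lam * np.abs(beta).sum()
```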

Dual Extrapolation for Sparse Generalized Linear Models

1 code implementation • 12 Jul 2019 • Mathurin Massias, Samuel Vaiter, Alexandre Gramfort, Joseph Salmon

Generalized Linear Models (GLMs) form a wide class of regression and classification models, where prediction is a function of a linear combination of the input variables.
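
For the Lasso member of this class, a dual point, and hence a duality-gap certificate, is cheap to build from the residuals by rescaling them to dual feasibility; the paper extrapolates a sequence of such residuals to obtain better dual points. A numpy sketch of the certificate:

```python
import numpy as np

def lasso_duality_gap(X, y, beta, lam):
    r = y - X @ beta                                    # primal residuals
    # rescale so that ||X.T @ theta||_inf <= lam (dual feasibility)
    theta = r / max(1.0, np.max(np.abs(X.T @ r)) / lam)
    primal = 0.5 * r @ r + lam * np.abs(beta).sum()
    dual = 0.5 * (y @ y - np.linalg.norm(y - theta) ** 2)
    return primal - dual                                # >= 0, zero at the optimum
```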

Learning step sizes for unfolded sparse coding

1 code implementation • NeurIPS 2019 • Pierre Ablin, Thomas Moreau, Mathurin Massias, Alexandre Gramfort

We demonstrate that for a large class of unfolded algorithms, if the algorithm converges to the solution of the Lasso, its last layers correspond to ISTA with learned step sizes.
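
The algorithm being unfolded is ISTA; making each layer's step size a free parameter gives the network the paper analyzes. An un-learned numpy version with one step size per layer (all set to 1/L here, which is where learning would intervene):

```python
import numpy as np

def ista_with_steps(X, y, lam, steps):
    """ISTA for the Lasso where each layer uses its own step size."""
    beta = np.zeros(X.shape[1])
    for s in steps:
        v = beta - s * X.T @ (X @ beta - y)                     # gradient step
        beta = np.sign(v) * np.maximum(np.abs(v) - s * lam, 0)  # soft threshold
    return beta

rng = np.random.default_rng(0)
X, y = rng.standard_normal((20, 40)), rng.standard_normal(20)
L = np.linalg.norm(X, 2) ** 2                                   # Lipschitz constant
beta = ista_with_steps(X, y, lam=0.1, steps=np.full(16, 1 / L)) # 16 "layers"
```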

Celer: a Fast Solver for the Lasso with Dual Extrapolation

1 code implementation • ICML 2018 • Mathurin Massias, Alexandre Gramfort, Joseph Salmon

Here, we propose an extrapolation technique starting from a sequence of iterates in the dual that leads to the construction of improved dual points.
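
celer ships on PyPI with a scikit-learn style interface. Assuming the current API, a drop-in Lasso fit looks like:

```python
import numpy as np
from celer import Lasso  # pip install celer

X, y = np.random.randn(100, 500), np.random.randn(100)
model = Lasso(alpha=0.01).fit(X, y)  # dual extrapolation + working sets inside
print(np.sum(model.coef_ != 0), "active features")
```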

From safe screening rules to working sets for faster Lasso-type solvers

1 code implementation • 21 Mar 2017 • Mathurin Massias, Alexandre Gramfort, Joseph Salmon

For the Lasso estimator, a working set (WS) is a set of features, while for a Group Lasso it refers to a set of groups.

Sparse Learning
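
A working-set solver alternates between solving the problem restricted to a small set of features and growing that set with the features that most violate optimality. A simplified numpy/scikit-learn sketch of the loop (the paper's WS selection is driven by dual certificates):

```python
import numpy as np
from sklearn.linear_model import Lasso

def working_set_lasso(X, y, lam, n_iter=10, grow=10):
    n, p = X.shape
    beta = np.zeros(p)
    ws = np.argsort(np.abs(X.T @ y))[-grow:]        # initial working set
    for _ in range(n_iter):
        # solve the Lasso restricted to the working set
        sub = Lasso(alpha=lam / n, fit_intercept=False).fit(X[:, ws], y)
        beta = np.zeros(p)
        beta[ws] = sub.coef_
        scores = np.abs(X.T @ (y - X @ beta))       # correlation with residuals
        scores[ws] = 0.0
        if scores.max() <= lam:                     # KKT holds outside the WS
            break
        ws = np.union1d(ws, np.argsort(scores)[-grow:])  # grow the working set
    return beta
```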
