no code implementations • 13 Jun 2024 • Can Pouliquen, Mathurin Massias, Titouan Vayer
However, designing effective neural architectures for SPD learning is challenging, particularly when the task requires additional structural constraints, such as element-wise sparsity.
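As a rough illustration of the SPD constraint alone (not the paper's architecture), the numpy sketch below projects an arbitrary matrix onto the SPD cone by eigenvalue clipping; combining such a projection with element-wise sparsity is precisely where the difficulty lies, since the two operations do not commute.

```python
import numpy as np

def project_to_spd(A, eps=1e-8):
    """Project a square matrix onto the positive definite cone.

    Symmetrize, then clip eigenvalues below `eps`. This is the Euclidean
    projection of the symmetric part; `eps` keeps the result invertible.
    """
    S = (A + A.T) / 2                      # nearest symmetric matrix
    eigvals, eigvecs = np.linalg.eigh(S)   # spectral decomposition
    eigvals = np.clip(eigvals, eps, None)  # enforce a positive spectrum
    return eigvecs @ np.diag(eigvals) @ eigvecs.T

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
P = project_to_spd(A)
print(np.linalg.eigvalsh(P).min() > 0)    # True: P is SPD
```

Alternating this projection with entry-wise soft-thresholding breaks both properties in general, which is why jointly enforcing SPD structure and sparsity calls for dedicated architectures.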
no code implementations • 5 Jul 2023 • Can Pouliquen, Paulo Gonçalves, Mathurin Massias, Titouan Vayer
We provide a framework and algorithm for tuning the hyperparameters of the Graphical Lasso via a bilevel optimization problem solved with a first-order method.
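For context, a zero-order baseline for the same tuning problem is scikit-learn's cross-validated Graphical Lasso; the paper's contribution is to replace this kind of grid search with a first-order bilevel method. A minimal sketch of the baseline:

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV
from sklearn.datasets import make_sparse_spd_matrix

# Ground-truth sparse precision matrix and Gaussian samples drawn from it.
rng = np.random.default_rng(0)
prec = make_sparse_spd_matrix(10, alpha=0.9, random_state=0)
cov = np.linalg.inv(prec)
X = rng.multivariate_normal(np.zeros(10), cov, size=500)

# Zero-order baseline: cross-validate the regularization level on a grid.
model = GraphicalLassoCV().fit(X)
print("selected alpha:", model.alpha_)
```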
2 code implementations • 26 Oct 2022 • Johan Larsson, Quentin Klopfenstein, Mathurin Massias, Jonas Wallin
The lasso is the most famous sparse regression and feature selection method.
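A minimal scikit-learn illustration of the lasso's feature-selection behavior (generic, not this paper's method):

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.datasets import make_regression

# Sparse ground truth: only 5 of 50 features are informative.
X, y = make_regression(n_samples=100, n_features=50, n_informative=5,
                       noise=1.0, random_state=0)

lasso = Lasso(alpha=1.0).fit(X, y)
print("non-zero coefficients:", np.sum(lasso.coef_ != 0))
```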
3 code implementations • 27 Jun 2022 • Thomas Moreau, Mathurin Massias, Alexandre Gramfort, Pierre Ablin, Pierre-Antoine Bannier, Benjamin Charlier, Mathieu Dagréou, Tom Dupré La Tour, Ghislain Durif, Cassio F. Dantas, Quentin Klopfenstein, Johan Larsson, En Lai, Tanguy Lefort, Benoit Malézieux, Badr Moufad, Binh T. Nguyen, Alain Rakotomamonjy, Zaccharie Ramzi, Joseph Salmon, Samuel Vaiter
Numerical validation is at the core of machine learning research, as it allows researchers to assess the actual impact of new methods and to confirm the agreement between theory and practice.
2 code implementations • 16 Apr 2022 • Quentin Bertrand, Quentin Klopfenstein, Pierre-Antoine Bannier, Gauthier Gidel, Mathurin Massias
We propose a new fast algorithm to estimate any sparse generalized linear model with convex or non-convex separable penalties.
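The companion library for this line of work is skglm; the sketch below assumes its datafit/penalty composition API (the names GeneralizedLinearEstimator, Quadratic and L1 should be checked against the library's documentation):

```python
# Hedged sketch: assumes the skglm package and its datafit/penalty
# composition API; check the library's documentation for exact signatures.
from skglm import GeneralizedLinearEstimator
from skglm.datafits import Quadratic
from skglm.penalties import L1
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=100, n_features=200, random_state=0)

# Any combination of a smooth datafit and a separable penalty is allowed,
# e.g. swapping L1 for a non-convex penalty such as MCP.
model = GeneralizedLinearEstimator(datafit=Quadratic(), penalty=L1(alpha=1.0))
model.fit(X, y)
```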
no code implementations • 1 Feb 2022 • Cesare Molinari, Mathurin Massias, Lorenzo Rosasco, Silvia Villa
Our approach is based on a primal-dual algorithm, whose convergence and stability properties we analyze even in the case where the original problem is infeasible.
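As a generic illustration of the primal-dual family (not the paper's exact scheme), here is a Chambolle-Pock-style iteration for the basis-pursuit problem min ||x||_1 subject to Ax = b:

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# Generic Chambolle-Pock primal-dual iteration for min ||x||_1 s.t. Ax = b
# (an illustration of the algorithm family, not the paper's exact scheme).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50); x_true[:3] = [1.0, -2.0, 1.5]
b = A @ x_true

L = np.linalg.norm(A, 2)           # operator norm of A
tau = sigma = 0.9 / L              # step sizes with tau * sigma * L**2 < 1
x = np.zeros(50); x_bar = x.copy(); y = np.zeros(20)

for _ in range(2000):
    y = y + sigma * (A @ x_bar - b)                   # dual ascent on Ax = b
    x_new = soft_threshold(x - tau * (A.T @ y), tau)  # prox of tau * ||.||_1
    x_bar = 2 * x_new - x                             # extrapolation step
    x = x_new

print("constraint violation:", np.linalg.norm(A @ x - b))
```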
1 code implementation • 4 May 2021 • Quentin Bertrand, Quentin Klopfenstein, Mathurin Massias, Mathieu Blondel, Samuel Vaiter, Alexandre Gramfort, Joseph Salmon
Finding the optimal hyperparameters of a model can be cast as a bilevel optimization problem, typically solved using zero-order techniques.
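A minimal sketch of the first-order alternative in the smooth case: for ridge regression the inner solution is closed-form, so the hypergradient of the validation loss follows from implicit differentiation (the paper extends this idea to non-smooth problems such as the Lasso):

```python
import numpy as np

# First-order alternative to zero-order search: differentiate the validation
# loss through the inner solver. For ridge regression, beta(lam) solves
# (X'X + lam*I) beta = X'y, so d(beta)/d(lam) = -(X'X + lam*I)^{-1} beta.
rng = np.random.default_rng(0)
X_tr, y_tr = rng.standard_normal((50, 10)), rng.standard_normal(50)
X_val, y_val = rng.standard_normal((50, 10)), rng.standard_normal(50)

lam = 1.0
for _ in range(100):
    A = X_tr.T @ X_tr + lam * np.eye(10)
    beta = np.linalg.solve(A, X_tr.T @ y_tr)        # inner (lower-level) solve
    grad_beta = X_val.T @ (X_val @ beta - y_val)    # d(val loss)/d(beta)
    dbeta_dlam = -np.linalg.solve(A, beta)          # implicit differentiation
    hypergrad = grad_beta @ dbeta_dlam              # chain rule
    lam = max(lam - 0.01 * hypergrad, 1e-6)         # projected gradient step

print("tuned lambda:", lam)
```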
no code implementations • 19 Nov 2020 • Quentin Bertrand, Mathurin Massias
Acceleration of first-order methods is mainly obtained via inertial techniques à la Nesterov, or via nonlinear extrapolation.
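A minimal numpy sketch of the nonlinear-extrapolation idea: combine past iterates of a fixed-point method with coefficients chosen to cancel the residuals (an offline Anderson-type scheme on a toy quadratic, not the paper's exact algorithm):

```python
import numpy as np

def anderson_extrapolate(iterates):
    """Combine the last iterates of a fixed-point method into a better point.

    Finds coefficients c summing to 1 that minimize the norm of the combined
    residuals, then returns the same combination of the iterates.
    """
    X = np.array(iterates)                 # shape (K+1, d)
    R = np.diff(X, axis=0)                 # residuals x_{k+1} - x_k
    G = R @ R.T + 1e-10 * np.eye(len(R))   # regularized Gram matrix
    w = np.linalg.solve(G, np.ones(len(R)))
    c = w / w.sum()                        # coefficients sum to one
    return c @ X[:-1]

# Gradient descent on a badly conditioned quadratic 0.5 * x' H x.
H = np.diag([1.0, 100.0]); x = np.array([1.0, 1.0]); step = 1.0 / 100.0
iterates = [x.copy()]
for _ in range(5):
    x = x - step * (H @ x)
    iterates.append(x.copy())

print("plain GD:    ", np.linalg.norm(iterates[-1]))
print("extrapolated:", np.linalg.norm(anderson_extrapolate(iterates)))
```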
1 code implementation • 17 Jun 2020 • Cesare Molinari, Mathurin Massias, Lorenzo Rosasco, Silvia Villa
We study iterative regularization for linear models, when the bias is convex but not necessarily strongly convex.
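A minimal sketch of iterative regularization on a noisy least-squares problem: the number of gradient steps plays the role of the regularization parameter, and the estimation error typically exhibits semi-convergence (decreasing, then increasing as the iterates start fitting the noise):

```python
import numpy as np

# Implicit (iterative) regularization: run gradient descent on the noisy
# least-squares problem and use the iteration count, not a penalty, as the
# regularizer. Early iterates are biased but stable; late ones fit the noise.
rng = np.random.default_rng(0)
n, d = 50, 100
X = rng.standard_normal((n, d))
w_true = np.zeros(d); w_true[:5] = 1.0
y = X @ w_true + 0.5 * rng.standard_normal(n)

w = np.zeros(d)
step = 1.0 / np.linalg.norm(X, 2) ** 2
for k in range(1, 501):
    w -= step * X.T @ (X @ w - y)          # plain gradient step
    if k in (10, 100, 500):
        print(f"iter {k:4d}  distance to truth: {np.linalg.norm(w - w_true):.3f}")
```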
no code implementations • 29 Feb 2020 • Boris Muzellec, Kanji Sato, Mathurin Massias, Taiji Suzuki
In this work, we provide a convergence analysis of GLD and SGLD when the optimization space is an infinite dimensional Hilbert space.
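For intuition, here is the finite-dimensional version of GLD on a quadratic potential, whose stationary distribution is approximately a standard Gaussian (a toy sketch, not the infinite-dimensional Hilbert-space setting analyzed in the paper):

```python
import numpy as np

# Gradient Langevin dynamics (GLD): a gradient step on the potential plus
# Gaussian noise. With potential 0.5 * ||x||^2, the stationary distribution
# of the chain is close to a standard Gaussian for a small step size.
rng = np.random.default_rng(0)
eta = 0.01                        # step size
x = np.zeros(2)
samples = []
for k in range(20000):
    grad = x                      # gradient of 0.5 * ||x||^2
    x = x - eta * grad + np.sqrt(2 * eta) * rng.standard_normal(2)
    if k > 5000:                  # discard burn-in
        samples.append(x.copy())

print("empirical variance:", np.var(samples, axis=0))  # close to 1
```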
no code implementations • 15 Jan 2020 • Mathurin Massias, Quentin Bertrand, Alexandre Gramfort, Joseph Salmon
In high-dimensional sparse regression, pivotal estimators are those for which the optimal regularization parameter is independent of the noise level.
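The classical example of a pivotal estimator is the square-root Lasso, whose data-fitting term is normalized by its own scale, so that the optimal choice of \(\lambda\) does not depend on the noise level \(\sigma\):

```latex
\hat{\beta} \in \arg\min_{\beta \in \mathbb{R}^p}
  \frac{\lVert y - X\beta \rVert_2}{\sqrt{n}} + \lambda \lVert \beta \rVert_1
```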
1 code implementation • 12 Jul 2019 • Mathurin Massias, Samuel Vaiter, Alexandre Gramfort, Joseph Salmon
Generalized Linear Models (GLMs) form a wide class of regression and classification models, in which the prediction is a function of a linear combination of the input variables.
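A one-line example of this definition: with the logistic link, the GLM prediction is a probability obtained from the linear score X @ w (a toy numpy sketch):

```python
import numpy as np

# A GLM predicts through a link applied to the linear score X @ w: the
# identity link gives linear regression; the logistic link below gives
# binary classification probabilities.
def predict_proba(X, w):
    return 1.0 / (1.0 + np.exp(-(X @ w)))

rng = np.random.default_rng(0)
X, w = rng.standard_normal((3, 4)), rng.standard_normal(4)
print(predict_proba(X, w))
```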
1 code implementation • NeurIPS 2019 • Pierre Ablin, Thomas Moreau, Mathurin Massias, Alexandre Gramfort
We demonstrate that for a large class of unfolded algorithms, if the algorithm converges to the solution of the Lasso, its last layers correspond to ISTA with learned step sizes.
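A minimal numpy sketch of the iteration in question: ISTA for the Lasso with the step-size schedule exposed as a parameter, which is exactly what learned, unfolded variants train (steps[k] = 1/L for all k recovers plain ISTA):

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(X, y, alpha, steps):
    """ISTA for the Lasso; `steps` plays the role of the learned step sizes."""
    w = np.zeros(X.shape[1])
    for s in steps:
        # gradient step on 0.5 * ||Xw - y||^2, then prox of s * alpha * ||.||_1
        w = soft_threshold(w - s * X.T @ (X @ w - y), s * alpha)
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 100))
w_true = np.zeros(100); w_true[:3] = [2.0, -1.0, 1.5]
y = X @ w_true

L = np.linalg.norm(X, 2) ** 2                  # Lipschitz constant of the datafit
w = ista(X, y, alpha=0.1, steps=[1.0 / L] * 200)
print("support found:", np.flatnonzero(w)[:5])
```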
1 code implementation • NeurIPS 2019 • Quentin Bertrand, Mathurin Massias, Alexandre Gramfort, Joseph Salmon
Sparsity promoting norms are frequently used in high dimensional regression.
1 code implementation • ICML 2018 • Mathurin Massias, Alexandre Gramfort, Joseph Salmon
Here, we propose an extrapolation technique that uses a sequence of dual iterates to construct improved dual points.
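The accompanying solver is celer; the sketch below assumes its scikit-learn-style interface (check the package documentation for exact options):

```python
# Hedged sketch: assumes the celer package (the paper's companion solver)
# exposes a scikit-learn-style Lasso estimator.
import numpy as np
from celer import Lasso
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=100, n_features=1000, random_state=0)
model = Lasso(alpha=1.0).fit(X, y)   # dual extrapolation happens internally
print("non-zeros:", np.sum(model.coef_ != 0))
```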
1 code implementation • 27 May 2017 • Mathurin Massias, Olivier Fercoq, Alexandre Gramfort, Joseph Salmon
Results on multimodal neuroimaging problems with M/EEG data are also reported.
1 code implementation • 21 Mar 2017 • Mathurin Massias, Alexandre Gramfort, Joseph Salmon
For the Lasso estimator, a working set (WS) is a set of features, while for the Group Lasso it is a set of groups.
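A hedged sketch of a generic WS loop for the Lasso (illustrative, not the paper's exact growth rule): solve restricted subproblems and grow the set with the strongest violations of the optimality conditions:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.datasets import make_regression

# Generic working-set loop for the Lasso: solve the problem restricted to a
# small feature set, then add the features that most violate the KKT
# conditions of the full problem.
X, y = make_regression(n_samples=100, n_features=500, n_informative=5,
                       noise=1.0, random_state=0)
n, alpha = X.shape[0], 5.0

w = np.zeros(X.shape[1])
ws = np.array([], dtype=int)
for _ in range(10):
    scores = np.abs(X.T @ (y - X @ w)) / n        # correlation with residual
    scores[ws] = -np.inf                          # already in the working set
    if scores.max() <= alpha + 1e-8:              # KKT conditions hold: done
        break
    ws = np.union1d(ws, np.argsort(scores)[-10:]) # add the 10 worst violators
    sub = Lasso(alpha=alpha, fit_intercept=False).fit(X[:, ws], y)
    w = np.zeros(X.shape[1]); w[ws] = sub.coef_   # lift back to full size

print("working set size:", len(ws), "non-zeros:", np.sum(w != 0))
```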