1 code implementation • 29 May 2024 • Sophie Jaffard, Samuel Vaiter, Patricia Reynaud-Bouret
The present work aims to prove mathematically that a biologically inspired neural network can learn a classification task using only local transformations.
no code implementations • 24 May 2024 • Franck Iutzeler, Edouard Pauwels, Samuel Vaiter
We investigate the behavior of the derivatives of the iterates of Stochastic Gradient Descent (SGD) with respect to a parameter of the objective, and show that they are driven by an inexact SGD recursion on a different objective function, perturbed by the convergence of the original SGD.
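As a minimal sketch of this mechanism, assuming a toy least-squares objective (the setup and all names below are illustrative, not the paper's general setting), one can differentiate the SGD update rule itself and watch the derivative iterates follow their own SGD-like recursion:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)
theta = 2.0  # the parameter we differentiate with respect to

# f(x, theta) = (1 / 2n) * sum_i (a_i^T x - theta * b_i)^2
x, dx = np.zeros(d), np.zeros(d)  # SGD iterate and its derivative d x_k / d theta
gamma = 0.01

for _ in range(20000):
    i = rng.integers(n)
    a_i, b_i = A[i], b[i]
    # differentiating the update x <- x - gamma * a_i * (a_i^T x - theta * b_i)
    # with respect to theta yields a recursion of the same form for dx
    dx -= gamma * ((a_i @ dx) * a_i - b_i * a_i)
    x -= gamma * (a_i @ x - theta * b_i) * a_i

# closed form: x*(theta) = theta * (A^T A)^{-1} A^T b, so d x*/d theta is theta-free
print(np.linalg.norm(dx - np.linalg.solve(A.T @ A, A.T @ b)))
```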
no code implementations • 21 Apr 2023 • Matthieu Cordonnier, Nicolas Keriven, Nicolas Tremblay, Samuel Vaiter
We study the convergence of message passing graph neural networks on random graph models to their continuous counterpart as the number of nodes tends to infinity.
1 code implementation • 24 Mar 2023 • Hashem Ghanem, Samuel Vaiter, Nicolas Keriven
To alleviate this issue, we study several solutions: latent graph learning using a Graph-to-Graph model (G2G), graph regularization to impose a prior structure on the graph, and optimization on a larger graph than the original one, with a reduced diameter.
no code implementations • 9 Mar 2023 • Rémi Catellier, Samuel Vaiter, Damien Garreau
A fundamental issue in machine learning is the robustness of the model with respect to changes in the input.
no code implementations • 17 Feb 2023 • Mathieu Dagréou, Thomas Moreau, Samuel Vaiter, Pierre Ablin
Bilevel optimization problems, in which one optimization problem is nested inside another, have found an increasing number of applications in machine learning.
no code implementations • 26 Jul 2022 • Edouard Pauwels, Samuel Vaiter
We show that the derivatives of the Sinkhorn-Knopp algorithm, or iterative proportional fitting procedure, converge towards the derivatives of the entropic regularization of the optimal transport problem with a locally uniform linear convergence rate.
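A quick numerical illustration of this statement, assuming a toy cost matrix and uniform marginals (all names below are mine): differentiating the Sinkhorn loop by finite differences shows the derivative of the transport plan with respect to the regularization stabilizing as the iteration count grows:

```python
import numpy as np

def sinkhorn(a, b, C, eps, n_iter):
    # plain Sinkhorn-Knopp iterations (iterative proportional fitting)
    K = np.exp(-C / eps)
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]  # transport plan

rng = np.random.default_rng(0)
n = 30
a = np.full(n, 1.0 / n)
b = np.full(n, 1.0 / n)
C = rng.random((n, n))
eps, h = 0.1, 1e-5

# finite-difference derivative of the plan w.r.t. eps: as the number of
# iterations grows, the derivative of the algorithm converges as well
for n_iter in (10, 100, 1000):
    dP = (sinkhorn(a, b, C, eps + h, n_iter)
          - sinkhorn(a, b, C, eps - h, n_iter)) / (2 * h)
    print(n_iter, np.abs(dP).sum())
```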
3 code implementations • 27 Jun 2022 • Thomas Moreau, Mathurin Massias, Alexandre Gramfort, Pierre Ablin, Pierre-Antoine Bannier, Benjamin Charlier, Mathieu Dagréou, Tom Dupré La Tour, Ghislain Durif, Cassio F. Dantas, Quentin Klopfenstein, Johan Larsson, En Lai, Tanguy Lefort, Benoit Malézieux, Badr Moufad, Binh T. Nguyen, Alain Rakotomamonjy, Zaccharie Ramzi, Joseph Salmon, Samuel Vaiter
Numerical validation is at the core of machine learning research, as it allows researchers to assess the actual impact of new methods and to confirm the agreement between theory and practice.
no code implementations • 31 May 2022 • Jérôme Bolte, Edouard Pauwels, Samuel Vaiter
Is there a limiting object for nonsmooth piggyback automatic differentiation (AD)?
1 code implementation • 31 Jan 2022 • Mathieu Dagréou, Pierre Ablin, Samuel Vaiter, Thomas Moreau
However, computing the gradient of the value function involves solving a linear system, which makes it difficult to derive unbiased stochastic estimates.
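For intuition, a minimal ridge-regression sketch (toy data and names below are illustrative): by the implicit function theorem, the hypergradient of a validation loss costs one linear solve in the Hessian of the inner problem, done here with conjugate gradient:

```python
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(0)
n, m, d = 100, 50, 20
A, y = rng.standard_normal((n, d)), rng.standard_normal(n)
A_val, y_val = rng.standard_normal((m, d)), rng.standard_normal(m)
lam = 0.5

# inner problem: x*(lam) = argmin_x 0.5 ||A x - y||^2 + 0.5 * lam * ||x||^2
H = A.T @ A + lam * np.eye(d)        # Hessian of the inner problem
x_star = np.linalg.solve(H, A.T @ y)

# outer (validation) loss f(x) = 0.5 * ||A_val x - y_val||^2
grad_f = A_val.T @ (A_val @ x_star - y_val)

# implicit differentiation: d x*/d lam = -H^{-1} x*, hence
# d f(x*(lam)) / d lam = -grad_f^T H^{-1} x*, i.e. one linear system in H
z, _ = cg(H, grad_f)
hypergrad = -z @ x_star
print(hypergrad)
```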
no code implementations • 15 Dec 2021 • Hashem Ghanem, Joseph Salmon, Nicolas Keriven, Samuel Vaiter
In most situations, this dictionary is not known and must be recovered from pairs of ground-truth signals and measurements by minimizing the reconstruction error.
no code implementations • 7 Dec 2021 • Yann Traonmilin, Rémi Gribonval, Samuel Vaiter
To perform recovery, we consider the minimization of a convex regularizer subject to a data fit constraint.
1 code implementation • NeurIPS 2021 • Nicolas Keriven, Alberto Bietti, Samuel Vaiter
In the large graph limit, GNNs are known to converge to certain "continuous" models known as c-GNNs, which directly enables a study of their approximation power on random graph models.
1 code implementation • 4 May 2021 • Quentin Bertrand, Quentin Klopfenstein, Mathurin Massias, Mathieu Blondel, Samuel Vaiter, Alexandre Gramfort, Joseph Salmon
Finding the optimal hyperparameters of a model can be cast as a bilevel optimization problem, typically solved using zero-order techniques.
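For contrast with first-order methods, the zero-order baseline is plain grid search on the outer (validation) objective, re-solving the inner problem from scratch at every grid point; a toy sketch with a Lasso inner problem (data and grid are illustrative):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X, X_val = rng.standard_normal((100, 30)), rng.standard_normal((50, 30))
w_true = np.zeros(30)
w_true[:5] = 1.0
y = X @ w_true + 0.1 * rng.standard_normal(100)
y_val = X_val @ w_true + 0.1 * rng.standard_normal(50)

# bilevel view: outer variable alpha, inner problem = Lasso fit;
# a zero-order method only ever evaluates the outer loss
for alpha in np.logspace(-3, 0, 10):
    w = Lasso(alpha=alpha).fit(X, y).coef_              # inner solution
    val_loss = 0.5 * np.mean((X_val @ w - y_val) ** 2)  # outer objective
    print(f"alpha={alpha:.4f}  validation loss={val_loss:.4f}")
```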
no code implementations • 22 Oct 2020 • Quentin Klopfenstein, Quentin Bertrand, Alexandre Gramfort, Joseph Salmon, Samuel Vaiter
For composite nonsmooth optimization problems, the Forward-Backward algorithm achieves model identification (e.g., support identification for the Lasso) after a finite number of iterations, provided the objective function is regular enough.
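A small demonstration of this phenomenon for the Lasso (toy data, not the paper's general framework): along the Forward-Backward (ISTA) iterations, the support of the iterate freezes after finitely many steps:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 100
A = rng.standard_normal((n, d)) / np.sqrt(n)
x_true = np.zeros(d)
x_true[:5] = 3.0
y = A @ x_true + 0.01 * rng.standard_normal(n)

lam = 0.1
L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the smooth part
x = np.zeros(d)
for k in range(1, 501):
    # Forward-Backward: explicit gradient step, then soft-thresholding (prox)
    z = x - (1 / L) * A.T @ (A @ x - y)
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    if k % 100 == 0:
        print(k, np.flatnonzero(x))  # the support stops changing
```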
1 code implementation • NeurIPS 2020 • Nicolas Keriven, Alberto Bietti, Samuel Vaiter
We study properties of Graph Convolutional Networks (GCNs) by analyzing their behavior on standard models of random graphs, where nodes are represented by random latent variables and edges are drawn according to a similarity kernel.
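A toy instance of such a random graph model, assuming a Gaussian similarity kernel (all parameters below are illustrative), followed by a single normalized GCN layer:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
z = rng.uniform(size=(n, 2))  # latent variables attached to the nodes

# edges drawn independently with probability given by a similarity kernel
dist2 = ((z[:, None, :] - z[None, :, :]) ** 2).sum(-1)
P = np.exp(-dist2 / 0.1)
upper = np.triu(rng.uniform(size=(n, n)) < P, 1).astype(float)
adj = upper + upper.T         # symmetric adjacency, no self-loops

# one GCN layer: relu(D^{-1/2} (A + I) D^{-1/2} X W)
X = z.copy()                  # use the latent positions as input features
W = rng.standard_normal((2, 4))
A_hat = adj + np.eye(n)
d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
S = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
H = np.maximum(S @ X @ W, 0.0)
print(H.shape)
```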
no code implementations • 20 Apr 2020 • Barbara Pascal, Samuel Vaiter, Nelly Pustelnik, Patrice Abry
This work extends the Stein Unbiased GrAdient estimator of the Risk (SUGAR) of Deledalle et al. to the case of correlated Gaussian noise, deriving a general automatic tuning of regularization parameters.
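For background, the classical i.i.d.-Gaussian SURE (not the correlated-noise extension derived in the paper) has a closed form for soft-thresholding and tracks the true risk without access to the ground truth; a toy check:

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 10_000, 1.0
x = np.zeros(n)
x[:500] = 5.0                          # sparse ground truth
y = x + sigma * rng.standard_normal(n)

def soft(y, lam):
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

for lam in (0.5, 1.0, 2.0, 3.0):
    x_hat = soft(y, lam)
    # SURE = -n sigma^2 + ||x_hat - y||^2 + 2 sigma^2 * divergence,
    # where the divergence of soft-thresholding is #{i : |y_i| > lam}
    sure = (-n * sigma**2 + ((x_hat - y) ** 2).sum()
            + 2 * sigma**2 * (np.abs(y) > lam).sum())
    mse = ((x_hat - x) ** 2).sum()
    print(f"lam={lam}:  SURE={sure:.0f}  true risk={mse:.0f}")
```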
1 code implementation • ICML 2020 • Quentin Bertrand, Quentin Klopfenstein, Mathieu Blondel, Samuel Vaiter, Alexandre Gramfort, Joseph Salmon
Our approach scales to high-dimensional data by leveraging the sparsity of the solutions.
no code implementations • 7 Feb 2020 • Nicolas Keriven, Samuel Vaiter
Existing results show that, in the relatively sparse case where the expected degree grows logarithmically with the number of nodes, guarantees for the static case extend to the dynamic case and yield improved error bounds when the DSBM is sufficiently smooth in time, that is, when the communities do not change too much between consecutive time steps.
1 code implementation • 6 Nov 2019 • Quentin Klopfenstein, Samuel Vaiter
This paper studies the addition of linear constraints to the Support Vector Regression (SVR) when the kernel is linear.
no code implementations • 22 Oct 2019 • Charles-Alban Deledalle, Nicolas Papadakis, Joseph Salmon, Samuel Vaiter
This is done through the use of refitting block penalties that only act on the support of the estimated solution.
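A minimal example of the simplest such refitting, least squares restricted to the estimated support (data and regularization level are illustrative): the refit keeps the Lasso support but removes its amplitude bias:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, d = 100, 50
A = rng.standard_normal((n, d))
x_true = np.zeros(d)
x_true[:5] = 4.0
y = A @ x_true + 0.5 * rng.standard_normal(n)

x_lasso = Lasso(alpha=1.0).fit(A, y).coef_
support = np.flatnonzero(x_lasso)

# refit by least squares on the columns in the support only
x_refit = np.zeros(d)
coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
x_refit[support] = coef
print(np.linalg.norm(x_lasso - x_true), np.linalg.norm(x_refit - x_true))
```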
1 code implementation • 12 Jul 2019 • Mathurin Massias, Samuel Vaiter, Alexandre Gramfort, Joseph Salmon
Generalized Linear Models (GLMs) form a wide class of regression and classification models, where the prediction is a function of a linear combination of the input variables.
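For concreteness, a logistic-regression instance of a GLM (toy data): the prediction is the sigmoid inverse link applied to the linear combination X @ beta, fitted here by plain gradient ascent on the Bernoulli log-likelihood:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 3
X = rng.standard_normal((n, d))
beta_true = np.array([1.0, -2.0, 0.5])
p = 1.0 / (1.0 + np.exp(-X @ beta_true))   # inverse link (sigmoid)
y = (rng.uniform(size=n) < p).astype(float)

# fit the GLM: maximize the Bernoulli log-likelihood of mu = sigmoid(X beta)
beta = np.zeros(d)
for _ in range(5000):
    mu = 1.0 / (1.0 + np.exp(-X @ beta))
    beta += 0.1 * X.T @ (y - mu) / n       # gradient of the log-likelihood
print(beta)   # should approach beta_true
```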
no code implementations • 8 Dec 2016 • Charles-Alban Deledalle, Nicolas Papadakis, Joseph Salmon, Samuel Vaiter
It is nonetheless important when tuning the regularization parameter, as it allows one to fix an upper bound on the grid over which the optimal parameter is sought.
no code implementations • 7 Jul 2014 • Samuel Vaiter, Gabriel Peyré, Jalal M. Fadili
Inverse problems and regularization theory form a central theme in contemporary signal processing, where the goal is to reconstruct an unknown signal from partial, indirect, and possibly noisy measurements of it.
no code implementations • 5 May 2014 • Samuel Vaiter, Gabriel Peyré, Jalal M. Fadili
We show that a generalized "irrepresentable condition" implies stable model selection under small noise perturbations in the observations and the design matrix, when the regularization parameter is tuned proportionally to the noise level.
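In the classical Lasso case (the paper's condition is a generalization of this one), the irrepresentable condition is a simple numerical check; the Gaussian design below is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, s = 100, 20, 3
A = rng.standard_normal((n, d)) / np.sqrt(n)
sign_S = np.ones(s)  # sign pattern of the true vector on its support {0, ..., s-1}

# Lasso irrepresentable condition:
# || A_{S^c}^T A_S (A_S^T A_S)^{-1} sign_S ||_inf < 1
A_S, A_Sc = A[:, :s], A[:, s:]
irc = np.abs(A_Sc.T @ A_S @ np.linalg.solve(A_S.T @ A_S, sign_S)).max()
print(f"value {irc:.3f}; condition holds: {irc < 1}")
```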