no code implementations • ICML 2020 • Fabian Pedregosa, Damien Scieur
We develop a framework for designing optimal optimization methods in terms of their average-case runtime.
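To make the average-case criterion concrete, here is a schematic worked equation (notation ours, not the abstract's): on a quadratic f(x) = ½ (x − x*)ᵀ H (x − x*) with random H and x₀, a first-order method after k steps is described by a residual polynomial P_k with P_k(0) = 1, and its average-case performance is the expected error

    x_k - x^\star = P_k(H)\,(x_0 - x^\star),
    \qquad
    \mathbb{E}\,\|x_k - x^\star\|^2 = \int P_k(\lambda)^2 \, d\mu(\lambda),

where \mu is the expected spectral distribution of H; an average-case optimal method chooses P_k to minimize this integral.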
no code implementations • ICML 2020 • Samy Jelassi, Carles Domingo-Enrich, Damien Scieur, Arthur Mensch, Joan Bruna
Data-driven modeling increasingly requires finding a Nash equilibrium in multi-player games, e.g., when training GANs.
no code implementations • ICML 2020 • Damien Scieur, Fabian Pedregosa
We consider the average-case runtime analysis of algorithms for minimizing quadratic objectives.
1 code implementation • 25 May 2023 • Adrien Courtois, Damien Scieur, Jean-Michel Morel, Pablo Arias, Thomas Eboli
We propose SING (StabIlized and Normalized Gradient), a plug-and-play technique that improves the stability and generalization of the Adam(W) optimizer.
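As a rough illustration of what a plug-and-play gradient transformation looks like in practice, here is a minimal PyTorch-style sketch that rescales each parameter's gradient before the AdamW step; the function name and the exact normalization are assumptions for illustration, not the paper's definition of SING.

    import torch

    def normalize_gradients(model, eps=1e-8):
        # Hypothetical plug-in step (illustration only): rescale each parameter's
        # gradient to unit Euclidean norm before the optimizer update.
        for p in model.parameters():
            if p.grad is not None:
                p.grad.div_(p.grad.norm() + eps)

    # Usage sketch, assuming a standard training loop:
    # loss.backward()
    # normalize_gradients(model)
    # optimizer.step()   # optimizer = torch.optim.AdamW(model.parameters())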
no code implementations • 27 Sep 2022 • Damien Scieur, Quentin Bertrand, Gauthier Gidel, Fabian Pedregosa
Computing the Jacobian of the solution of an optimization problem is a central problem in machine learning, with applications in hyperparameter optimization, meta-learning, optimization as a layer, and dataset distillation, to name a few.
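For background, the identity underlying such Jacobians (a standard consequence of the implicit function theorem, not specific to this paper): if x*(θ) = argmin_x f(x, θ) and the Hessian at the solution is invertible, then

    \frac{\partial x^\star(\theta)}{\partial \theta}
    = -\bigl[\nabla^2_{xx} f(x^\star(\theta), \theta)\bigr]^{-1}
      \nabla^2_{x\theta} f(x^\star(\theta), \theta).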
no code implementations • 20 Jun 2022 • Leonardo Cunha, Gauthier Gidel, Fabian Pedregosa, Damien Scieur, Courtney Paquette
The recently developed average-case analysis of optimization methods allows for a more fine-grained and representative convergence analysis than the usual worst-case results.
no code implementations • 12 Nov 2021 • Rémi Le Priol, Frederik Kunstner, Damien Scieur, Simon Lacoste-Julien
We consider the problem of upper bounding, in a non-asymptotic way, the expected log-likelihood sub-optimality of the maximum likelihood estimate (MLE), or of a conjugate maximum a posteriori (MAP) estimate, for an exponential family.
no code implementations • 25 Jun 2021 • Damien Scieur, Youngsung Kim
This paper considers classification problems with hierarchically organized classes.
1 code implementation • 23 Jan 2021 • Alexandre d'Aspremont, Damien Scieur, Adrien Taylor
This monograph covers some recent advances in a range of acceleration techniques frequently used in convex optimization.
no code implementations • 6 Nov 2020 • Damien Scieur, Lewis Liu, Thomas Pumir, Nicolas Boumal
Quasi-Newton techniques approximate the Newton step by estimating the Hessian using the so-called secant equations.
Optimization and Control • Numerical Analysis
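For reference, the secant equations in question are the standard quasi-Newton conditions: with step s_k and gradient change y_k, the new Hessian approximation B_{k+1} must satisfy

    s_k = x_{k+1} - x_k, \qquad
    y_k = \nabla f(x_{k+1}) - \nabla f(x_k), \qquad
    B_{k+1} s_k = y_k .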
no code implementations • ICLR 2021 • Carles Domingo-Enrich, Fabian Pedregosa, Damien Scieur
First, we show that for zero-sum bilinear games the average-case optimal method is the optimal method for the minimization of the Hamiltonian.
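For readers unfamiliar with the terminology (standard definitions, not taken from the abstract): for the zero-sum bilinear game min_x max_y x^T A y, the game's vector field and its Hamiltonian are

    v(x, y) = (A y,\; -A^\top x),
    \qquad
    H(x, y) = \tfrac12 \|v(x, y)\|^2 = \tfrac12\bigl(\|A y\|^2 + \|A^\top x\|^2\bigr),

so "minimizing the Hamiltonian" means applying a minimization method to H.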
no code implementations • 12 Feb 2020 • Fabian Pedregosa, Damien Scieur
We develop a framework for the average-case analysis of random quadratic problems and derive algorithms that are optimal under this analysis.
no code implementations • 2 Jan 2020 • Waïss Azizian, Damien Scieur, Ioannis Mitliagkas, Simon Lacoste-Julien, Gauthier Gidel
Using this perspective, we propose an optimal algorithm for bilinear games.
1 code implementation • 29 May 2019 • Samy Jelassi, Carles Domingo-Enrich, Damien Scieur, Arthur Mensch, Joan Bruna
Data-driven modeling increasingly requires finding a Nash equilibrium in multi-player games, e.g., when training GANs.
1 code implementation • 1 Jun 2018 • Damien Scieur, Edouard Oyallon, Alexandre d'Aspremont, Francis Bach
The Regularized Nonlinear Acceleration (RNA) algorithm is an acceleration method capable of improving the rate of convergence of many optimization schemes such as gradient descent, SAGA, or SVRG.
no code implementations • 24 May 2018 • Damien Scieur, Edouard Oyallon, Alexandre d'Aspremont, Francis Bach
Regularized nonlinear acceleration (RNA) estimates the minimum of a function by post-processing iterates from an algorithm such as the gradient method.
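As a rough sketch of this kind of post-processing (the regularization constant, scaling, and which iterates are combined are chosen here for illustration; see the paper for the exact RNA formulation): given the last few iterates of a scheme such as gradient descent, one fits regularized weights to the residuals and returns the corresponding weighted combination of iterates.

    import numpy as np

    def rna_extrapolate(X, lam=1e-8):
        # X: array of shape (k+1, d) holding successive iterates x_0, ..., x_k.
        R = np.diff(X, axis=0)                   # residuals between consecutive iterates, (k, d)
        G = R @ R.T                              # Gram matrix of residuals, (k, k)
        G = G / np.linalg.norm(G)                # scale before regularizing
        k = G.shape[0]
        ones = np.ones(k)
        z = np.linalg.solve(G + lam * np.eye(k), ones)
        c = z / z.sum()                          # weights summing to one
        return c @ X[:-1]                        # extrapolated point

    # Usage sketch: collect iterates from any fixed-point scheme, then extrapolate.
    # x_hat = rna_extrapolate(np.stack(iterates))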
no code implementations • NeurIPS 2017 • Damien Scieur, Vincent Roulet, Francis Bach, Alexandre d'Aspremont
We show that accelerated optimization methods can be seen as particular instances of multi-step integration schemes from numerical analysis, applied to the gradient flow equation.
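Concretely (standard background, with coefficients ρ_i, σ_i left generic): gradient descent with step size h is the explicit Euler discretization of the gradient flow ODE, and an s-step linear multi-step scheme combines several past iterates and gradients:

    \dot{x}(t) = -\nabla f\bigl(x(t)\bigr),
    \qquad
    x_{k+1} = x_k - h\,\nabla f(x_k),
    \qquad
    x_{k+1} = \sum_{i=0}^{s-1} \rho_i\, x_{k-i} - h \sum_{i=0}^{s-1} \sigma_i\, \nabla f(x_{k-i}).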
no code implementations • NeurIPS 2017 • Damien Scieur, Francis Bach, Alexandre d'Aspremont
Here, we study extrapolation methods in a stochastic setting, where the iterates are produced by either a simple or an accelerated stochastic gradient algorithm.