no code implementations • 30 Apr 2024 • Jérôme Bolte, Tam Le, Éric Moulines, Edouard Pauwels
Motivated by the widespread use of approximate derivatives in machine learning and optimization, we study inexact subgradient methods with non-vanishing additive errors and non-vanishing step sizes.
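The setting can be illustrated with a toy sketch (everything below is illustrative, not the paper's scheme: the objective |x|, the uniform error model, and all names are my own assumptions). The subgradient is perturbed by a bounded error that never vanishes, and the step size is constant.

```python
import numpy as np

def inexact_subgradient_descent(x0, eps=0.05, step=0.1, iters=200, seed=0):
    """Subgradient descent on f(x) = |x| with a bounded additive error."""
    rng = np.random.default_rng(seed)
    x = float(x0)
    for _ in range(iters):
        g = np.sign(x)                              # a subgradient of |x|
        g_noisy = g + eps * rng.uniform(-1.0, 1.0)  # non-vanishing additive error
        x -= step * g_noisy                         # constant step size
    return x

# With constant steps and a persistent error, the iterates do not converge to
# the minimizer but settle in a neighborhood of it whose radius scales with
# the step size and the error level.
x_final = inexact_subgradient_descent(3.0)
```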
no code implementations • 15 Dec 2022 • Jérôme Bolte, Edouard Pauwels, Antonio José Silveti-Falls
We leverage path differentiability and a recent result on nonsmooth implicit differentiation calculus to give sufficient conditions ensuring that the solution to a monotone inclusion problem will be path differentiable, with formulas for computing its generalized gradient.
no code implementations • 1 Jun 2022 • Jérôme Bolte, Ryan Boustany, Edouard Pauwels, Béatrice Pesquet-Popescu
Using the notion of conservative gradient, we provide a simple model to estimate the computational costs of the backward and forward modes of algorithmic differentiation for a wide class of nonsmooth programs.
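A rough companion sketch of the cost asymmetry between the two modes (my own toy example, not the paper's cost model): forward-mode AD propagates one tangent per sweep, so the full gradient of f : R^n → R costs n forward sweeps, while reverse mode recovers it in a single backward sweep. The dual-number forward mode below handles the nonsmooth relu by assigning derivative 0 at the kink, one valid conservative choice.

```python
class Dual:
    """Dual numbers a + b*eps for forward-mode automatic differentiation."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a + other.a, self.b + other.b)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)
    __rmul__ = __mul__

def relu(x):
    # derivative chosen as 0 at the kink: one valid conservative choice
    if isinstance(x, Dual):
        return Dual(max(x.a, 0.0), x.b if x.a > 0 else 0.0)
    return max(x, 0.0)

def f(x):
    # f(x) = sum_i relu(x_i)^2, a simple nonsmooth program
    return sum(relu(xi) * relu(xi) for xi in x)

def forward_grad(x):
    """Full gradient via forward mode: one sweep per coordinate, cost ~ n."""
    n = len(x)
    grad = []
    for i in range(n):
        duals = [Dual(xj, 1.0 if j == i else 0.0) for j, xj in enumerate(x)]
        grad.append(f(duals).b)
    return grad

grad = forward_grad([1.0, -2.0, 3.0])   # -> [2.0, 0.0, 6.0]
```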
no code implementations • 31 May 2022 • Jérôme Bolte, Edouard Pauwels, Samuel Vaiter
Is there a limiting object for nonsmooth piggyback automatic differentiation (AD)?
1 code implementation • NeurIPS 2021 • David Bertoin, Jérôme Bolte, Sébastien Gerchinovitz, Edouard Pauwels
In theory, the choice of ReLU'(0) in [0, 1] for a neural network has a negligible influence both on backpropagation and training.
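A minimal check of the theoretical point (my own illustration, not the paper's experiments, which study when the choice does matter numerically): backpropagation through relu with derivative s at 0 differs across choices of s only when an activation is exactly zero.

```python
def relu(x):
    return max(x, 0.0)

def relu_grad(x, s=0.0):
    # s in [0, 1] is the chosen value of the derivative at 0
    if x > 0:
        return 1.0
    if x < 0:
        return 0.0
    return s

# the two conventions agree everywhere away from 0 ...
assert relu_grad(0.3, s=0.0) == relu_grad(0.3, s=1.0) == 1.0
assert relu_grad(-0.3, s=0.0) == relu_grad(-0.3, s=1.0) == 0.0
# ... and differ only when an activation is exactly 0
assert relu_grad(0.0, s=0.0) == 0.0 and relu_grad(0.0, s=1.0) == 1.0
```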
no code implementations • NeurIPS 2021 • Jérôme Bolte, Tam Le, Edouard Pauwels, Antonio Silveti-Falls
In view of training increasingly complex learning architectures, we establish a nonsmooth implicit function theorem with an operational calculus.
1 code implementation • 5 Mar 2021 • Camille Castera, Jérôme Bolte, Cédric Févotte, Edouard Pauwels
Aiming for a direct and simple improvement of vanilla SGD, this paper presents a fine-tuning of its step sizes in the mini-batch case.
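For reference, a plain mini-batch SGD skeleton with a simple decaying step-size schedule (illustrative only; the paper's actual step-size tuning rule is more refined, and the quadratic objective and all names here are my own assumptions):

```python
import random

def sgd(data, x0=0.0, step0=0.1, decay=0.05, epochs=50, batch=4, seed=0):
    """Mini-batch SGD on f(x) = mean_i (x - a_i)^2, minimized at mean(data)."""
    rng = random.Random(seed)
    data = list(data)          # private copy, reshuffled each epoch
    x, t = x0, 0
    for _ in range(epochs):
        rng.shuffle(data)
        for k in range(0, len(data), batch):
            mb = data[k:k + batch]
            g = sum(2.0 * (x - a) for a in mb) / len(mb)  # mini-batch gradient
            step = step0 / (1.0 + decay * t)              # decaying step size
            x -= step * g
            t += 1
    return x

x_hat = sgd([float(i) for i in range(10)])   # true minimizer is 4.5
```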
no code implementations • 17 Jul 2020 • Jérôme Bolte, Lilian Glaudin, Edouard Pauwels, Mathieu Serrurier
We present a new algorithm for solving min-max or min-min problems beyond the convex setting.
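As background, the baseline such methods compete with is simultaneous gradient descent-ascent (a generic sketch, not the paper's algorithm; the bilinear-plus-quadratic objective below is my own choice):

```python
def gda(x0=1.0, y0=1.0, step=0.05, iters=2000):
    """Gradient descent-ascent on f(x, y) = x*y + 0.1*x**2 - 0.1*y**2."""
    x, y = x0, y0
    for _ in range(iters):
        gx = y + 0.2 * x                       # df/dx
        gy = x - 0.2 * y                       # df/dy
        x, y = x - step * gx, y + step * gy    # descend in x, ascend in y
    return x, y

x_star, y_star = gda()   # iterates spiral in toward the saddle point (0, 0)
```

On this objective the simultaneous updates contract toward the saddle point; on genuinely nonconvex-nonconcave problems they can cycle or diverge, which is what motivates dedicated algorithms.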
no code implementations • 23 Sep 2019 • Jérôme Bolte, Edouard Pauwels
Modern problems in AI or in numerical analysis require nonsmooth approaches with a flexible calculus.
2 code implementations • 29 May 2019 • Camille Castera, Jérôme Bolte, Cédric Févotte, Edouard Pauwels
We prove the convergence of INNA for most deep learning problems.