1 code implementation • 5 Jan 2024 • Ryan Boustany
Training with SGD, we study the impact of different choices of nonsmooth Jacobian for the MaxPool function at 16-bit and 32-bit numerical precision.
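The ambiguity this abstract refers to arises at ties: when several entries in a pooling window share the maximum, the Jacobian of MaxPool is not unique, and autodiff frameworks must pick one valid choice. A minimal NumPy sketch of two such choices (the function name and `tie_break` knob are illustrative, not from the paper):

```python
import numpy as np

def maxpool_backward(x, grad_out, tie_break="first"):
    """Backward pass of max over a 1-D window.

    At a tie, the nonsmooth Jacobian is not unique: routing the
    incoming gradient to the first argmax or splitting it across
    all argmaxes are both valid choices, and frameworks differ.
    """
    g = np.zeros_like(x, dtype=float)
    if tie_break == "first":
        g[np.argmax(x)] = grad_out           # all gradient to first argmax
    elif tie_break == "split":
        mask = (x == x.max())
        g[mask] = grad_out / mask.sum()      # gradient shared among argmaxes
    return g

x = np.array([1.0, 3.0, 3.0, 2.0])           # tie at indices 1 and 2
print(maxpool_backward(x, 1.0, "first"))      # [0. 1. 0. 0.]
print(maxpool_backward(x, 1.0, "split"))      # [0.  0.5 0.5 0. ]
```

In exact arithmetic both choices agree almost everywhere, but at low (e.g. 16-bit) precision rounding creates many more exact ties, so the choice can visibly affect training.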
no code implementations • 1 Jun 2022 • Jérôme Bolte, Ryan Boustany, Edouard Pauwels, Béatrice Pesquet-Popescu
Using the notion of conservative gradient, we provide a simple model to estimate the computational costs of the backward and forward modes of algorithmic differentiation for a wide class of nonsmooth programs.