no code implementations • 22 Jul 2023 • Keyi Chen, Francesco Orabona
Due to its speed and simplicity, subgradient descent is one of the most widely used optimization algorithms in convex machine learning.
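As a quick illustration of the method this entry builds on (not the paper's algorithm), here is a minimal sketch of subgradient descent on the nonsmooth function f(x) = |x - 3| with the standard 1/sqrt(t) step size; the function names and the step-size scale are assumptions made for the example.

```python
import math

def subgradient_descent(f, subgrad, x0, steps=2000, scale=1.0):
    """Run subgradient descent with step size scale / sqrt(t).

    Returns the best iterate seen, which is the standard guarantee
    for nonsmooth objectives (the last iterate may oscillate).
    """
    x, best = x0, x0
    for t in range(1, steps + 1):
        x = x - (scale / math.sqrt(t)) * subgrad(x)
        if f(x) < f(best):
            best = x
    return best

# Example objective: f(x) = |x - 3|, with subgradient sign(x - 3).
f = lambda x: abs(x - 3.0)
g = lambda x: 1.0 if x > 3.0 else (-1.0 if x < 3.0 else 0.0)
```

With a decreasing 1/sqrt(t) step, the iterates close in on the minimizer x = 3 even though f is not differentiable there.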
no code implementations • 31 May 2023 • Keyi Chen, Francesco Orabona
We propose a new class of online learning algorithms, generalized implicit Follow-The-Regularized-Leader (FTRL), that expands the scope of the FTRL framework.
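For context on the FTRL framework this entry generalizes, here is an illustrative sketch of linearized FTRL with a fixed quadratic regularizer R(x) = x^2 / (2 * eta) in one dimension; this is a textbook special case, not the paper's generalized implicit variant, and all names are assumptions. With this regularizer the FTRL iterate x_{t+1} = argmin_x (sum of g_i * x + R(x)) has the closed form x_{t+1} = -eta * (sum of past gradients).

```python
def ftrl_quadratic(gradients, eta=0.1):
    """FTRL with R(x) = x**2 / (2 * eta) on linearized losses g_t * x."""
    grad_sum = 0.0
    iterates = [0.0]                      # x_1 = argmin R(x) = 0
    for g in gradients:
        grad_sum += g
        iterates.append(-eta * grad_sum)  # closed-form minimizer
    return iterates
```

This special case coincides with online gradient descent from the origin; richer regularizers and implicit updates are what extensions of the framework vary.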
no code implementations • 19 Mar 2022 • Keyi Chen, Ashok Cutkosky, Francesco Orabona
Parameter-free algorithms are online learning algorithms that do not require setting learning rates.
no code implementations • 12 Jun 2020 • Keyi Chen, John Langford, Francesco Orabona
Parameter-free stochastic gradient descent (PFSGD) algorithms do not require setting learning rates while achieving optimal theoretical performance.
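To illustrate the parameter-free idea behind these two entries (no learning rate to set), here is a sketch of the classic Krichevsky-Trofimov (KT) coin-betting bettor, a standard building block for parameter-free online learning; this is a generic textbook scheme, not the specific algorithm of either paper, and the names are assumptions.

```python
def kt_bettor(rewards):
    """KT coin betting on outcomes c_t in [-1, 1].

    At round t, bet the signed fraction beta_t = (sum of past c_i) / t
    of current wealth. No step size or learning rate appears anywhere;
    since |beta_t| < 1, wealth stays strictly positive.
    """
    wealth, sum_c = 1.0, 0.0
    bets = []
    for t, c in enumerate(rewards, start=1):
        beta = sum_c / t        # KT betting fraction, in (-1, 1)
        w = beta * wealth       # bet a fraction of current wealth
        bets.append(w)
        wealth += c * w         # settle the bet on outcome c
        sum_c += c
    return wealth, bets
```

The connection to optimization: a bettor whose wealth grows fast on the "coin outcomes" given by negative gradients induces an online learner with a regret bound that adapts to the unknown comparator, removing the need to tune a learning rate.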