no code implementations • 31 Mar 2024 • Itai Kreisler, Maor Ivgi, Oliver Hinder, Yair Carmon
We propose a method that achieves near-optimal rates for smooth stochastic convex optimization and requires essentially no prior knowledge of problem parameters.
no code implementations • 16 Feb 2024 • Yair Carmon, Oliver Hinder
We prove impossibility results for adaptivity in non-smooth stochastic convex optimization.
1 code implementation • NeurIPS 2023 • Jungtaek Kim, Mingxuan Li, Oliver Hinder, Paul W. Leu
Electrodynamic simulations are essential for designing and understanding these nanophotonic structures.
1 code implementation • 8 Feb 2023 • Maor Ivgi, Oliver Hinder, Yair Carmon
Empirically, we consider a broad range of vision and language transfer learning tasks, and show that DoG's performance is close to that of SGD with a tuned learning rate.
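For context, below is a minimal sketch of a DoG-style distance-over-gradients step size; the helper name `dog_sgd`, the initial-distance floor `r_eps`, and the toy quadratic objective are illustrative assumptions rather than the reference implementation.

```python
import numpy as np

def dog_sgd(grad, x0, steps=1000, r_eps=1e-6):
    """Sketch of a DoG-style parameter-free gradient method.

    Step size eta_t = rbar_t / sqrt(sum_{i<=t} ||g_i||^2), where
    rbar_t = max(r_eps, max_{i<=t} ||x_i - x_0||).
    """
    x = x0.copy()
    rbar = r_eps               # proxy for the (unknown) initial distance to the optimum
    grad_sq_sum = 0.0
    for _ in range(steps):
        g = grad(x)
        grad_sq_sum += float(np.dot(g, g))
        rbar = max(rbar, float(np.linalg.norm(x - x0)))
        eta = rbar / (np.sqrt(grad_sq_sum) + 1e-12)
        x = x - eta * g
    return x

# Toy usage: minimize a quadratic whose optimum is at (1, ..., 1).
if __name__ == "__main__":
    target = np.ones(5)
    grad = lambda x: 2.0 * (x - target)
    print(dog_sgd(grad, x0=np.zeros(5)))
```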
1 code implementation • 2 Sep 2022 • Zhaonan Qu, Wenzhi Gao, Oliver Hinder, Yinyu Ye, Zhengyuan Zhou
Moreover, our implementation of customized solvers, combined with a random row/column sampling step, can find near-optimal diagonal preconditioners for matrices up to size 200,000 in reasonable time, demonstrating their practical appeal.
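As a rough illustration of what a diagonal preconditioner does (not the paper's customized solvers), the snippet below applies simple Jacobi-style diagonal scaling to an ill-conditioned matrix and compares condition numbers before and after; the matrix and its size are made up for the example.

```python
import numpy as np

# Illustration only: Jacobi (diagonal) scaling as a simple stand-in for the
# near-optimal diagonal preconditioners computed by customized solvers.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50))
M = A.T @ A + 1e-3 * np.eye(50)      # ill-conditioned PSD matrix

d = np.sqrt(np.diag(M))              # scaling vector, D = diag(d)
M_prec = M / np.outer(d, d)          # D^{-1} M D^{-1}

print("condition number before:", np.linalg.cond(M))
print("condition number after :", np.linalg.cond(M_prec))
```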
no code implementations • 4 May 2022 • Yair Carmon, Oliver Hinder
We develop an algorithm for parameter-free stochastic convex optimization (SCO) whose rate of convergence is only a double-logarithmic factor larger than the optimal rate for the corresponding known-parameter setting.
no code implementations • NeurIPS 2020 • John C. Duchi, Oliver Hinder, Andrew Naber, Yinyu Ye
We present an extension of the conditional gradient method to problems whose feasible sets are convex cones.
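For reference, here is a minimal sketch of the classical conditional gradient (Frank-Wolfe) iteration on a compact set (the probability simplex); the paper's extension to conic feasible sets is not reproduced, and the quadratic objective is an illustrative assumption.

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, steps=200):
    """Classical conditional gradient (Frank-Wolfe) on the probability simplex.

    Each step solves the linear subproblem min_{s in simplex} <grad f(x), s>,
    whose solution is a vertex (a coordinate basis vector), then moves toward it.
    """
    x = x0.copy()
    for t in range(1, steps + 1):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0        # linear minimization oracle over the simplex
        gamma = 2.0 / (t + 2.0)      # standard step-size schedule
        x = (1 - gamma) * x + gamma * s
    return x

# Toy usage: minimize ||x - target||^2 over the simplex; target is already feasible,
# so the method should approximately recover it.
target = np.array([0.9, 0.1, 0.0])
grad = lambda x: 2.0 * (x - target)
print(frank_wolfe_simplex(grad, np.ones(3) / 3.0))
```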
no code implementations • NeurIPS 2020 • Rudy Bunel, Oliver Hinder, Srinadh Bhojanapalli, Krishnamurthy Dvijotham
We establish theoretical properties of the nonconvex formulation, showing that it is (almost) free of spurious local minima and has the same global optimum as the convex problem.
1 code implementation • 15 Jun 2020 • Oliver Hinder, Miles Lubin
We provide a simple and generic adaptive restart scheme for convex optimization that achieves worst-case bounds matching (up to constant multiplicative factors) those of optimal restart schemes requiring knowledge of problem-specific constants.
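A minimal sketch of one generic doubling-based restart wrapper is shown below; the `run_method` interface, the acceptance rule, and the toy quadratic are assumptions for illustration and do not reproduce the paper's specific scheme or its guarantees.

```python
import numpy as np

def restart_wrapper(run_method, x0, f, total_budget=4096):
    """Sketch of a generic restart wrapper: run the base method with geometrically
    increasing iteration budgets, always restarting from the best point seen so far.

    `run_method(x, num_iters)` is assumed to return the iterate produced by the
    underlying convex-optimization method after `num_iters` iterations from `x`.
    """
    best_x, best_f = x0, f(x0)
    budget, used = 1, 0
    while used + budget <= total_budget:
        x = run_method(best_x, budget)
        used += budget
        if f(x) < best_f:
            best_x, best_f = x, f(x)
        budget *= 2                  # double the restart length each round
    return best_x

# Toy usage: gradient descent on ||x - 3||^2 as the base method.
if __name__ == "__main__":
    f = lambda x: float(np.dot(x - 3.0, x - 3.0))
    def run_method(x, num_iters, lr=0.1):
        for _ in range(num_iters):
            x = x - lr * 2.0 * (x - 3.0)
        return x
    print(restart_wrapper(run_method, np.zeros(2), f))
```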
1 code implementation • 27 Jun 2019 • Oliver Hinder, Aaron Sidford, Nimit S. Sohoni
This function class, which we call the class of smooth quasar-convex functions, is parameterized by a constant $\gamma \in (0, 1]$, where $\gamma = 1$ encompasses the classes of smooth convex and star-convex functions, and smaller values of $\gamma$ indicate that the function can be "more nonconvex."
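For reference, the standard definition of $\gamma$-quasar-convexity with respect to a minimizer $x^\star$ (notation assumed here, stated only to fix the meaning of $\gamma$) is that

$f(x^\star) \ge f(x) + \frac{1}{\gamma} \nabla f(x)^\top (x^\star - x)$ for all $x$, with $\gamma \in (0, 1]$;

taking $\gamma = 1$ recovers the star-convexity inequality at $x^\star$, and smaller $\gamma$ weakens the inequality.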
no code implementations • ICML 2017 • Yair Carmon, John C. Duchi, Oliver Hinder, Aaron Sidford
We develop and analyze a variant of Nesterov’s accelerated gradient descent (AGD) for minimization of smooth non-convex functions.
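For context, below is a minimal sketch of the classical (convex-case) Nesterov AGD iteration that such variants build on; the non-convex safeguards of the paper's method are not reproduced, and the smoothness constant `L` and toy objective are assumptions.

```python
import numpy as np

def nesterov_agd(grad, x0, L, steps=500):
    """Classical Nesterov accelerated gradient descent for an L-smooth function.

    Shown only as the base iteration; the non-convex variant adds additional
    safeguards that are not reproduced here.
    """
    x, y = x0.copy(), x0.copy()
    t = 1.0
    for _ in range(steps):
        x_next = y - grad(y) / L                          # gradient step from the extrapolated point
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_next + (t - 1.0) / t_next * (x_next - x)    # momentum extrapolation
        x, t = x_next, t_next
    return x

# Toy usage: minimize ||x - 1||^2, which is L-smooth with L = 2.
grad = lambda x: 2.0 * (x - 1.0)
print(nesterov_agd(grad, np.zeros(3), L=2.0))
```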
1 code implementation • 23 Jul 2017 • Gabriel Haeser, Oliver Hinder, Yinyu Ye
Alternatively, in the convex case, if primal feasibility is reduced too quickly and the set of Lagrange multipliers is unbounded, then the generated sequence of Lagrange multipliers will be unbounded.