Search Results for author: Oliver Hinder

Found 12 papers, 6 papers with code

Accelerated Parameter-Free Stochastic Optimization

no code implementations · 31 Mar 2024 · Itai Kreisler, Maor Ivgi, Oliver Hinder, Yair Carmon

We propose a method that achieves near-optimal rates for smooth stochastic convex optimization and requires essentially no prior knowledge of problem parameters.

Stochastic Optimization

The Price of Adaptivity in Stochastic Convex Optimization

no code implementations · 16 Feb 2024 · Yair Carmon, Oliver Hinder

We prove impossibility results for adaptivity in non-smooth stochastic convex optimization.

DoG is SGD's Best Friend: A Parameter-Free Dynamic Step Size Schedule

1 code implementation · 8 Feb 2023 · Maor Ivgi, Oliver Hinder, Yair Carmon

Empirically, we consider a broad range of vision and language transfer learning tasks, and show that DoG's performance is close to that of SGD with tuned learning rate.
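The DoG ("Distance over Gradients") schedule sets the step size from quantities observed during the run itself: the maximum distance traveled from the initial point, divided by the square root of the accumulated squared gradient norms. The sketch below is a minimal deterministic illustration of that rule, not the authors' implementation; the function name `dog_sgd` and the default `r_eps` (a small seed for the initial movement estimate) are my own choices.

```python
import numpy as np

def dog_sgd(grad, x0, steps=100, r_eps=1e-4):
    """Toy sketch of the DoG step-size rule.

    At iteration t the step size is
        eta_t = rbar_t / sqrt(sum_{i<=t} ||g_i||^2),
    where rbar_t = max_{i<=t} ||x_i - x_0|| is the largest distance
    traveled from the start, seeded with the small constant r_eps.
    """
    x0 = np.asarray(x0, dtype=float)
    x = x0.copy()
    rbar = r_eps   # running max distance from x0
    gsum = 0.0     # running sum of squared gradient norms
    for _ in range(steps):
        g = np.asarray(grad(x), dtype=float)
        gsum += float(np.dot(g, g))
        eta = rbar / np.sqrt(gsum)
        x = x - eta * g
        rbar = max(rbar, float(np.linalg.norm(x - x0)))
    return x
```

Note the schedule needs no target accuracy or smoothness constant as input; the paper also analyzes the stochastic setting and iterate averaging, which this single-trajectory sketch omits.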

Transfer Learning

Optimal Diagonal Preconditioning

1 code implementation · 2 Sep 2022 · Zhaonan Qu, Wenzhi Gao, Oliver Hinder, Yinyu Ye, Zhengyuan Zhou

Moreover, our implementation of customized solvers, combined with a random row/column sampling step, can find near-optimal diagonal preconditioners for matrices up to size 200,000 in reasonable time, demonstrating their practical appeal.

Making SGD Parameter-Free

no code implementations · 4 May 2022 · Yair Carmon, Oliver Hinder

We develop an algorithm for parameter-free stochastic convex optimization (SCO) whose rate of convergence is only a double-logarithmic factor larger than the optimal rate for the corresponding known-parameter setting.

An efficient nonconvex reformulation of stagewise convex optimization problems

no code implementations · NeurIPS 2020 · Rudy Bunel, Oliver Hinder, Srinadh Bhojanapalli, Krishnamurthy Dvijotham

We establish theoretical properties of the nonconvex formulation, showing that it is (almost) free of spurious local minima and has the same global optimum as the convex problem.

A generic adaptive restart scheme with applications to saddle point algorithms

1 code implementation · 15 Jun 2020 · Oliver Hinder, Miles Lubin

We provide a simple and generic adaptive restart scheme for convex optimization that is able to achieve worst-case bounds matching (up to constant multiplicative factors) optimal restart schemes that require knowledge of problem specific constants.

Optimization and Control

Near-Optimal Methods for Minimizing Star-Convex Functions and Beyond

1 code implementation · 27 Jun 2019 · Oliver Hinder, Aaron Sidford, Nimit S. Sohoni

This function class, which we call the class of smooth quasar-convex functions, is parameterized by a constant $\gamma \in (0, 1]$, where $\gamma = 1$ encompasses the classes of smooth convex and star-convex functions, and smaller values of $\gamma$ indicate that the function can be "more nonconvex."
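For reference, quasar-convexity is commonly stated as follows (this is my paraphrase of the standard definition, not a quotation from the paper): a differentiable function $f$ with minimizer $x^\star$ is $\gamma$-quasar-convex if

```latex
f(x^\star) \;\ge\; f(x) + \frac{1}{\gamma}\,\langle \nabla f(x),\, x^\star - x \rangle
\quad \text{for all } x,
```

so $\gamma = 1$ recovers the usual star-convexity inequality, and smaller $\gamma$ weakens the lower bound that the gradient provides on the suboptimality gap.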

“Convex Until Proven Guilty”: Dimension-Free Acceleration of Gradient Descent on Non-Convex Functions

no code implementations · ICML 2017 · Yair Carmon, John C. Duchi, Oliver Hinder, Aaron Sidford

We develop and analyze a variant of Nesterov’s accelerated gradient descent (AGD) for minimization of smooth non-convex functions.

On the behavior of Lagrange multipliers in convex and non-convex infeasible interior point methods

1 code implementation · 23 Jul 2017 · Gabriel Haeser, Oliver Hinder, Yinyu Ye

Alternatively, in the convex case, if primal feasibility is reduced too fast and the set of Lagrange multipliers is unbounded, then the generated sequence of Lagrange multipliers will be unbounded.

Optimization and Control
