no code implementations • 19 May 2022 • Itay Evron, Edward Moroshko, Rachel Ward, Nati Srebro, Daniel Soudry
In specific settings, we highlight differences between forgetting and convergence to the offline solution as studied in related areas.
no code implementations • NeurIPS 2016 • Blake E. Woodworth, Nati Srebro
We provide tight upper and lower bounds on the complexity of minimizing the average of m convex functions using gradient and prox oracles of the component functions.
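The result concerns an oracle model of finite-sum optimization. As a minimal sketch of that model (not the paper's bounds themselves), the snippet below instantiates gradient and prox oracles for hypothetical rank-one quadratic components f_i(x) = (a_i·x − b_i)²/2 and runs a plain incremental-gradient baseline against them; all names and constants are illustrative assumptions.

```python
import numpy as np

# Oracle model: f(x) = (1/m) * sum_i f_i(x), where the optimizer may
# only query gradients or prox steps of individual components f_i.
# Here f_i(x) = 0.5 * (a_i @ x - b_i)**2, a hypothetical choice.

rng = np.random.default_rng(0)
m, d = 50, 10
A = rng.normal(size=(m, d))
b = rng.normal(size=m)

def grad_oracle(i, x):
    """Gradient of the i-th component f_i at x."""
    return (A[i] @ x - b[i]) * A[i]

def prox_oracle(i, x, eta):
    """Prox step on f_i: argmin_u f_i(u) + ||u - x||^2 / (2*eta).
    For a rank-one quadratic this has a closed form
    (Sherman-Morrison applied to I + eta * a_i a_i^T)."""
    a = A[i]
    r = a @ x - b[i]
    return x - (eta * r / (1.0 + eta * (a @ a))) * a

# A plain incremental-gradient baseline using only these oracles:
x = np.zeros(d)
for t in range(1000):
    i = rng.integers(m)
    x -= 0.01 * grad_oracle(i, x)
```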
no code implementations • NeurIPS 2016 • Yanyao Shen, Qi-Xing Huang, Nati Srebro, Sujay Sanghavi
Algorithmic advances in map synchronization are important for solving a wide range of practical problems, possibly involving large-scale datasets.
no code implementations • NeurIPS 2011 • Elad Hazan, Tomer Koren, Nati Srebro
We present an optimization approach for linear SVMs based on a stochastic primal-dual scheme, where the primal step is akin to an importance-weighted SGD, and the dual step is a stochastic update on the importance weights.
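To make the primal-dual interplay concrete, here is a hedged, simplified sketch (not the authors' exact sublinear-time algorithm): the primal vector w takes importance-weighted hinge-loss SGD steps, while a distribution p over examples is updated multiplicatively toward poorly classified points. The data, step sizes, and update form are illustrative assumptions.

```python
import numpy as np

# Simplified stochastic primal-dual scheme for a linear SVM:
# sample an example by its importance weight, take an unbiased
# (importance-corrected) SGD step on its hinge loss, then raise
# the weight of examples with large loss.

rng = np.random.default_rng(0)
n, d = 200, 20
X = rng.normal(size=(n, d))
y = np.sign(X @ rng.normal(size=d))

w = np.zeros(d)
p = np.ones(n) / n          # importance weights over examples
eta_w, eta_p = 0.1, 0.05    # primal / dual step sizes

for t in range(2000):
    i = rng.choice(n, p=p)                       # sample by importance
    margin = y[i] * (X[i] @ w)
    # Primal: SGD step on the hinge loss, reweighted by 1/(n*p_i)
    # so the update is unbiased for the uniform average loss.
    if margin < 1:
        w += eta_w * (y[i] * X[i]) / (n * p[i])
    # Dual: multiplicative update on the sampled example's weight,
    # favoring examples that are currently poorly classified.
    p[i] *= np.exp(eta_p * max(0.0, 1.0 - margin))
    p /= p.sum()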
no code implementations • NeurIPS 2011 • Nati Srebro, Karthik Sridharan, Ambuj Tewari
We show that for a general class of convex online learning problems, Mirror Descent can always achieve a (nearly) optimal regret guarantee.
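As a concrete instance of Mirror Descent, the sketch below uses the negative-entropy mirror map on the probability simplex (the exponentiated-gradient special case), which for linear losses in [0, 1] attains the classical O(√(T log d)) regret rate; the random losses and step size are illustrative assumptions.

```python
import numpy as np

# Online Mirror Descent with the negative-entropy mirror map:
# multiplicative update in the dual, then a Bregman projection
# (renormalization) back onto the simplex.

rng = np.random.default_rng(0)
d, T = 5, 1000
x = np.ones(d) / d                     # start at the uniform point
eta = np.sqrt(np.log(d) / T)           # standard step size

total, losses_sum = 0.0, np.zeros(d)
for t in range(T):
    loss = rng.random(d)               # adversary's loss vector
    total += x @ loss                  # learner's loss this round
    losses_sum += loss
    x = x * np.exp(-eta * loss)        # mirror step in the dual
    x /= x.sum()                       # project back onto the simplex
regret = total - losses_sum.min()      # vs. the best fixed coordinate
```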
no code implementations • NeurIPS 2011 • Rina Foygel, Ohad Shamir, Nati Srebro, Ruslan R. Salakhutdinov
We provide rigorous guarantees on learning with the weighted trace-norm under arbitrary sampling distributions.
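For reference, the weighted trace norm in this line of work rescales rows and columns by their marginal weights before taking the nuclear norm, i.e. ||X||_{tr(p,q)} = ||diag(√p) X diag(√q)||_*. A minimal sketch, assuming uniform illustrative weights:

```python
import numpy as np

def weighted_trace_norm(X, p, q):
    """Nuclear norm of X after reweighting rows by sqrt(p) and
    columns by sqrt(q)."""
    Xw = np.sqrt(p)[:, None] * X * np.sqrt(q)[None, :]
    return np.linalg.svd(Xw, compute_uv=False).sum()

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))
p = np.full(6, 1 / 6)    # row marginals (illustrative)
q = np.full(4, 1 / 4)    # column marginals (illustrative)
print(weighted_trace_norm(X, p, q))
```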
no code implementations • NeurIPS 2011 • Andrew Cotter, Ohad Shamir, Nati Srebro, Karthik Sridharan
Mini-batch algorithms have recently received significant attention as a way to speed up stochastic convex optimization.
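To illustrate the basic idea, here is a minimal mini-batch SGD sketch: each step averages gradients over a batch of b samples, cutting gradient variance by a factor of b while the b component gradients can be computed in parallel. The least-squares objective and all constants are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Mini-batch SGD on an illustrative least-squares objective.

rng = np.random.default_rng(0)
n, d, b = 10_000, 20, 64
A = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = A @ w_true + 0.1 * rng.normal(size=n)

w = np.zeros(d)
for t in range(500):
    idx = rng.integers(n, size=b)       # sample a mini-batch
    resid = A[idx] @ w - y[idx]
    grad = A[idx].T @ resid / b         # gradient averaged over batch
    w -= 0.05 * grad
```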