Search Results for author: Nati Srebro

Found 7 papers, 0 papers with code

How catastrophic can catastrophic forgetting be in linear regression?

no code implementations19 May 2022 Itay Evron, Edward Moroshko, Rachel Ward, Nati Srebro, Daniel Soudry

In specific settings, we highlight differences between forgetting and convergence to the offline solution as studied in related areas.

Continual Learning, Regression
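
The setting lends itself to a compact simulation. Below is a minimal sketch, not the paper's analysis: in underdetermined linear regression, fitting each task to convergence from the previous weights amounts to a Euclidean projection onto that task's solution set, and "forgetting" is the first task's loss after later tasks are fit. The dimensions and the shared teacher w_star are illustrative choices.

    import numpy as np

    rng = np.random.default_rng(0)
    d = 20                                   # parameter dimension (hypothetical)
    w_star = rng.standard_normal(d)          # shared teacher: tasks are jointly realizable
    tasks = []
    for _ in range(2):
        X = rng.standard_normal((5, d))      # underdetermined task: 5 equations, 20 unknowns
        tasks.append((X, X @ w_star))

    w = np.zeros(d)
    for X, y in tasks:
        # Fitting (X, y) to convergence from w equals projecting w onto {v : X v = y}.
        w = w - X.T @ np.linalg.pinv(X @ X.T) @ (X @ w - y)

    X1, y1 = tasks[0]
    forgetting = np.mean((X1 @ w - y1) ** 2)  # first task's loss after the last task
    print(f"forgetting on task 1: {forgetting:.4f}")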

Tight Complexity Bounds for Optimizing Composite Objectives

no code implementations NeurIPS 2016 Blake E. Woodworth, Nati Srebro

We provide tight upper and lower bounds on the complexity of minimizing the average of m convex functions using gradient and prox oracles of the component functions.
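
For context, here is a minimal sketch of the finite-sum setup those bounds concern: minimizing F(w) = (1/m) sum_i f_i(w) while touching one component oracle per step. The plain incremental stochastic gradient method below is a baseline that such oracle-complexity bounds are measured against, not a method from the paper; the quadratic components and step size are illustrative.

    import numpy as np

    rng = np.random.default_rng(1)
    m, d = 50, 10
    A = rng.standard_normal((m, d))
    b = rng.standard_normal(m)

    def grad_i(w, i):
        # Gradient oracle for the i-th component f_i(w) = 0.5 * (a_i . w - b_i)^2.
        return (A[i] @ w - b[i]) * A[i]

    w = np.zeros(d)
    step = 0.01
    for t in range(5000):
        i = rng.integers(m)                  # one component-oracle call per iteration
        w -= step * grad_i(w, i)

    print(f"F(w) after 5000 oracle calls: {0.5 * np.mean((A @ w - b) ** 2):.4f}")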

Normalized Spectral Map Synchronization

no code implementations NeurIPS 2016 Yanyao Shen, Qi-Xing Huang, Nati Srebro, Sujay Sanghavi

Algorithmic advances in map synchronization are important for solving a wide range of practical problems involving potentially large-scale datasets.
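
A minimal sketch of spectral synchronization, here for 2-D rotations: stack noisy pairwise relative maps R_i R_j^T into a block matrix, normalize it, and read approximate absolute maps off its top eigenvectors. The normalization and the block-wise projection below are generic stand-ins, not necessarily the paper's construction.

    import numpy as np

    def rot(t):
        c, s = np.cos(t), np.sin(t)
        return np.array([[c, -s], [s, c]])

    rng = np.random.default_rng(2)
    n = 8
    R = [rot(a) for a in rng.uniform(0, 2 * np.pi, n)]   # ground-truth maps

    M = np.zeros((2 * n, 2 * n))                         # block matrix of pairwise maps
    for i in range(n):
        for j in range(n):
            M[2*i:2*i+2, 2*j:2*j+2] = R[i] @ R[j].T + 0.1 * rng.standard_normal((2, 2))
    M = (M + M.T) / (2 * n)                              # symmetrize and degree-normalize

    vals, vecs = np.linalg.eigh(M)
    V = vecs[:, -2:] * np.sqrt(n)                        # top-2 eigenspace ~ stacked maps

    est = []
    for i in range(n):
        U, _, Vt = np.linalg.svd(V[2*i:2*i+2, :])        # snap each 2x2 block to O(2)
        est.append(U @ Vt)

    G = est[0].T @ R[0]                                  # fix the global gauge
    err = max(np.linalg.norm(Ri - Ei @ G) for Ri, Ei in zip(R, est))
    print(f"max alignment error after gauge fix: {err:.3f}")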

Beating SGD: Learning SVMs in Sublinear Time

no code implementations NeurIPS 2011 Elad Hazan, Tomer Koren, Nati Srebro

We present an optimization method for linear SVMs based on a stochastic primal-dual approach, where the primal step is akin to importance-weighted SGD and the dual step is a stochastic update on the importance weights.
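
A minimal sketch of that primal-dual structure on the hinge loss, with the caveat that this dense version recomputes all margins each step; the paper's sublinear-time claim rests on estimating such quantities by sampling. Step sizes and the clipping are illustrative.

    import numpy as np

    rng = np.random.default_rng(3)
    n, d = 200, 20
    X = rng.standard_normal((n, d))
    y = np.sign(X @ rng.standard_normal(d))   # separable toy labels

    w = np.zeros(d)
    p = np.ones(n) / n                        # dual variable: distribution over examples
    eta_w, eta_p = 0.05, 0.05
    for t in range(2000):
        i = rng.choice(n, p=p)                # primal: sample by importance weight
        if y[i] * (X[i] @ w) < 1:             # hinge subgradient step,
            w += eta_w * y[i] * X[i] / (n * p[i])   # rescaled by 1/(n p_i) for unbiasedness
        margins = y * (X @ w)                 # dense here; the paper samples instead
        p *= np.exp(-eta_p * np.clip(margins, -1.0, 1.0))  # dual: upweight low margins
        p /= p.sum()

    print(f"training accuracy: {np.mean(np.sign(X @ w) == y):.2f}")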

On the Universality of Online Mirror Descent

no code implementations NeurIPS 2011 Nati Srebro, Karthik Sridharan, Ambuj Tewari

We show that for a general class of convex online learning problems, Mirror Descent can always achieve a (nearly) optimal regret guarantee.
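
One concrete member of that class is mirror descent with an entropic mirror map (exponentiated gradient) on the probability simplex, sketched below for linear losses in [0, 1], where its regret is O(sqrt(T log d)). The paper's result concerns the near-optimality of some mirror descent instance for every problem in the class, not this particular update.

    import numpy as np

    rng = np.random.default_rng(4)
    d, T = 10, 1000
    w = np.ones(d) / d                        # start at the uniform distribution
    eta = np.sqrt(np.log(d) / T)              # standard tuning for this geometry
    total, cum_loss = 0.0, np.zeros(d)
    for t in range(T):
        loss = rng.uniform(0, 1, d)           # adversary's per-coordinate linear loss
        total += w @ loss
        cum_loss += loss
        w *= np.exp(-eta * loss)              # mirror (exponentiated-gradient) step
        w /= w.sum()                          # Bregman projection back to the simplex
    print(f"regret: {total - cum_loss.min():.2f}  vs. O(sqrt(T log d)) bound")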

Learning with the weighted trace-norm under arbitrary sampling distributions

no code implementations NeurIPS 2011 Rina Foygel, Ohad Shamir, Nati Srebro, Ruslan R. Salakhutdinov

We provide rigorous guarantees on learning with the weighted trace-norm under arbitrary sampling distributions.
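
A minimal sketch of weighted trace-norm matrix completion under nonuniform sampling, assuming the reparameterization X = D_r^{-1} W D_c^{-1}, under which the weighted trace norm of X becomes the plain trace norm of W and proximal gradient reduces to singular-value thresholding. The smoothing of the weights toward uniform and all constants are illustrative choices, not the paper's guarantees.

    import numpy as np

    rng = np.random.default_rng(5)
    n, k, lam, step = 30, 3, 0.5, 0.2
    X_true = rng.standard_normal((n, k)) @ rng.standard_normal((k, n))  # rank-k target
    p_row = rng.dirichlet(np.ones(n)); p_col = rng.dirichlet(np.ones(n))
    mask = rng.random((n, n)) < 300 * np.outer(p_row, p_col)            # nonuniform sampling

    # Smooth the weights toward uniform so no weight can vanish.
    dr = np.sqrt(n * (0.5 * p_row + 0.5 / n))
    dc = np.sqrt(n * (0.5 * p_col + 0.5 / n))

    W = np.zeros((n, n))                     # optimize W, with X = D_r^-1 W D_c^-1
    for t in range(300):
        X = W / np.outer(dr, dc)
        G = mask * (X - X_true) / np.outer(dr, dc)          # gradient wrt W
        U, s, Vt = np.linalg.svd(W - step * G, full_matrices=False)
        W = U @ np.diag(np.maximum(s - step * lam, 0)) @ Vt # singular-value thresholding
    X = W / np.outer(dr, dc)
    err = np.linalg.norm(mask * (X - X_true)) / np.linalg.norm(mask * X_true)
    print(f"relative error on observed entries: {err:.3f}")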

Better Mini-Batch Algorithms via Accelerated Gradient Methods

no code implementations NeurIPS 2011 Andrew Cotter, Ohad Shamir, Nati Srebro, Karthik Sridharan

Mini-batch algorithms have recently received significant attention as a way to speed up stochastic convex optimization.
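
A minimal sketch of the idea: Nesterov-style acceleration applied on top of mini-batch stochastic gradients, here for least squares. The momentum schedule, batch size, and step size are generic choices, not the paper's tuned ones.

    import numpy as np

    rng = np.random.default_rng(6)
    n, d, batch = 1000, 20, 32
    A = rng.standard_normal((n, d))
    b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

    w, w_prev = np.zeros(d), np.zeros(d)
    step = 0.05
    for t in range(1, 500):
        mom = (t - 1) / (t + 2)                       # Nesterov momentum schedule
        v = w + mom * (w - w_prev)                    # look-ahead point
        idx = rng.choice(n, batch, replace=False)     # draw a mini-batch
        g = A[idx].T @ (A[idx] @ v - b[idx]) / batch  # stochastic gradient at v
        w_prev, w = w, v - step * g
    print(f"final loss: {0.5 * np.mean((A @ w - b) ** 2):.4f}")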
