Search Results for author: Matthew Faw

Found 3 papers, 1 paper with code

Beyond Uniform Smoothness: A Stopped Analysis of Adaptive SGD

no code implementations 13 Feb 2023 Matthew Faw, Litu Rout, Constantine Caramanis, Sanjay Shakkottai

Despite the richness of this setting, an emerging line of work achieves the $\widetilde{\mathcal{O}}(\frac{1}{\sqrt{T}})$ rate of convergence only when the noise of the stochastic gradients is deterministically and uniformly bounded.
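
For concreteness, the uniform noise bound referenced above can be contrasted with the weaker affine variance condition studied in the paper below; the notation here is an illustrative assumption rather than taken from either abstract, with $g(x)$ a stochastic gradient of the objective $f$ at $x$:

$\|g(x) - \nabla f(x)\| \le \sigma$ almost surely (deterministic, uniform bound)

$\mathbb{E}\,\|g(x) - \nabla f(x)\|^2 \le \sigma_0^2 + \sigma_1^2\,\|\nabla f(x)\|^2$ (affine variance)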

The Power of Adaptivity in SGD: Self-Tuning Step Sizes with Unbounded Gradients and Affine Variance

no code implementations 11 Feb 2022 Matthew Faw, Isidoros Tziotis, Constantine Caramanis, Aryan Mokhtari, Sanjay Shakkottai, Rachel Ward

We study convergence rates of AdaGrad-Norm, an exemplar of adaptive stochastic gradient descent (SGD) methods in which the step sizes change based on observed stochastic gradients, for minimizing non-convex, smooth objectives.
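
To illustrate the self-tuning step size, here is a minimal AdaGrad-Norm sketch in Python; the names (adagrad_norm, grad_fn, eta, b0) are illustrative assumptions, not from the paper:

    import numpy as np

    def adagrad_norm(grad_fn, x0, eta=1.0, b0=1.0, num_steps=1000):
        # AdaGrad-Norm sketch: a single scalar step size shared by all
        # coordinates, shrinking with the accumulated squared norms of
        # the observed stochastic gradients.
        x = np.asarray(x0, dtype=float)
        b_sq = b0 ** 2  # running sum b_t^2 = b_0^2 + sum_i ||g_i||^2
        for _ in range(num_steps):
            g = grad_fn(x)                     # stochastic gradient at x
            b_sq += np.dot(g, g)               # b_{t+1}^2 = b_t^2 + ||g_t||^2
            x = x - (eta / np.sqrt(b_sq)) * g  # x_{t+1} = x_t - (eta / b_{t+1}) g_t
        return x

On a noisy quadratic, for instance, grad_fn could be lambda x: 2 * x + rng.normal(scale=0.1, size=x.shape) with rng = np.random.default_rng(0). Note that the update never consults the smoothness or noise constants, which is the sense in which the step size is self-tuning.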

Mix and Match: An Optimistic Tree-Search Approach for Learning Models from Mixture Distributions

1 code implementation NeurIPS 2020 Matthew Faw, Rajat Sen, Karthikeyan Shanmugam, Constantine Caramanis, Sanjay Shakkottai

We consider a covariate shift problem where one has access to several different training datasets for the same learning problem and a small validation set which possibly differs from all the individual training distributions.
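
A minimal sketch of the objective such a procedure searches over, with all names here illustrative assumptions (the paper's optimistic tree search over mixture weights itself is omitted):

    import numpy as np

    def sample_mixture(datasets, weights, n, rng):
        # Draw n training examples: pick dataset i with probability
        # weights[i] (weights lie on the probability simplex), then
        # sample an example from that dataset uniformly.
        picks = rng.choice(len(datasets), size=n, p=weights)
        return [datasets[i][rng.integers(len(datasets[i]))] for i in picks]

    def mixture_objective(weights, datasets, val_set, fit, loss, n, rng):
        # Score one point on the simplex of mixture weights: train a
        # model on a sample from the weighted mixture of training
        # datasets, then evaluate it on the small validation set.
        model = fit(sample_mixture(datasets, weights, n, rng))
        return loss(model, val_set)

Here rng is e.g. np.random.default_rng(0), and fit/loss stand in for whatever training and evaluation routines the learner uses; an optimistic tree search would then refine promising regions of the weight simplex based on these noisy validation scores.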
