Search Results for author: Michael B. Cohen

Found 6 papers, 0 papers with code

Relative Lipschitzness in Extragradient Methods and a Direct Recipe for Acceleration

no code implementations · 12 Nov 2020 · Michael B. Cohen, Aaron Sidford, Kevin Tian

We show that standard extragradient methods (i.e., mirror prox and dual extrapolation) recover optimal accelerated rates for first-order minimization of smooth convex functions.

Tasks: regression

Sparsity, variance and curvature in multi-armed bandits

no code implementations · 3 Nov 2017 · Sébastien Bubeck, Michael B. Cohen, Yuanzhi Li

In (online) learning theory the concepts of sparsity, variance and curvature are well-understood and are routinely used to obtain refined regret and generalization bounds.

Tasks: Generalization Bounds, Learning Theory +1

Input Sparsity Time Low-Rank Approximation via Ridge Leverage Score Sampling

no code implementations · 23 Nov 2015 · Michael B. Cohen, Cameron Musco, Christopher Musco

Our method is based on a recursive sampling scheme for computing a representative subset of $A$'s columns, which is then used to find a low-rank approximation.
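A minimal sketch of ridge leverage score column sampling, assuming the scores are computed exactly from an SVD; the paper's point is that they can instead be approximated in input-sparsity time via a recursive sampling scheme. The matrix, rank, and sample size below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
# Low-rank-ish matrix: rank-3 signal plus small noise.
A = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))
A += 0.01 * rng.standard_normal((50, 40))
k = 3

# Ridge leverage scores of A's columns (exact here, approximated recursively
# in the paper), with regularization lambda = ||A - A_k||_F^2 / k.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
lam = (s[k:] ** 2).sum() / k
G_inv = np.linalg.inv(A @ A.T + lam * np.eye(50))
tau = np.array([A[:, i] @ G_inv @ A[:, i] for i in range(40)])

# Sample columns with probability proportional to tau, then rescale.
p = tau / tau.sum()
m = 15
idx = rng.choice(40, size=m, replace=True, p=p)
C = A[:, idx] / np.sqrt(m * p[idx])      # representative column subset

# The span of C contains a good rank-k approximation: project A onto it.
Qc, _ = np.linalg.qr(C)
err = np.linalg.norm(A - Qc @ (Qc.T @ A)) / np.linalg.norm(A)
print(err)                               # small relative error
```

Sampling by ridge leverage scores, rather than plain leverage scores, is what keeps the sample size proportional to k instead of the full rank.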

Optimal approximate matrix product in terms of stable rank

no code implementations · 8 Jul 2015 · Michael B. Cohen, Jelani Nelson, David P. Woodruff

We prove, using the subspace embedding guarantee in a black box way, that one can achieve the spectral norm guarantee for approximate matrix multiplication with a dimensionality-reducing map having $m = O(\tilde{r}/\varepsilon^2)$ rows.

Tasks: Clustering, Dimensionality Reduction +1
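A small numerical illustration of the spectral-norm approximate matrix product guarantee, using a Gaussian sketch as one concrete stand-in for a map with the subspace embedding property; the matrices and sketch size are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 2000, 30
A = rng.standard_normal((n, d))
B = rng.standard_normal((n, d))

m = 400                                        # sketch rows; stands in for O(r_tilde / eps^2)
S = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian sketching matrix

exact = A.T @ B
approx = (S @ A).T @ (S @ B)                   # approximate product from the sketches

# Error measured in spectral norm, scaled as in the AMM guarantee
# ||A^T S^T S B - A^T B||_2 <= eps * ||A||_2 * ||B||_2.
err = np.linalg.norm(approx - exact, 2)
scale = np.linalg.norm(A, 2) * np.linalg.norm(B, 2)
print(err / scale)
```

The effective epsilon here shrinks like the square root of the stable rank divided by m, which is the trade-off the paper makes optimal.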

Dimensionality Reduction for k-Means Clustering and Low Rank Approximation

no code implementations · 24 Oct 2014 · Michael B. Cohen, Sam Elder, Cameron Musco, Christopher Musco, Madalina Persu

We show how to approximate a data matrix $\mathbf{A}$ with a much smaller sketch $\mathbf{\tilde A}$ that can be used to solve a general class of constrained k-rank approximation problems to within $(1+\epsilon)$ error.

Tasks: Clustering, Dimensionality Reduction
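A toy demonstration of the k-means use case: project the data onto a few top singular directions and cluster the sketch instead of the full matrix. The data, the choice of exactly k sketch dimensions, and the simple Lloyd's routine are illustrative assumptions; the paper quantifies how many dimensions suffice for a (1+ε) cost guarantee.

```python
import numpy as np

rng = np.random.default_rng(3)
k = 3
# Three well-separated clusters in 50 dimensions.
centers = 10 * rng.standard_normal((k, 50))
labels_true = rng.integers(0, k, size=300)
A = centers[labels_true] + rng.standard_normal((300, 50))

# Sketch: project rows onto the top-k right singular vectors of A.
_, _, Vt = np.linalg.svd(A, full_matrices=False)
A_sk = A @ Vt[:k].T                      # 300 x k sketch of a 300 x 50 matrix

def lloyd(X, k, iters=25):
    """Basic Lloyd's k-means with farthest-point seeding; returns labels."""
    C = [X[0]]
    for _ in range(k - 1):
        d = ((X[:, None, :] - np.array(C)[None]) ** 2).sum(-1).min(axis=1)
        C.append(X[np.argmax(d)])        # seed far from existing centers
    C = np.array(C)
    for _ in range(iters):
        lab = ((X[:, None, :] - C[None]) ** 2).sum(-1).argmin(axis=1)
        C = np.array([X[lab == j].mean(0) if (lab == j).any() else C[j]
                      for j in range(k)])
    return lab

def cost(X, lab, k):
    """k-means cost of a clustering, measured on the data X."""
    return sum(((X[lab == j] - X[lab == j].mean(0)) ** 2).sum()
               for j in range(k) if (lab == j).any())

lab_sk = lloyd(A_sk, k)                  # cluster the k-dimensional sketch
lab_full = lloyd(A, k)                   # cluster the full 50-dimensional data
# Evaluate both clusterings on the ORIGINAL data: the costs are comparable.
print(cost(A, lab_sk, k), cost(A, lab_full, k))
```

The key point is that the sketch is computed once, independent of the clustering constraint, so any k-means solver can then run on a k-dimensional input.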

Uniform Sampling for Matrix Approximation

no code implementations · 21 Aug 2014 · Michael B. Cohen, Yin Tat Lee, Cameron Musco, Christopher Musco, Richard Peng, Aaron Sidford

In addition to an improved understanding of uniform sampling, our main proof introduces a structural result of independent interest: we show that every matrix can be made to have low coherence by reweighting a small subset of its rows.

Tasks: regression
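The structural claim can be illustrated numerically: a matrix's coherence (maximum row leverage score) can be driven down by scaling a few heavy rows, after which uniform row sampling becomes informative. The matrix, the threshold, and the fixed downweighting factor below are assumptions for illustration, not the paper's reweighting procedure.

```python
import numpy as np

rng = np.random.default_rng(4)
# Tall matrix with a few "spiky" rows that dominate its coherence.
A = rng.standard_normal((200, 5))
A[:3] *= 30.0                            # three high-leverage rows

def leverage_scores(A):
    """Row leverage scores via a thin QR factorization."""
    Q, _ = np.linalg.qr(A)
    return (Q ** 2).sum(axis=1)          # scores sum to rank(A)

tau = leverage_scores(A)
print(tau.max())                         # coherence near 1: uniform sampling would miss these rows

# Reweight (scale down) the small set of high-leverage rows.
W = np.ones(200)
W[tau > 0.1] = 0.05                      # illustrative threshold and weight
tau_w = leverage_scores(W[:, None] * A)
print(tau_w.max())                       # coherence drops; rows are now more exchangeable
```

Only the handful of reweighted rows need special treatment; the rest of the matrix can then be uniformly sampled, which is what makes the iterative row sampling schemes in the paper cheap.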
