no code implementations • 14 Feb 2022 • Jingqiu Ding, Tommaso d'Orsi, Chih-Hung Liu, Stefan Tiegel, David Steurer
We develop the first fast spectral algorithm to decompose a random third-order tensor over $\mathbb{R}^d$ of rank up to $O(d^{3/2}/\text{polylog}(d))$.
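A minimal sketch of one plausible instantiation of this random model; the i.i.d. Gaussian components and the `einsum` construction are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def random_rank_r_tensor(d, r, seed=0):
    """Sum of r random rank-one terms a (x) b (x) c over R^d."""
    rng = np.random.default_rng(seed)
    T = np.zeros((d, d, d))
    for _ in range(r):
        a, b, c = rng.standard_normal((3, d)) / np.sqrt(d)
        T += np.einsum('i,j,k->ijk', a, b, c)
    return T

# Overcomplete regime: the rank r may exceed d, up to ~ d^{3/2} per the result.
T = random_rank_r_tensor(d=20, r=60)
print(T.shape)  # (20, 20, 20)
```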
no code implementations • 10 Dec 2021 • Rares-Darius Buhai, David Steurer
The reason is that such outliers can simulate exponentially small mixing weights even for mixtures whose mixing weights are lower bounded by an inverse polynomial.
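To see the obstruction numerically: an $\varepsilon$-fraction of adversarial points placed far from the data is statistically indistinguishable from an extra mixture component of weight roughly $\varepsilon$. A toy illustration, with all parameters hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n, eps = 10_000, 0.01
inliers = rng.normal(0.0, 1.0, n)                 # genuine component
outliers = rng.normal(50.0, 1.0, int(eps * n))    # adversarial points
sample = np.concatenate([inliers, outliers])
# This sample has exactly the law of a two-component mixture with
# weights (1/(1+eps), eps/(1+eps)) -- the outliers fake a tiny weight.
```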
no code implementations • 16 Nov 2021 • Jingqiu Ding, Tommaso d'Orsi, Rajai Nasser, David Steurer
We develop an efficient algorithm for weak recovery in a robust version of the stochastic block model.
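A sketch of the (non-robust) two-community stochastic block model that such an algorithm is run on; the parameters $a, b$ are illustrative choices, and this is only the generative model, not the paper's robust algorithm:

```python
import numpy as np

def sbm(n, a, b, seed=0):
    """Adjacency matrix of a 2-community SBM with edge probabilities
    a/n inside communities and b/n across them."""
    rng = np.random.default_rng(seed)
    labels = rng.choice([-1, 1], size=n)
    p = np.where(np.equal.outer(labels, labels), a / n, b / n)
    upper = np.triu(rng.random((n, n)) < p, k=1)
    return (upper | upper.T).astype(float), labels

A, labels = sbm(n=4000, a=10, b=2)
# Weak recovery = outputting labels nontrivially correlated with `labels`.
```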
no code implementations • NeurIPS 2021 • Tommaso d'Orsi, Chih-Hung Liu, Rajai Nasser, Gleb Novikov, David Steurer, Stefan Tiegel
For sparse regression, we achieve consistency at the optimal sample size $n\gtrsim (k\log d)/\alpha^2$ with the optimal error rate $O(\sqrt{(k\log d)/(n\cdot \alpha^2)})$, where $n$ is the number of observations, $d$ is the number of dimensions, $k$ is the sparsity of the parameter vector, and $\alpha$ is the fraction of inliers, which may be as small as an inverse polynomial in the number of samples.
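The stated rate as a one-line function (constants suppressed; the numbers below are hypothetical):

```python
import numpy as np

def error_rate(n, d, k, alpha):
    """O(sqrt((k log d) / (n alpha^2))) from the statement above."""
    return np.sqrt(k * np.log(d) / (n * alpha ** 2))

# e.g. d = 10^4 dimensions, k = 20 nonzeros, 10% inliers:
print(error_rate(n=500_000, d=10_000, k=20, alpha=0.1))
```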
no code implementations • 5 Jan 2021 • David Steurer, Stefan Tiegel
We develop a general framework to significantly reduce the degree of sum-of-squares proofs by introducing new variables.
no code implementations • 12 Nov 2020 • Tommaso d'Orsi, Pravesh K. Kothari, Gleb Novikov, David Steurer
Despite a long line of prior work, including explicit studies of perturbation resilience, the best known algorithmic guarantees for Sparse PCA are fragile and break down under small adversarial perturbations.
no code implementations • 30 Sep 2020 • Tommaso d'Orsi, Gleb Novikov, David Steurer
Concretely, we show that the Huber loss estimator is consistent for every sample size $n = \omega(d/\alpha^2)$ and achieves an error rate of $O(\sqrt{d/(\alpha^2 n)})$.
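A toy check of Huber-loss consistency under heavy-tailed corruptions; the Cauchy corruption model and scikit-learn's `HuberRegressor` are illustrative stand-ins for the paper's oblivious-outlier setting:

```python
import numpy as np
from sklearn.linear_model import HuberRegressor

rng = np.random.default_rng(0)
n, d, alpha = 20_000, 20, 0.2
X = rng.standard_normal((n, d))
beta = rng.standard_normal(d)
y = X @ beta + rng.standard_normal(n)
corrupt = rng.random(n) > alpha          # only an alpha fraction stays clean
y[corrupt] += 100.0 * rng.standard_cauchy(corrupt.sum())

est = HuberRegressor(max_iter=1000).fit(X, y)
print(np.linalg.norm(est.coef_ - beta))  # moderate error despite corruptions
```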
no code implementations • NeurIPS 2020 • Jingqiu Ding, Samuel B. Hopkins, David Steurer
For the case of Gaussian noise, the top eigenvector of the given matrix is a widely studied estimator known to achieve optimal statistical guarantees, e.g., in the sense of the celebrated BBP phase transition.
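A spiked-Wigner sketch of that estimator; the normalization and the value of $\lambda$ are illustrative, and above the BBP threshold $\lambda > 1$ the overlap is $\approx \sqrt{1 - 1/\lambda^2}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam = 2000, 2.0                       # lam > 1: above the BBP threshold
v = rng.standard_normal(n); v /= np.linalg.norm(v)
G = rng.standard_normal((n, n))
W = (G + G.T) / np.sqrt(2 * n)           # Wigner noise, spectrum in [-2, 2]
M = lam * np.outer(v, v) + W
top = np.linalg.eigh(M)[1][:, -1]        # top eigenvector estimator
print(abs(top @ v))                      # ~ sqrt(1 - 1/lam^2) ~ 0.87
```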
no code implementations • 30 Jul 2018 • Prasad Raghavendra, Tselil Schramm, David Steurer
On one hand, there is a growing body of work utilizing sum-of-squares proofs for recovering solutions to polynomial systems when the system is feasible.
no code implementations • 30 Nov 2017 • Pravesh K. Kothari, David Steurer
We develop efficient algorithms for estimating low-degree moments of unknown distributions in the presence of adversarial outliers.
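A crude one-dimensional illustration of why naive moment estimates fail here, using the median only as a familiar robust contrast (the paper's estimator is sum-of-squares based and not shown):

```python
import numpy as np

rng = np.random.default_rng(0)
n, eps = 100_000, 0.05
x = rng.standard_normal(n)
x[: int(eps * n)] = 1e6              # an eps-fraction of adversarial outliers
print(np.mean(x))                    # first moment wrecked: ~ eps * 1e6
print(np.median(x))                  # still ~ 0
```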
no code implementations • 30 Sep 2017 • Samuel B. Hopkins, David Steurer
in constant average degree graphs, up to what we conjecture to be the computational threshold for this model.
no code implementations • 27 Jun 2017 • Tselil Schramm, David Steurer
We develop fast spectral algorithms for tensor decomposition that match the robustness guarantees of the best known polynomial-time algorithms for this problem based on the sum-of-squares (SOS) semidefinite programming hierarchy.
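For contrast, the simplest spectral baseline is plain tensor power iteration; a toy method for intuition only, not the paper's algorithm:

```python
import numpy as np

def tensor_power_iteration(T, iters=100, seed=0):
    """Iterate x <- T(., x, x) / ||T(., x, x)||, which converges toward
    a component of T under suitable conditions; a toy baseline only."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(T.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(iters):
        x = np.einsum('ijk,j,k->i', T, x, x)
        x /= np.linalg.norm(x)
    return x
```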
no code implementations • 21 Feb 2017 • Aaron Potechin, David Steurer
We obtain the first polynomial-time algorithm for exact tensor completion that improves over the bound implied by reduction to matrix completion.
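A sketch of the reduction-to-matrix-completion baseline that this result improves on: unfold the $d \times d \times d$ tensor to a $d \times d^2$ matrix and minimize nuclear norm subject to the observed entries. The sizes, sampling rate, and use of `cvxpy` are illustrative assumptions:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
d, r = 8, 2
U, V, W = rng.standard_normal((3, d, r))
T = np.einsum('ir,jr,kr->ijk', U, V, W)           # rank-r ground truth
mask = (rng.random((d, d, d)) < 0.6).astype(float)  # observed entries

A, M = T.reshape(d, d * d), mask.reshape(d, d * d)
X = cp.Variable((d, d * d))
cp.Problem(cp.Minimize(cp.normNuc(X)),
           [cp.multiply(M, X) == cp.multiply(M, A)]).solve()
print(np.linalg.norm(X.value - A) / np.linalg.norm(A))  # relative error
```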
no code implementations • 6 Oct 2016 • Tengyu Ma, Jonathan Shi, David Steurer
We give new algorithms based on the sum-of-squares method for tensor decomposition.
no code implementations • 8 Dec 2015 • Samuel B. Hopkins, Tselil Schramm, Jonathan Shi, David Steurer
For tensor decomposition, we give an algorithm with running time close to linear in the input size (with exponent $\approx 1.086$) that approximately recovers a component of a random 3-tensor over $\mathbb R^n$ of rank up to $\tilde \Omega(n^{4/3})$.
no code implementations • 12 Jul 2015 • Samuel B. Hopkins, Jonathan Shi, David Steurer
We study a statistical model for the tensor principal component analysis problem introduced by Montanari and Richard: Given an order-$3$ tensor $T$ of the form $T = \tau \cdot v_0^{\otimes 3} + A$, where $\tau \geq 0$ is a signal-to-noise ratio, $v_0$ is a unit vector, and $A$ is a random noise tensor, the goal is to recover the planted vector $v_0$.
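Instantiating the spiked tensor model with Gaussian noise and recovering $v_0$ by the simple unfolding spectral method (top left singular vector of the $n \times n^2$ flattening); all parameter choices are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
tau = 10 * n ** 0.75                   # above the ~n^{3/4} unfolding scale
v0 = rng.standard_normal(n); v0 /= np.linalg.norm(v0)
T = tau * np.einsum('i,j,k->ijk', v0, v0, v0) + rng.standard_normal((n, n, n))
u = np.linalg.svd(T.reshape(n, n * n))[0][:, 0]
print(abs(u @ v0))                     # close to 1 at this signal strength
```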
no code implementations • 6 Jul 2014 • Boaz Barak, Jonathan A. Kelner, David Steurer
We give a new approach to the dictionary learning (also known as "sparse coding") problem of recovering an unknown $n\times m$ matrix $A$ (for $m \geq n$) from examples of the form \[ y = Ax + e, \] where $x$ is a random vector in $\mathbb R^m$ with at most $\tau m$ nonzero coordinates, and $e$ is a random noise vector in $\mathbb R^n$ with bounded magnitude.
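Sampling from the model $y = Ax + e$; the Gaussian dictionary and the noise scale are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, tau = 64, 128, 0.05
A = rng.standard_normal((n, m)) / np.sqrt(n)      # unknown dictionary
x = np.where(rng.random(m) < tau,                 # ~ tau*m nonzero coordinates
             rng.standard_normal(m), 0.0)
e = 0.01 * rng.standard_normal(n)                 # bounded-magnitude noise
y = A @ x + e                                     # one observed example
```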
no code implementations • 21 Apr 2014 • Boaz Barak, David Steurer
Two recent developments, the Unique Games Conjecture (UGC) and the Sum-of-Squares (SOS) method, surprisingly suggest that tailoring algorithms to each problem is not necessary and that a single efficient algorithm could achieve the best possible guarantees for a wide range of different problems.
no code implementations • 23 Dec 2013 • Boaz Barak, Jonathan Kelner, David Steurer
Aside from being a natural relaxation, this is also motivated by a connection to the Small Set Expansion problem shown by Barak et al. (STOC 2012), and our results yield an improvement for that problem.