14 Nov 2022 • Tommaso d'Orsi, Rajai Nasser, Gleb Novikov, David Steurer
Using a reduction from the planted clique problem, we provide evidence that quasipolynomial running time is likely necessary for sparse PCA with symmetric noise.
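As a toy illustration of the sparse PCA problem in a benign regime (not the symmetric-noise setting of the paper, and not the paper's algorithm), the sketch below recovers the support of a $k$-sparse spike from samples of a spiked Gaussian model by diagonal thresholding. All parameter values (`n`, `d`, `k`, `beta`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem sizes (assumptions, not from the paper)
n, d, k, beta = 500, 200, 5, 5.0

# k-sparse unit spike supported on the first k coordinates
u = np.zeros(d)
u[:k] = 1.0 / np.sqrt(k)

# Spiked model: x = sqrt(beta) * g * u + z, with g ~ N(0,1), z ~ N(0, I)
g = rng.standard_normal(n)
Z = rng.standard_normal((n, d))
X = np.sqrt(beta) * g[:, None] * u[None, :] + Z

# Diagonal thresholding: support coordinates have inflated sample variance
# (1 + beta/k instead of 1), so the top-k diagonal entries reveal the support.
diag = (X ** 2).mean(axis=0)            # diagonal of the sample covariance
support = np.sort(np.argsort(diag)[-k:])
print(list(support))
```

Diagonal thresholding succeeds here because the per-coordinate variance boost $\beta/k$ dominates the sampling fluctuations of order $\sqrt{1/n}$; in harder regimes (small $\beta$, or the symmetric noise studied in the paper) this simple estimator breaks down.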
24 Jan 2022 • Rajai Nasser, Stefan Tiegel
Further, this continues to hold even if the information-theoretically optimal error $\mathrm{OPT}$ is as small as $\exp\left(-\log^c(d)\right)$, where $d$ is the dimension and $0 < c < 1$ is an arbitrary absolute constant, and an overwhelming fraction of examples are noiseless.
16 Nov 2021 • Jingqiu Ding, Tommaso d'Orsi, Rajai Nasser, David Steurer
We develop an efficient algorithm for weak recovery in a robust version of the stochastic block model.
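For intuition only, here is a sketch of weak recovery in the plain (non-robust) two-community stochastic block model via a centered spectral method; the paper's contribution is an algorithm that additionally tolerates adversarial corruptions, which this baseline does not. The parameters `n`, `a`, `b` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two balanced communities; a/n = within-community edge probability,
# b/n = across-community edge probability (illustrative values)
n, a, b = 600, 25.0, 5.0
labels = np.repeat([1, -1], n // 2)

# Sample a symmetric adjacency matrix of the block model
P = np.where(np.equal.outer(labels, labels), a / n, b / n)
A = np.triu((rng.random((n, n)) < P).astype(float), 1)
A = A + A.T

# Center by the average edge probability to remove the all-ones direction,
# then read communities off the sign of the top eigenvector.
B = A - (a + b) / (2 * n)
top = np.linalg.eigh(B)[1][:, -1]
guess = np.sign(top)

# Overlap bounded away from 0 means weak recovery (better than guessing)
overlap = abs(np.mean(guess * labels))
print(f"overlap = {overlap:.2f}")
```

The signal eigenvalue of the centered expectation is $(a-b)/2$, and weak recovery by this spectral step requires it to beat the noise level of order $\sqrt{(a+b)/2}$; a robust algorithm must achieve this even after an adversary edits a small fraction of edges.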
NeurIPS 2021 • Tommaso d'Orsi, Chih-Hung Liu, Rajai Nasser, Gleb Novikov, David Steurer, Stefan Tiegel
For sparse regression, we achieve consistency at the optimal sample size $n\gtrsim (k\log d)/\alpha^2$ with the optimal error rate $O(\sqrt{(k\log d)/(n\cdot \alpha^2)})$, where $n$ is the number of observations, $d$ is the number of dimensions, and $k$ is the sparsity of the parameter vector; the fraction $\alpha$ of inliers is allowed to be inverse-polynomial in the number of samples.
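To make the $\sqrt{(k\log d)/n}$ error rate concrete in the uncorrupted case $\alpha = 1$ (the paper's robust algorithm for $\alpha \ll 1$ is much more involved), the sketch below solves the Lasso by iterative soft thresholding (ISTA). The sizes, step size, and regularization level $\lambda \sim \sigma\sqrt{(\log d)/n}$ are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative sizes in the n >~ k log d regime (alpha = 1, no outliers)
n, d, k, sigma = 200, 400, 5, 0.5

# k-sparse ground truth and Gaussian design with noisy observations
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[:k] = 1.0
y = X @ w_true + sigma * rng.standard_normal(n)

# Lasso via ISTA: gradient step on the least-squares loss, then
# soft thresholding; lam ~ sigma * sqrt(log d / n) as in the usual theory
lam = 2.0 * sigma * np.sqrt(np.log(d) / n)
eta = 0.1                     # step size, assumed below 1/L for this design
w = np.zeros(d)
for _ in range(1000):
    w = w - eta * (X.T @ (X @ w - y) / n)
    w = np.sign(w) * np.maximum(np.abs(w) - eta * lam, 0.0)

err = np.linalg.norm(w - w_true) / np.linalg.norm(w_true)
print(f"relative error = {err:.3f}")
```

With these sizes the predicted estimation error $\sigma\sqrt{(k\log d)/n} \approx 0.19$ is small relative to $\|w\|_2 = \sqrt{k}$, which is what the check below verifies; when a $1-\alpha$ fraction of observations is adversarial, plain Lasso fails and the paper's robust estimator is needed.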