no code implementations • 25 Feb 2024 • Stefan Tiegel
On the other hand, lower bounds based on well-established assumptions (such as approximating worst-case lattice problems or variants of Feige's 3SAT hypothesis) are only known (or are implied by existing results) for the intersection of super-logarithmically many halfspaces [KS09, KS06, DSS16].
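For reference, a standard definition (not specific to this paper): an intersection of $\ell$ halfspaces over $\mathbb{R}^d$ is a function of the form $f(x) = \bigwedge_{i=1}^{\ell} \mathbb{1}[\langle w_i, x \rangle \geq b_i]$ with weight vectors $w_i \in \mathbb{R}^d$ and thresholds $b_i \in \mathbb{R}$; "super-logarithmically many" means $\ell = \omega(\log d)$.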
no code implementations • 21 Feb 2024 • Rares-Darius Buhai, Jingqiu Ding, Stefan Tiegel
In particular, we show that an improper learning algorithm for sparse linear regression can be used to solve sparse PCA problems (with a negative spike) in their Wishart form, in regimes in which efficient algorithms are widely believed to require at least $\Omega(k^2)$ samples.
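For context, a standard formulation of the Wishart form of sparse PCA with a negative spike (our notation, not necessarily the paper's): one observes samples $x_1, \ldots, x_n \sim N(0, \mathrm{Id}_d + \beta vv^\top)$, where $v$ is a unit vector with at most $k$ nonzero coordinates and $\beta < 0$, and the goal is to detect the spike or recover $v$. The conjectured computational barrier is that efficient algorithms need roughly $k^2$ samples in regimes where $O(k \log d)$ samples already suffice information-theoretically.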
no code implementations • 28 Jul 2022 • Stefan Tiegel
We show hardness of improperly learning halfspaces in the agnostic model, in both the distribution-independent and the distribution-specific setting, based on the assumption that worst-case lattice problems, such as GapSVP or SIVP, are hard.
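To fix the model (standard definitions): in improper agnostic learning of halfspaces, the learner receives i.i.d. samples $(x, y) \sim \mathcal{D}$ over $\mathbb{R}^d \times \{\pm 1\}$ and must output an arbitrary hypothesis $h$ (not necessarily a halfspace, hence "improper") satisfying $\Pr_{(x,y) \sim \mathcal{D}}[h(x) \neq y] \leq \mathrm{OPT} + \varepsilon$, where $\mathrm{OPT} = \min_{f \text{ halfspace}} \Pr_{(x,y) \sim \mathcal{D}}[f(x) \neq y]$.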
no code implementations • 14 Feb 2022 • Jingqiu Ding, Tommaso d'Orsi, Chih-Hung Liu, Stefan Tiegel, David Steurer
We develop the first fast spectral algorithm to decompose a random third-order tensor over $\mathbb{R}^d$ of rank up to $O(d^{3/2}/\text{polylog}(d))$.
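The rank regime $O(d^{3/2}/\text{polylog}(d))$ is overcomplete and needs the paper's machinery; purely as a baseline illustration, the sketch below shows the classical tensor power iteration (a standard technique, not the paper's algorithm) recovering one component of an orthogonally decomposable third-order tensor.

```python
import numpy as np

def tensor_from_components(A, lams):
    """Build T = sum_i lams[i] * a_i (x) a_i (x) a_i from the columns of A."""
    d, r = A.shape
    T = np.zeros((d, d, d))
    for i in range(r):
        a = A[:, i]
        T += lams[i] * np.einsum('p,q,s->pqs', a, a, a)
    return T

def tensor_power_iteration(T, n_iter=100, seed=0):
    """Recover one component of an orthogonally decomposable tensor
    via the map x <- T(I, x, x) / ||T(I, x, x)||."""
    rng = np.random.default_rng(seed)
    d = T.shape[0]
    x = rng.standard_normal(d)
    x /= np.linalg.norm(x)
    for _ in range(n_iter):
        x = np.einsum('pqs,q,s->p', T, x, x)
        x /= np.linalg.norm(x)
    lam = np.einsum('pqs,p,q,s->', T, x, x, x)  # Rayleigh-type quotient
    return lam, x

# Demo in the easy undercomplete regime (r <= d, orthonormal components).
d, r = 20, 5
A, _ = np.linalg.qr(np.random.default_rng(1).standard_normal((d, r)))
lams = np.arange(1, r + 1, dtype=float)
T = tensor_from_components(A, lams)
lam_hat, x_hat = tensor_power_iteration(T)
print(np.abs(A.T @ x_hat).max())  # close to 1: x_hat aligns with some a_i
```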
no code implementations • 24 Jan 2022 • Rajai Nasser, Stefan Tiegel
Further, this continues to hold even if the information-theoretically optimal error $\mathrm{OPT}$ is as small as $\exp\left(-\log^c(d)\right)$, where $d$ is the dimension and $0 < c < 1$ is an arbitrary absolute constant, and an overwhelming fraction of examples are noiseless.
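To make that scale concrete: with $c = 1/2$ and $d = e^{100}$, this allows $\mathrm{OPT} = \exp(-\sqrt{\log d}) = e^{-10}$; asymptotically, $\exp(-\log^c(d))$ is smaller than any constant yet still larger than any inverse polynomial $d^{-\epsilon}$ for large $d$.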
no code implementations • NeurIPS 2021 • Tommaso d'Orsi, Chih-Hung Liu, Rajai Nasser, Gleb Novikov, David Steurer, Stefan Tiegel
For sparse regression, we achieve consistency for optimal sample size $n\gtrsim (k\log d)/\alpha^2$ and optimal error rate $O(\sqrt{(k\log d)/(n\cdot \alpha^2)})$ where $n$ is the number of observations, $d$ is the number of dimensions and $k$ is the sparsity of the parameter vector, allowing the fraction of inliers to be inverse-polynomial in the number of samples.
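As a quick sanity check of this contamination model, here is a toy simulation (our construction; all parameter values are illustrative) where a fraction $\alpha$ of responses carries small noise and the rest are wild symmetric outliers. We fit a Huber-loss estimator via iteratively reweighted least squares as one standard robust baseline, without claiming it matches the paper's analysis; for simplicity the toy ignores the sparsity structure, since here $n \gg d$.

```python
import numpy as np

def huber_irls(X, y, delta=1.0, n_iter=50):
    """Huber-loss regression via iteratively reweighted least squares,
    with a fixed transition scale `delta` (an illustrative toy choice)."""
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        r = y - X @ b
        w = np.minimum(1.0, delta / np.maximum(np.abs(r), 1e-12))  # Huber weights
        Xw = X * w[:, None]
        b = np.linalg.solve(Xw.T @ X, Xw.T @ y)  # weighted normal equations
    return b

rng = np.random.default_rng(0)
n, d, k, alpha = 2000, 50, 5, 0.2          # alpha = fraction of inliers

beta = np.zeros(d)                          # k-sparse ground truth
beta[rng.choice(d, size=k, replace=False)] = rng.standard_normal(k)

X = rng.standard_normal((n, d))
inlier = rng.random(n) < alpha
noise = np.where(inlier, 0.1 * rng.standard_normal(n),   # small inlier noise
                 100.0 * rng.standard_normal(n))          # oblivious outliers
y = X @ beta + noise

b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
b_hub = huber_irls(X, y)
print("OLS   error:", np.linalg.norm(b_ols - beta))
print("Huber error:", np.linalg.norm(b_hub - beta))
```

Because the outliers are symmetric and independent of the design, the reweighted estimator stays essentially unbiased even though outliers outnumber inliers, while plain least squares pays the full outlier variance.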
no code implementations • 5 Jan 2021 • David Steurer, Stefan Tiegel
We develop a general framework to significantly reduce the degree of sum-of-squares proofs by introducing new variables.
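A toy instance of the idea (our example; the paper's framework is far more general): a degree-4 certificate can sometimes be traded for a degree-2 one by naming monomials. For $M \succeq 0$, the polynomial $p(x) = \sum_{i,j} M_{ij}\, x_i^2 x_j^2$ has degree 4 in $x$, but after introducing new variables $y_i$ with the constraints $y_i = x_i^2$ it becomes $p = y^\top M y = \|M^{1/2} y\|^2$, a sum of squares of linear forms in $y$, i.e. a degree-2 proof in the extended variables.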
no code implementations • 5 Apr 2018 • Dariusz Dereniowski, Stefan Tiegel, Przemysław Uznański, Daniel Wolleb-Graf
We then show that our algorithm, coupled with a Chernoff bound argument, yields an algorithm for independent noise that is simpler than that of Emamjomeh-Zadeh et al. [STOC 2016], with a query complexity bound that is both simpler and asymptotically better.
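For intuition, here is the textbook Chernoff-style amplification this alludes to: repeat each noisy comparison enough times that a majority vote errs with negligible probability, then run ordinary binary search (a classroom sketch, not the paper's more query-efficient algorithm).

```python
import math
import random

def noisy_search(n, target, p=0.3, delta=0.01, seed=0):
    """Find `target` in [0, n) using comparison queries that lie independently
    with probability p < 1/2, by majority-voting each comparison.
    Chernoff bound: repeats = O(log(log(n)/delta) / (1/2 - p)^2) suffice."""
    rng = random.Random(seed)
    gap = 0.5 - p
    rounds = math.log2(max(n, 2))
    repeats = math.ceil(math.log(2 * rounds / delta) / (2 * gap * gap))

    def noisy_less(mid):                      # noisy oracle for "target < mid?"
        truth = target < mid
        return truth if rng.random() > p else not truth

    lo, hi = 0, n
    while hi - lo > 1:
        mid = (lo + hi) // 2
        votes = sum(noisy_less(mid) for _ in range(repeats))
        if votes > repeats / 2:
            hi = mid
        else:
            lo = mid
    return lo

print(noisy_search(n=1_000_000, target=123456))  # 123456 with prob >= 1 - delta
```

This uses $O(\log n \cdot \log(\log(n)/\delta))$ queries in total; the point of the improved analyses is to shave such multiplicative log factors.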