Search Results for author: Stefan Tiegel

Found 8 papers, 0 papers with code

Improved Hardness Results for Learning Intersections of Halfspaces

no code implementations • 25 Feb 2024 • Stefan Tiegel

On the other hand, lower bounds based on well-established assumptions (such as the hardness of approximating worst-case lattice problems or variants of Feige's 3SAT hypothesis) are only known (or are implied by existing results) for intersections of super-logarithmically many halfspaces [KS09, KS06, DSS16].
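
For context, the standard definition of the concept class in question (not specific to this paper): an intersection of $k$ halfspaces over $\mathbb{R}^d$ is a function of the form $f(x) = \bigwedge_{i=1}^{k} \mathbb{1}[\langle w_i, x \rangle \ge b_i]$ with $w_i \in \mathbb{R}^d$ and $b_i \in \mathbb{R}$, and "super-logarithmically many" refers to the regime $k = \omega(\log d)$.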

Computational-Statistical Gaps for Improper Learning in Sparse Linear Regression

no code implementations • 21 Feb 2024 • Rares-Darius Buhai, Jingqiu Ding, Stefan Tiegel

In particular, we show that an improper learning algorithm for sparse linear regression can be used to solve sparse PCA problems (with a negative spike) in their Wishart form, in regimes in which efficient algorithms are widely believed to require at least $\Omega(k^2)$ samples.
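
As a point of reference, the Wishart form of sparse PCA mentioned above is commonly stated as follows (standard model, paraphrased here rather than quoted from the paper): one observes $n$ samples $x_1, \dots, x_n \sim N(0, I_d + \beta vv^\top)$, where $v$ is a $k$-sparse unit vector; a negative spike corresponds to $\beta \in (-1, 0)$, and the goal is to detect or recover $v$.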

regression

Hardness of Agnostically Learning Halfspaces from Worst-Case Lattice Problems

no code implementations • 28 Jul 2022 • Stefan Tiegel

We show hardness of improperly learning halfspaces in the agnostic model, both in the distribution-independent as well as the distribution-specific setting, based on the assumption that worst-case lattice problems, such as GapSVP or SIVP, are hard.
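
For context, the agnostic guarantee at stake (standard definition, not specific to this paper): given samples $(x, y)$ from an arbitrary distribution, a learner should output a hypothesis $h$ with $\Pr[h(x) \ne y] \le \mathrm{OPT} + \epsilon$, where $\mathrm{OPT}$ is the error of the best halfspace; improper means $h$ need not itself be a halfspace.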

Fast algorithm for overcomplete order-3 tensor decomposition

no code implementations • 14 Feb 2022 • Jingqiu Ding, Tommaso d'Orsi, Chih-Hung Liu, Stefan Tiegel, David Steurer

We develop the first fast spectral algorithm to decompose a random third-order tensor over $\mathbb{R}^d$ of rank up to $O(d^{3/2}/\text{polylog}(d))$.
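
A standard formulation of this setting (paraphrased for orientation, not quoted from the paper): the input is $T = \sum_{i=1}^{r} a_i \otimes a_i \otimes a_i$ for independent random vectors $a_i \in \mathbb{R}^d$ (e.g., Gaussian), with rank $r$ allowed up to $O(d^{3/2}/\text{polylog}(d))$, i.e., well beyond $d$ (hence "overcomplete"); the goal is to approximately recover the components $a_i$.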

Tensor Decomposition • Tensor Networks

Optimal SQ Lower Bounds for Learning Halfspaces with Massart Noise

no code implementations • 24 Jan 2022 • Rajai Nasser, Stefan Tiegel

Further, this continues to hold even if the information-theoretically optimal error $\mathrm{OPT}$ is as small as $\exp\left(-\log^c(d)\right)$, where $d$ is the dimension and $0 < c < 1$ is an arbitrary absolute constant, and an overwhelming fraction of examples are noiseless.
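
For reference, the Massart (bounded) noise model referenced here is standard: each example $x$ has its true label $\mathrm{sign}(\langle w, x \rangle)$ flipped independently with some probability $\eta(x) \le \eta < 1/2$ that may depend on $x$, and $\mathrm{OPT} = \mathbb{E}[\eta(x)]$ is the error of the target halfspace itself; the "overwhelming fraction of noiseless examples" corresponds to $\eta(x) = 0$ for most $x$.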

Consistent Estimation for PCA and Sparse Regression with Oblivious Outliers

no code implementations • NeurIPS 2021 • Tommaso d'Orsi, Chih-Hung Liu, Rajai Nasser, Gleb Novikov, David Steurer, Stefan Tiegel

For sparse regression, we achieve consistency for the optimal sample size $n \gtrsim (k\log d)/\alpha^2$ and optimal error rate $O(\sqrt{(k\log d)/(n\cdot \alpha^2)})$, where $n$ is the number of observations, $d$ is the number of dimensions, $k$ is the sparsity of the parameter vector, and $\alpha$ is the fraction of inliers, which may be as small as inverse-polynomial in the number of samples.
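
A sketch of the standard oblivious-outlier regression model this refers to (assumptions stated here for orientation, not quoted from the paper): one observes $y = X\beta^* + \eta$, where the noise vector $\eta$ is chosen independently of the design $X$ (hence "oblivious") and is otherwise arbitrary, except that at least an $\alpha$ fraction of its entries are bounded; those entries are the inliers.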

Matrix Completion • regression

SoS Degree Reduction with Applications to Clustering and Robust Moment Estimation

no code implementations • 5 Jan 2021 • David Steurer, Stefan Tiegel

We develop a general framework to significantly reduce the degree of sum-of-squares proofs by introducing new variables.
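
To illustrate the idea of introducing new variables (a toy example, not taken from the paper): a degree-4 polynomial inequality such as $\sum_{i,j,k,l} T_{ijkl}\, x_i x_j x_k x_l \ge 0$ can be rewritten as the degree-2 inequality $\sum_{(i,j),(k,l)} T_{ijkl}\, y_{ij} y_{kl} \ge 0$ in new variables $y_{ij}$, together with the consistency constraints $y_{ij} = x_i x_j$; a degree-4 SoS proof in $x$ can then sometimes be replaced by a degree-2 proof in $(x, y)$.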

Clustering

A Framework for Searching in Graphs in the Presence of Errors

no code implementations • 5 Apr 2018 • Dariusz Dereniowski, Stefan Tiegel, Przemysław Uznański, Daniel Wolleb-Graf

We then show that, coupled with a Chernoff bound argument, our algorithm yields an algorithm for independent noise that is simpler than the one of Emamjomeh-Zadeh et al. [STOC 2016], with a query complexity that is both simpler to state and asymptotically better.
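
For orientation, a minimal sketch of the multiplicative-weights approach to noisy search that this line of work builds on, shown for the special case of a path graph with hypothetical parameters; this is a generic illustration of the Emamjomeh-Zadeh et al. framework, not the algorithm from this paper:

import random

def noisy_path_search(n, target, flip_prob=0.3, rounds=200):
    # Belief weights: w[i] is the current weight of "vertex i is the target".
    w = [1.0] * n

    def query(v):
        # Direction oracle on a path graph; answers incorrectly (uniformly
        # at random) with probability flip_prob, independently each time.
        truth = 'here' if v == target else ('left' if target < v else 'right')
        if random.random() < flip_prob:
            return random.choice(['left', 'right', 'here'])
        return truth

    for _ in range(rounds):
        # Query a weighted median vertex (the weighted 1-median of the belief).
        total, acc, v = sum(w), 0.0, 0
        for i in range(n):
            acc += w[i]
            if acc >= total / 2:
                v = i
                break
        ans = query(v)
        # Halve the weight of every vertex inconsistent with the answer.
        for i in range(n):
            consistent = ((ans == 'left' and i < v) or
                          (ans == 'right' and i > v) or
                          (ans == 'here' and i == v))
            if not consistent:
                w[i] *= 0.5
    # Return the highest-weight (maximum-likelihood) vertex.
    return max(range(n), key=lambda i: w[i])

random.seed(0)
print(noisy_path_search(1000, target=417))  # prints 417 with high probability

The weighted median generalizes to arbitrary graphs as the weighted 1-median with respect to shortest-path distance, which is the key idea of the framework.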
