no code implementations • 18 Mar 2024 • Alkis Kalavasis, Ilias Zadik, Manolis Zampetakis
We also provide a discrete analogue of our transfer inequality on the Boolean Hypercube $\{-1, 1\}^n$, and study its connections with the recent problem of Generalization on the Unseen of Abbe, Bengio, Lotfi and Rizk (ICML, 2023).
no code implementations • 15 Jun 2022 • Amin Coja-Oghlan, Oliver Gebhard, Max Hahn-Klimroth, Alexander S. Wein, Ilias Zadik
For the Bernoulli design, we determine the precise number of tests required to solve the associated detection problem (where the goal is to distinguish between a group testing instance and pure noise), improving both the upper and lower bounds of Truong, Aldridge, and Scarlett (2020).
no code implementations • 19 May 2022 • Afonso S. Bandeira, Ahmed El Alaoui, Samuel B. Hopkins, Tselil Schramm, Alexander S. Wein, Ilias Zadik
We define a free-energy based criterion for hardness and formally connect it to the well-established notion of low-degree hardness for a broad class of statistical problems, namely all Gaussian additive models and certain models with a sparse planted signal.
no code implementations • 7 Dec 2021 • Ilias Zadik, Min Jae Song, Alexander S. Wein, Joan Bruna
Prior work on many similar inference tasks portends that such lower bounds strongly suggest the presence of an inherent statistical-to-computational gap for clustering, that is, a parameter regime where the clustering task is statistically possible but no polynomial-time algorithm succeeds.
no code implementations • NeurIPS 2021 • Min Jae Song, Ilias Zadik, Joan Bruna
More precisely, our reduction shows that any polynomial-time algorithm (not necessarily gradient-based) for learning such functions under small noise implies a polynomial-time quantum algorithm for solving worst-case lattice problems, whose hardness forms the foundation of lattice-based cryptography.
no code implementations • 2 Mar 2021 • David Gamarnik, Eren C. Kızıldağ, Ilias Zadik
Using a simple covering number argument, we establish that, under quite mild distributional assumptions on the input/label pairs, any such network achieving a small training error on polynomially many data points necessarily has a well-controlled outer norm.
no code implementations • 24 Feb 2021 • Jonathan Niles-Weed, Ilias Zadik
We establish a phase transition known as the "all-or-nothing" phenomenon for noiseless discrete channels.
Statistics Theory • Information Theory • Probability
no code implementations • 18 Jun 2020 • Gérard Ben Arous, Alexander S. Wein, Ilias Zadik
We study a variant of the sparse PCA (principal component analysis) problem in the "hard" regime, where the inference task is possible yet no polynomial-time algorithm is known to exist.
no code implementations • 23 Mar 2020 • Matt Emschwiller, David Gamarnik, Eren C. Kızıldağ, Ilias Zadik
Thus a message implied by our results is that parametrizing wide neural networks by the number of hidden nodes is misleading, and a more fitting measure of parametrization complexity is the number of regression coefficients associated with tensorized data.
no code implementations • 3 Dec 2019 • David Gamarnik, Eren C. Kızıldağ, Ilias Zadik
Next, we show that initializing below this barrier is in fact easily achieved when the weights are randomly generated under relatively weak assumptions.
no code implementations • 24 Oct 2019 • David Gamarnik, Eren C. Kızıldağ, Ilias Zadik
Using a novel combination of the PSLQ integer relation detection and LLL lattice basis reduction algorithms, we propose a polynomial-time algorithm which provably recovers a $\beta^*\in\mathbb{R}^p$ satisfying the mixed-support assumption from its linear measurements $Y=X\beta^*\in\mathbb{R}^n$, for a large class of distributions for the random entries of $X$, even with one measurement $(n=1)$.
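The PSLQ/LLL machinery of the paper achieves this recovery in polynomial time; the following is only a brute-force toy (exponential in $p$, and not the paper's algorithm) illustrating the information-theoretic point that a single generic real-valued measurement already pins down a discrete-valued $\beta^*$. All parameter values are illustrative.

```python
import itertools
import random

random.seed(0)

p = 6   # dimension of beta*
B = 3   # entries of beta* lie in {0, ..., B}

# One generic measurement row (n = 1) and a random discrete beta*.
x = [random.random() for _ in range(p)]
beta_star = [random.randrange(B + 1) for _ in range(p)]
y = sum(xi * bi for xi, bi in zip(x, beta_star))  # single noiseless measurement

# Brute-force search over the discrete support. With probability 1 over x,
# no other candidate reproduces y exactly, so beta* is identified from n = 1.
matches = [b for b in itertools.product(range(B + 1), repeat=p)
           if abs(sum(xi * bi for xi, bi in zip(x, b)) - y) < 1e-12]
```

The point of the lattice-based algorithm is precisely to replace this exponential enumeration with a polynomial-time search.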
no code implementations • 15 Apr 2019 • David Gamarnik, Ilias Zadik
Using the first moment method, we study the densest subgraph problems for subgraphs with fixed, but arbitrary, overlap size with the planted clique, and provide evidence of a phase transition for the presence of Overlap Gap Property (OGP) at $k=\Theta\left(\sqrt{n}\right)$.
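As a hedged illustration of the first moment method in its simplest form (plain $k$-cliques in $G(n, 1/2)$, not the overlap-constrained densest-subgraph counts studied in the paper): the expected number of $k$-cliques is $\binom{n}{k} 2^{-\binom{k}{2}}$, and once this expectation falls below 1, Markov's inequality says no such clique exists with high probability.

```python
import math

def expected_k_cliques(n: int, k: int) -> float:
    # First moment: E[#k-cliques in G(n, 1/2)] = C(n, k) * 2^{-C(k, 2)}.
    return math.comb(n, k) * 2.0 ** (-math.comb(k, 2))

n = 1000
# The expectation crosses 1 near k = 2*log2(n) (~19.9 here): below that
# size cliques abound in expectation, above it the first moment vanishes.
threshold = 2 * math.log2(n)
```

The same counting strategy, applied to pairs of solutions at a fixed overlap, is what detects the Overlap Gap Property.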
no code implementations • NeurIPS 2018 • David Gamarnik, Ilias Zadik
We consider a high dimensional linear regression problem where the goal is to efficiently recover an unknown vector $\beta^*$ from $n$ noisy linear observations $Y=X\beta^*+W \in \mathbb{R}^n$, for known $X \in \mathbb{R}^{n \times p}$ and unknown $W \in \mathbb{R}^n$.
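A minimal sketch of this observation model, assuming for illustration a 1-sparse $\beta^*$ (the paper treats general $k$-sparse vectors, and this per-column least-squares scan is a toy, not the paper's method): generate $Y = X\beta^* + W$ and recover the support by scanning columns.

```python
import random

random.seed(1)
n, p = 50, 10
sigma = 0.1  # noise level of W

# Design X (n x p) with i.i.d. N(0,1) entries; beta* has one nonzero entry.
X = [[random.gauss(0, 1) for _ in range(p)] for _ in range(n)]
j_star, b_star = 3, 2.5
W = [random.gauss(0, sigma) for _ in range(n)]
Y = [b_star * X[i][j_star] + W[i] for i in range(n)]

def fit_column(j):
    # Least squares of Y on column j alone: b = <X_j, Y> / <X_j, X_j>,
    # scored by the residual sum of squares.
    xj = [X[i][j] for i in range(n)]
    b = sum(a * y for a, y in zip(xj, Y)) / sum(a * a for a in xj)
    rss = sum((y - b * a) ** 2 for a, y in zip(xj, Y))
    return rss, b, j

# The true column leaves only the noise energy as residual, so it wins.
rss, b_hat, j_hat = min(fit_column(j) for j in range(p))
```

For general sparsity $k$ this scan becomes a search over supports, and the interesting regimes are exactly those where $n$ is too small for such naive procedures.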
no code implementations • 14 Nov 2017 • David Gamarnik, Ilias Zadik
The presence of such an Overlap Gap Property phase transition, a notion originating in statistical physics, is known to provide evidence of algorithmic hardness.
1 code implementation • ICML 2018 • Lester Mackey, Vasilis Syrgkanis, Ilias Zadik
Double machine learning provides $\sqrt{n}$-consistent estimates of parameters of interest even when high-dimensional or nonparametric nuisance parameters are estimated at an $n^{-1/4}$ rate.
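The paper develops higher-order orthogonal moments; the sketch below shows only the baseline double machine learning recipe it builds on (cross-fitting plus Neyman-orthogonal residual-on-residual regression in a partially linear model), with a deliberately crude binned nuisance estimator whose slow rate is exactly what orthogonality tolerates. Model and parameter choices are illustrative.

```python
import math
import random

random.seed(2)
theta0 = 1.5
N = 4000

# Partially linear model: D = m0(X) + eta,  Y = theta0*D + g0(X) + eps.
X = [random.uniform(0, 1) for _ in range(N)]
D = [math.sin(3 * x) + random.gauss(0, 1) for x in X]          # m0(x) = sin(3x)
Y = [theta0 * d + x * x + random.gauss(0, 1) for x, d in zip(X, D)]  # g0(x) = x^2

def bin_regress(xs, ys, bins=20):
    # Crude nonparametric nuisance estimator: piecewise-constant fit on
    # equal-width bins of [0, 1]; it converges slower than root-n, which
    # is the regime double machine learning is designed to handle.
    sums = [[0.0, 0] for _ in range(bins)]
    for x, y in zip(xs, ys):
        b = min(int(x * bins), bins - 1)
        sums[b][0] += y
        sums[b][1] += 1
    means = [s / c if c else 0.0 for s, c in sums]
    return lambda x: means[min(int(x * bins), bins - 1)]

# Cross-fitting: estimate nuisances on one half, residualize the other half.
half = N // 2
folds = [(range(0, half), range(half, N)), (range(half, N), range(0, half))]
num = den = 0.0
for train, evl in folds:
    m_hat = bin_regress([X[i] for i in train], [D[i] for i in train])  # E[D|X]
    l_hat = bin_regress([X[i] for i in train], [Y[i] for i in train])  # E[Y|X]
    for i in evl:
        d_res = D[i] - m_hat(X[i])
        y_res = Y[i] - l_hat(X[i])
        num += d_res * y_res
        den += d_res * d_res

theta_hat = num / den  # residual-on-residual slope estimates theta0
```

Because the moment is orthogonal to both nuisances, first-order nuisance errors cancel and only their product enters the bias, which is what rescues root-$n$ rates from $n^{-1/4}$ nuisance estimation.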
no code implementations • 16 Jan 2017 • David Gamarnik, Ilias Zadik
c) We establish a certain Overlap Gap Property (OGP) on the space of all binary vectors $\beta$ when $n \le ck\log p$ for a sufficiently small constant $c$. We conjecture that OGP is the source of the algorithmic hardness of solving the minimization problem $\min_{\beta}\|Y-X\beta\|_{2}$ in the regime $n < n_{\text{LASSO/CS}}$.