Search Results for author: Ilias Zadik

Found 16 papers, 1 paper with code

Transfer Learning Beyond Bounded Density Ratios

no code implementations18 Mar 2024 Alkis Kalavasis, Ilias Zadik, Manolis Zampetakis

We also provide a discrete analogue of our transfer inequality on the Boolean Hypercube $\{-1, 1\}^n$, and study its connections with the recent problem of Generalization on the Unseen of Abbe, Bengio, Lotfi and Rizk (ICML, 2023).

In-Context Learning, Out-of-Distribution Generalization +1
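
For context, the title's "bounded density ratios" refers to the classical change-of-measure assumption in transfer learning: if $P$ is the source distribution, $Q$ the target, and $\sup_x \frac{dQ}{dP}(x) \le C < \infty$, then for any hypothesis $h$ and loss $\ell$ one has $\mathbb{E}_{Q}[\ell(h)] = \mathbb{E}_{P}\left[\frac{dQ}{dP}\,\ell(h)\right] \le C\,\mathbb{E}_{P}[\ell(h)]$. This is a standard paraphrase given for orientation, not text from the paper, whose transfer inequality targets settings where no such uniform bound holds.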

Statistical and Computational Phase Transitions in Group Testing

no code implementations15 Jun 2022 Amin Coja-Oghlan, Oliver Gebhard, Max Hahn-Klimroth, Alexander S. Wein, Ilias Zadik

For the Bernoulli design, we determine the precise number of tests required to solve the associated detection problem (where the goal is to distinguish between a group testing instance and pure noise), improving both the upper and lower bounds of Truong, Aldridge, and Scarlett (2020).
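
Below is a toy simulation of the detection problem described in the excerpt above, assuming the standard noiseless setup: $n$ items of which $k$ are infected, each item joining each of $m$ tests independently with probability $p$ (the Bernoulli design), and a test positive iff it contains an infected item. The counting statistic used here (items that never appear in a negative test) is only an illustrative choice, not the paper's optimal test, and the parameter values are arbitrary.

import numpy as np

rng = np.random.default_rng(0)

def covered_item_count(n, k, m, p, planted):
    """Toy statistic: number of items all of whose tests are positive.

    Under the planted model every infected item counts, so the statistic
    tends to be larger than under the null, where test outcomes are i.i.d.
    positives with the matching marginal probability.
    """
    design = rng.random((m, n)) < p                      # test-by-item membership
    if planted:
        status = np.zeros(n, dtype=bool)
        status[rng.choice(n, size=k, replace=False)] = True
        positive = (design & status).any(axis=1)         # positive iff an infected item is present
    else:
        positive = rng.random(m) < 1.0 - (1.0 - p) ** k  # pure noise with matched marginal
    negative_tests = design[~positive]                   # membership rows of negative tests
    return int((~negative_tests.any(axis=0)).sum())      # items never seen in a negative test

n, k, m, p = 2_000, 40, 600, 1.0 / 40                    # illustrative parameters only
print("planted:", covered_item_count(n, k, m, p, True))
print("null   :", covered_item_count(n, k, m, p, False))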

The Franz-Parisi Criterion and Computational Trade-offs in High Dimensional Statistics

no code implementations19 May 2022 Afonso S. Bandeira, Ahmed El Alaoui, Samuel B. Hopkins, Tselil Schramm, Alexander S. Wein, Ilias Zadik

We define a free-energy based criterion for hardness and formally connect it to the well-established notion of low-degree hardness for a broad class of statistical problems, namely all Gaussian additive models and certain models with a sparse planted signal.

Additive models
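
For reference, a Gaussian additive model in this context observes a planted signal corrupted by additive Gaussian noise, $Y = \lambda X^* + Z$, where $X^*$ is drawn from a known prior, $Z$ has i.i.d. standard Gaussian entries, and $\lambda \ge 0$ is the signal-to-noise ratio (the null hypothesis corresponds to $\lambda = 0$). This is a standard formulation given for orientation; the paper's exact scaling conventions may differ.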

Lattice-Based Methods Surpass Sum-of-Squares in Clustering

no code implementations7 Dec 2021 Ilias Zadik, Min Jae Song, Alexander S. Wein, Joan Bruna

Prior work on many similar inference tasks portends that such lower bounds strongly suggest the presence of an inherent statistical-to-computational gap for clustering, that is, a parameter regime where the clustering task is statistically possible but no polynomial-time algorithm succeeds.

Clustering

On the Cryptographic Hardness of Learning Single Periodic Neurons

no code implementations NeurIPS 2021 Min Jae Song, Ilias Zadik, Joan Bruna

More precisely, our reduction shows that any polynomial-time algorithm (not necessarily gradient-based) for learning such functions under small noise implies a polynomial-time quantum algorithm for solving worst-case lattice problems, whose hardness forms the foundation of lattice-based cryptography.

Retrieval
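
A minimal data-generation sketch for the learning problem in the excerpt above, assuming the standard formulation of a single periodic neuron with Gaussian inputs and a cosine activation; the frequency gamma and the noise level below are illustrative values, not taken from the paper.

import numpy as np

rng = np.random.default_rng(1)

def periodic_neuron_data(num_samples, dim, gamma, noise_std):
    """Samples (x_i, y_i) with y_i = cos(2*pi*gamma*<w, x_i>) + noise.

    w is a fixed unit-norm hidden direction; the learner sees only (X, y).
    """
    w = rng.normal(size=dim)
    w /= np.linalg.norm(w)
    X = rng.normal(size=(num_samples, dim))          # standard Gaussian inputs
    y = np.cos(2 * np.pi * gamma * (X @ w)) + noise_std * rng.normal(size=num_samples)
    return X, y, w

X, y, w = periodic_neuron_data(num_samples=1000, dim=50, gamma=4.0, noise_std=0.01)
print(X.shape, y.shape)                              # (1000, 50) (1000,)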

Self-Regularity of Non-Negative Output Weights for Overparameterized Two-Layer Neural Networks

no code implementations2 Mar 2021 David Gamarnik, Eren C. Kızıldağ, Ilias Zadik

Using a simple covering number argument, we establish that, under quite mild distributional assumptions on the input/label pairs, any such network achieving a small training error on polynomially many data points necessarily has a well-controlled outer norm.

Generalization Bounds

It was "all" for "nothing": sharp phase transitions for noiseless discrete channels

no code implementations24 Feb 2021 Jonathan Niles-Weed, Ilias Zadik

We establish a phase transition known as the "all-or-nothing" phenomenon for noiseless discrete channels.

Statistics Theory, Information Theory, Probability

Free Energy Wells and Overlap Gap Property in Sparse PCA

no code implementations18 Jun 2020 Gérard Ben Arous, Alexander S. Wein, Ilias Zadik

We study a variant of the sparse PCA (principal component analysis) problem in the "hard" regime, where the inference task is possible yet no polynomial-time algorithm is known to exist.
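
One standard way to generate an instance of this flavor is a spiked Wigner matrix with a $k$-sparse spike; the sketch below uses that model, but the exact variant studied in the paper may differ, so treat it as illustrative only.

import numpy as np

rng = np.random.default_rng(2)

def sparse_spiked_wigner(n, k, lam):
    """Y = lam * x x^T + W with a k-sparse unit-norm spike x and symmetric Gaussian noise W."""
    x = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)
    x[support] = rng.choice([-1.0, 1.0], size=k) / np.sqrt(k)   # k-sparse, unit norm
    G = rng.normal(size=(n, n)) / np.sqrt(n)
    W = (G + G.T) / np.sqrt(2)                                  # GOE-style noise, entries ~ N(0, 1/n)
    return lam * np.outer(x, x) + W, x

Y, x = sparse_spiked_wigner(n=500, k=20, lam=3.0)
print(Y.shape, np.count_nonzero(x))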

Neural Networks and Polynomial Regression. Demystifying the Overparametrization Phenomena

no code implementations23 Mar 2020 Matt Emschwiller, David Gamarnik, Eren C. Kızıldağ, Ilias Zadik

Thus a message implied by our results is that parametrizing wide neural networks by the number of hidden nodes is misleading, and a more fitting measure of parametrization complexity is the number of regression coefficients associated with tensorized data.

Generalization Bounds, Regression
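
To make the closing remark concrete: the number of regression coefficients for degree-at-most-$L$ tensorized (polynomial) features of a $d$-dimensional input is $\binom{d+L}{L}$. A quick computation, with the dimensions chosen purely for illustration:

from math import comb

def num_poly_coefficients(d, L):
    """Number of monomials of degree at most L in d variables."""
    return comb(d + L, L)

for d in (10, 100, 1000):
    print(d, [num_poly_coefficients(d, L) for L in (1, 2, 3)])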

Stationary Points of Shallow Neural Networks with Quadratic Activation Function

no code implementations3 Dec 2019 David Gamarnik, Eren C. Kızıldağ, Ilias Zadik

Next, we show that initializing below this barrier is in fact easily achieved when the weights are randomly generated under relatively weak assumptions.

Inference in High-Dimensional Linear Regression via Lattice Basis Reduction and Integer Relation Detection

no code implementations24 Oct 2019 David Gamarnik, Eren C. Kızıldağ, Ilias Zadik

Using a novel combination of the PSLQ integer relation detection and LLL lattice basis reduction algorithms, we propose a polynomial-time algorithm which provably recovers a $\beta^*\in\mathbb{R}^p$ enjoying the mixed-support assumption, from its linear measurements $Y=X\beta^*\in\mathbb{R}^n$ for a large class of distributions for the random entries of $X$, even with one measurement $(n=1)$.

Regression, Relation +1
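
Below is a toy illustration of the single-measurement ($n=1$) recovery idea described above, using the PSLQ integer relation routine from mpmath. It only sketches the integer relation step under high-precision arithmetic with binary coefficients; the paper's algorithm also uses LLL basis reduction and covers a broader class of coefficient structures and measurement distributions.

import mpmath as mp

mp.mp.dps = 120                                  # PSLQ needs high working precision

# Toy instance: binary beta*, generic real measurement vector x, one measurement y = <x, beta*>.
p = 12
beta_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]
x = [mp.rand() for _ in range(p)]
y = mp.fsum(xi * bi for xi, bi in zip(x, beta_true))

# PSLQ searches for integers (c0, c1, ..., cp), not all zero, with c0*y + sum_i ci*x_i ~ 0.
# For generic x the shortest such relation is proportional to (-1, beta*), which reveals beta*.
relation = mp.pslq([y] + x, maxcoeff=10**6, maxsteps=10**4)
assert relation is not None and abs(relation[0]) == 1
sign = -relation[0]                              # orient the relation so that y gets coefficient -1
beta_hat = [sign * c for c in relation[1:]]

print("recovered:", beta_hat)
print("matches  :", beta_hat == beta_true)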

The Landscape of the Planted Clique Problem: Dense subgraphs and the Overlap Gap Property

no code implementations15 Apr 2019 David Gamarnik, Ilias Zadik

Using the first moment method, we study the densest subgraph problems for subgraphs with fixed, but arbitrary, overlap size with the planted clique, and provide evidence of a phase transition for the presence of Overlap Gap Property (OGP) at $k=\Theta\left(\sqrt{n}\right)$.

Learning Theory

High Dimensional Linear Regression using Lattice Basis Reduction

no code implementations NeurIPS 2018 David Gamarnik, Ilias Zadik

We consider a high dimensional linear regression problem where the goal is to efficiently recover an unknown vector $\beta^*$ from $n$ noisy linear observations $Y=X\beta^*+W \in \mathbb{R}^n$, for known $X \in \mathbb{R}^{n \times p}$ and unknown $W \in \mathbb{R}^n$.

Regression, Vocal Bursts Intensity Prediction

Sparse High-Dimensional Linear Regression. Algorithmic Barriers and a Local Search Algorithm

no code implementations14 Nov 2017 David Gamarnik, Ilias Zadik

The presence of such an Overlap Gap Property phase transition, which originates in statistical physics, is known to provide evidence of algorithmic hardness.

Denoising, Regression

Orthogonal Machine Learning: Power and Limitations

1 code implementation ICML 2018 Lester Mackey, Vasilis Syrgkanis, Ilias Zadik

Double machine learning provides $\sqrt{n}$-consistent estimates of parameters of interest even when high-dimensional or nonparametric nuisance parameters are estimated at an $n^{-1/4}$ rate.

BIG-bench Machine Learning +2
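
A compact sketch of the double/orthogonal machine learning recipe the excerpt refers to, for the partially linear model $Y = \theta_0 D + g(X) + \varepsilon$ with $D = m(X) + \eta$: fit the nuisances $\mathbb{E}[Y|X]$ and $\mathbb{E}[D|X]$ with flexible learners under cross-fitting, then regress residuals on residuals. The data-generating process and the random-forest nuisance learners below are illustrative choices, not taken from the paper.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(3)

# Partially linear model: Y = theta0 * D + g(X) + eps,  D = m(X) + eta.
n, d, theta0 = 2000, 5, 1.5
X = rng.normal(size=(n, d))
D = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(size=n)
Y = theta0 * D + np.cos(X[:, 0]) + X[:, 1] ** 2 + rng.normal(size=n)

# Cross-fitted residual-on-residual (Neyman-orthogonal) estimate of theta0.
res_Y, res_D = np.zeros(n), np.zeros(n)
for train, test in KFold(n_splits=2, shuffle=True, random_state=0).split(X):
    model_Y = RandomForestRegressor(n_estimators=100, random_state=0).fit(X[train], Y[train])
    model_D = RandomForestRegressor(n_estimators=100, random_state=0).fit(X[train], D[train])
    res_Y[test] = Y[test] - model_Y.predict(X[test])
    res_D[test] = D[test] - model_D.predict(X[test])

theta_hat = (res_D @ res_Y) / (res_D @ res_D)
print("theta_hat:", round(float(theta_hat), 3), " (true value:", theta0, ")")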

High-Dimensional Regression with Binary Coefficients. Estimating Squared Error and a Phase Transition

no code implementations16 Jan 2017 David Gamarnik, Ilias Zadik

c) We establish a certain Overlap Gap Property (OGP) on the space of all binary vectors $\beta$ when $n \le ck\log p$ for a sufficiently small constant $c$. We conjecture that OGP is the source of algorithmic hardness of solving the minimization problem $\min_{\beta}\|Y-X\beta\|_{2}$ in the regime $n < n_{\text{LASSO/CS}}$.

Regression
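
To make the minimization problem in the excerpt concrete, here is a brute-force evaluation of $\min_{\beta}\|Y-X\beta\|_{2}$ over binary vectors $\beta$ on a tiny instance. Exhaustive search is feasible only at toy sizes and merely illustrates the objective; it is not an algorithm for the regime the paper studies.

import itertools
import numpy as np

rng = np.random.default_rng(4)

n, p, k = 8, 12, 3                               # toy sizes; the paper's regime has far larger p
beta_true = np.zeros(p)
beta_true[rng.choice(p, size=k, replace=False)] = 1.0
X = rng.normal(size=(n, p))
Y = X @ beta_true + 0.1 * rng.normal(size=n)     # noisy observations

best_val, best_beta = np.inf, None
for bits in itertools.product([0.0, 1.0], repeat=p):
    beta = np.array(bits)
    val = np.linalg.norm(Y - X @ beta)
    if val < best_val:
        best_val, best_beta = val, beta

print("objective at minimizer:", round(float(best_val), 4))
print("recovered the planted beta:", bool((best_beta == beta_true).all()))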
