Search Results for author: David Gamarnik

Found 18 papers, 0 papers with code

Barriers for the performance of graph neural networks (GNN) in discrete random structures. A comment on \cite{schuetz2022combinatorial}, \cite{angelini2023modern}, \cite{schuetz2023reply}

no code implementations · 5 Jun 2023 · David Gamarnik

In particular, the critical commentaries \cite{angelini2023modern} and \cite{boettcher2023inability} point out that a simple greedy algorithm performs better than the GNN in the setting of random graphs, and that in fact stronger algorithmic performance can be reached with more sophisticated methods.

Combinatorial Optimization
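
To make the greedy baseline mentioned above concrete, here is a minimal sketch of a degree-greedy independent set heuristic on an Erdős–Rényi random graph. It assumes networkx is available and only illustrates the kind of simple algorithm the commentaries compare against, not the GNN pipeline of the cited works.

```python
# Minimal sketch: greedy independent set heuristic on a random graph.
# This is the simple baseline the commentaries refer to, not the GNN approach.
import networkx as nx

def greedy_independent_set(G):
    """Repeatedly pick a minimum-degree vertex, add it, and remove its neighborhood."""
    H = G.copy()
    independent = set()
    while H.number_of_nodes() > 0:
        v = min(H.nodes, key=H.degree)      # lowest-degree vertex first
        independent.add(v)
        H.remove_nodes_from(list(H.neighbors(v)) + [v])
    return independent

if __name__ == "__main__":
    G = nx.erdos_renyi_graph(n=1000, p=0.01, seed=0)   # sparse random graph
    I = greedy_independent_set(G)
    assert all(not G.has_edge(u, v) for u in I for v in I if u != v)
    print(f"greedy independent set size: {len(I)} out of {G.number_of_nodes()} nodes")
```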

Self-Regularity of Non-Negative Output Weights for Overparameterized Two-Layer Neural Networks

no code implementations · 2 Mar 2021 · David Gamarnik, Eren C. Kızıldağ, Ilias Zadik

Using a simple covering number argument, we establish that, under quite mild distributional assumptions on the input/label pairs, any such network achieving a small training error on polynomially many data necessarily has a well-controlled outer norm.

Generalization Bounds

Hardness of Random Optimization Problems for Boolean Circuits, Low-Degree Polynomials, and Langevin Dynamics

no code implementations · 25 Apr 2020 · David Gamarnik, Aukosh Jagannath, Alexander S. Wein

For the case of Boolean circuits, our results improve the state-of-the-art bounds known in circuit complexity theory (although we consider the search problem as opposed to the decision problem).

Neural Networks and Polynomial Regression. Demystifying the Overparametrization Phenomena

no code implementations · 23 Mar 2020 · Matt Emschwiller, David Gamarnik, Eren C. Kızıldağ, Ilias Zadik

Thus a message implied by our results is that parametrizing wide neural networks by the number of hidden nodes is misleading, and a more fitting measure of parametrization complexity is the number of regression coefficients associated with tensorized data.

Generalization Bounds, Regression
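
To illustrate the parameter-counting point, here is a small sketch, under the assumption that "tensorized data" refers to degree-bounded monomial features, that counts the regression coefficients of such a polynomial fit and compares them with the raw parameter count of a width-m two-layer network. The dimensions and synthetic data below are hypothetical.

```python
# Illustration only: counting regression coefficients for tensorized (polynomial)
# features versus parameters of a width-m two-layer network. My own sketch of the
# parameter-counting point, not the paper's construction.
from math import comb

import numpy as np
from sklearn.preprocessing import PolynomialFeatures

d, degree, m = 10, 3, 500          # input dimension, polynomial degree, hidden width

# Number of monomials of degree <= `degree` in d variables = number of regression
# coefficients once the data is tensorized.
n_coeffs = comb(d + degree, degree)
n_two_layer_params = m * (d + 1) + m + 1   # hidden weights + biases, then output layer
print(f"tensorized regression coefficients: {n_coeffs}")
print(f"two-layer network parameters (width {m}): {n_two_layer_params}")

# Fitting the tensorized regression by ordinary least squares on synthetic data.
rng = np.random.default_rng(0)
X = rng.standard_normal((2000, d))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2] + 0.01 * rng.standard_normal(2000)
Phi = PolynomialFeatures(degree=degree).fit_transform(X)   # tensorized features
beta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print(f"fitted {beta.size} coefficients; train RMSE = "
      f"{np.sqrt(np.mean((Phi @ beta - y) ** 2)):.4f}")
```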

Stationary Points of Shallow Neural Networks with Quadratic Activation Function

no code implementations · 3 Dec 2019 · David Gamarnik, Eren C. Kızıldağ, Ilias Zadik

Next, we show that initializing below this barrier is in fact easily achieved when the weights are randomly generated under relatively weak assumptions.

Sparse High-Dimensional Isotonic Regression

no code implementations · NeurIPS 2019 · David Gamarnik, Julia Gaudio

We consider the problem of estimating an unknown coordinate-wise monotone function given noisy measurements, known as the isotonic regression problem.

Regression
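
For reference, a minimal sketch of the classical one-dimensional isotonic least-squares fit using scikit-learn's IsotonicRegression. The paper studies the harder sparse, high-dimensional, coordinate-wise monotone setting, so this only illustrates the basic problem, not the paper's estimator.

```python
# Minimal illustration of the basic isotonic regression problem in one dimension:
# fit the best monotone (non-decreasing) function to noisy measurements.
# The paper's setting is sparse, high-dimensional, and coordinate-wise monotone;
# this sketch only shows the classical 1-D case via scikit-learn.
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, size=200))
f = np.sqrt(x)                              # unknown monotone function
y = f + 0.1 * rng.standard_normal(x.size)   # noisy measurements

iso = IsotonicRegression(increasing=True)
f_hat = iso.fit_transform(x, y)             # pool-adjacent-violators fit

print(f"mean squared error of isotonic fit: {np.mean((f_hat - f) ** 2):.5f}")
```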

Inference in High-Dimensional Linear Regression via Lattice Basis Reduction and Integer Relation Detection

no code implementations · 24 Oct 2019 · David Gamarnik, Eren C. Kızıldağ, Ilias Zadik

Using a novel combination of the PSLQ integer relation detection, and LLL lattice basis reduction algorithms, we propose a polynomial-time algorithm which provably recovers a $\beta^*\in\mathbb{R}^p$ enjoying the mixed-support assumption, from its linear measurements $Y=X\beta^*\in\mathbb{R}^n$ for a large class of distributions for the random entries of $X$, even with one measurement $(n=1)$.

Regression
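
A rough sketch of the lattice-reduction idea, assuming the fpylll package is available: recover a binary vector from a single exact linear measurement by embedding the problem into a Lagarias–Odlyzko-style lattice and running LLL. This toy construction only conveys the flavor of the approach; the paper's actual algorithm combines PSLQ integer relation detection with LLL and handles continuous measurement distributions.

```python
# Toy illustration (not the paper's algorithm): recover a binary beta from a single
# exact measurement y = <x, beta> with huge integer weights, by LLL reduction of a
# Lagarias-Odlyzko-style lattice. Assumes the fpylll package is installed.
import random
from fpylll import IntegerMatrix, LLL

random.seed(0)
p = 12
x = [random.randrange(10**12, 10**13) for _ in range(p)]   # one row of measurements
beta = [random.randrange(2) for _ in range(p)]             # hidden binary signal
y = sum(xi * bi for xi, bi in zip(x, beta))                # single measurement (n = 1)

N = 10**6   # scaling that forces short lattice vectors to have last coordinate 0
rows = [[1 if j == i else 0 for j in range(p)] + [N * x[i]] for i in range(p)]
rows.append([0] * p + [N * y])

B = IntegerMatrix.from_matrix(rows)
LLL.reduction(B)                                           # in-place LLL reduction

# A short lattice vector of the form (+-beta, 0) encodes the hidden signal.
for i in range(B.nrows):
    v = [B[i, j] for j in range(p + 1)]
    if v[-1] == 0 and set(v[:-1]) <= {0, 1, -1} and any(v[:-1]):
        cand = [abs(c) for c in v[:-1]]
        print("recovered:", cand, "correct:", cand == beta)
        break
```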

The Landscape of the Planted Clique Problem: Dense subgraphs and the Overlap Gap Property

no code implementations · 15 Apr 2019 · David Gamarnik, Ilias Zadik

Using the first moment method, we study the densest subgraph problems for subgraphs with fixed, but arbitrary, overlap size with the planted clique, and provide evidence of a phase transition for the presence of Overlap Gap Property (OGP) at $k=\Theta\left(\sqrt{n}\right)$.

Learning Theory

High Dimensional Linear Regression using Lattice Basis Reduction

no code implementations · NeurIPS 2018 · David Gamarnik, Ilias Zadik

We consider a high dimensional linear regression problem where the goal is to efficiently recover an unknown vector $\beta^*$ from $n$ noisy linear observations $Y=X\beta^*+W \in \mathbb{R}^n$, for known $X \in \mathbb{R}^{n \times p}$ and unknown $W \in \mathbb{R}^n$.

Regression

Sparse High-Dimensional Linear Regression. Algorithmic Barriers and a Local Search Algorithm

no code implementations · 14 Nov 2017 · David Gamarnik, Ilias Zadik

The presence of such an Overlap Gap Property phase transition, a notion originating in statistical physics, is known to provide evidence of algorithmic hardness.

Denoising, Regression
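
A minimal sketch of a support-swap local search in the spirit suggested by the title: keep a size-k support, refit by least squares, and accept single-coordinate swaps that reduce the residual. This is a simplified illustration, not necessarily the exact procedure analyzed in the paper.

```python
# Simplified sketch of local search over supports for sparse linear regression:
# keep a size-k support S, refit by least squares on S, and accept single-coordinate
# swaps that reduce the residual. Illustrative only; not the paper's exact algorithm.
import numpy as np

def residual(X, y, S):
    """Least-squares residual norm restricted to support S."""
    coef, *_ = np.linalg.lstsq(X[:, S], y, rcond=None)
    return np.linalg.norm(y - X[:, S] @ coef)

def local_search(X, y, k, seed=0):
    rng = np.random.default_rng(seed)
    p = X.shape[1]
    S = [int(j) for j in rng.choice(p, size=k, replace=False)]   # random initial support
    best = residual(X, y, S)
    improved = True
    while improved:
        improved = False
        for i_out in list(S):
            for i_in in range(p):
                if i_in in S:
                    continue
                T = [i_in if j == i_out else j for j in S]
                r = residual(X, y, T)
                if r < best - 1e-10:
                    S, best, improved = T, r, True
    return sorted(int(j) for j in S), best

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, p, k = 200, 50, 5
    X = rng.standard_normal((n, p))
    support = rng.choice(p, size=k, replace=False)
    beta = np.zeros(p); beta[support] = 1.0
    y = X @ beta + 0.01 * rng.standard_normal(n)
    S_hat, res = local_search(X, y, k)
    print("true support:", sorted(support.tolist()), "recovered:", S_hat)
```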

Matrix Completion from $O(n)$ Samples in Linear Time

no code implementations · 8 Feb 2017 · David Gamarnik, Quan Li, Hongyi Zhang

Under a certain incoherence assumption on $M$ and for the case when both the rank and the condition number of $M$ are bounded, it was shown in \cite{CandesRecht2009, CandesTao2010, keshavan2010, Recht2011, Jain2012, Hardt2014} that $M$ can be recovered exactly or approximately (depending on some trade-off between accuracy and computational complexity) using $O(n \, \text{poly}(\log n))$ samples in super-linear time $O(n^{a} \, \text{poly}(\log n))$ for some constant $a \geq 1$.

Matrix Completion

High-Dimensional Regression with Binary Coefficients. Estimating Squared Error and a Phase Transition

no code implementations · 16 Jan 2017 · David Gamarnik, Ilias Zadik

c) We establish a certain Overlap Gap Property (OGP) on the space of all binary vectors $\beta$ when $n \le ck\log p$ for a sufficiently small constant $c$. We conjecture that OGP is the source of the algorithmic hardness of solving the minimization problem $\min_{\beta}\|Y-X\beta\|_{2}$ in the regime $n < n_{\text{LASSO/CS}}$.

Regression

A Message Passing Algorithm for the Problem of Path Packing in Graphs

no code implementations · 18 Mar 2016 · Patrick Eschenfeldt, David Gamarnik

We consider the problem of packing node-disjoint directed paths in a directed graph.

A Note on Alternating Minimization Algorithm for the Matrix Completion Problem

no code implementations · 5 Feb 2016 · David Gamarnik, Sidhant Misra

We consider the problem of reconstructing a low rank matrix from a subset of its entries and analyze two variants of the so-called Alternating Minimization algorithm, which has been proposed in the past.

Matrix Completion
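
For reference, a bare-bones numpy sketch of alternating minimization for matrix completion: alternately solve least-squares problems for the row and column factors on the observed entries. This is the generic textbook scheme, not necessarily either of the two variants analyzed in the paper.

```python
# Bare-bones alternating minimization for matrix completion: given observed entries
# of a low-rank matrix M, alternately solve least-squares problems for the factors
# U (rows) and V (columns). Generic textbook version, for illustration only.
import numpy as np

def alt_min(M_obs, mask, rank, n_iters=50, seed=0):
    rng = np.random.default_rng(seed)
    n, m = M_obs.shape
    U = rng.standard_normal((n, rank))
    V = rng.standard_normal((m, rank))
    for _ in range(n_iters):
        # Update each row of U using only that row's observed columns.
        for i in range(n):
            cols = mask[i]
            if cols.any():
                U[i], *_ = np.linalg.lstsq(V[cols], M_obs[i, cols], rcond=None)
        # Update each column factor (row of V) symmetrically.
        for j in range(m):
            rows = mask[:, j]
            if rows.any():
                V[j], *_ = np.linalg.lstsq(U[rows], M_obs[rows, j], rcond=None)
    return U @ V.T

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, m, r = 100, 80, 3
    M = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))   # rank-r ground truth
    mask = rng.random((n, m)) < 0.3                                  # observe ~30% of entries
    M_obs = np.where(mask, M, 0.0)
    M_hat = alt_min(M_obs, mask, rank=r)
    rel_err = np.linalg.norm(M_hat - M) / np.linalg.norm(M)
    print(f"relative reconstruction error: {rel_err:.4f}")
```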

Structure learning of antiferromagnetic Ising models

no code implementations · NeurIPS 2014 · Guy Bresler, David Gamarnik, Devavrat Shah

In this paper we investigate the computational complexity of learning the graph structure underlying a discrete undirected graphical model from i.i.d. samples.

Learning graphical models from the Glauber dynamics

no code implementations · 28 Oct 2014 · Guy Bresler, David Gamarnik, Devavrat Shah

In this paper we consider the problem of learning undirected graphical models from data generated according to the Glauber dynamics.
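
For concreteness, a short numpy sketch of the data-generating process assumed here: single-site Glauber dynamics for an Ising model on a given coupling graph. The structure-learning procedure itself is not reproduced; the couplings and parameters below are hypothetical.

```python
# Sketch of Glauber dynamics for an Ising model on a graph: at each step, pick a
# uniformly random node and resample its spin from its conditional distribution
# given the neighbors. These trajectories are the data the paper assumes as input;
# the structure-learning procedure itself is not shown here.
import numpy as np

def glauber_dynamics(J, n_steps, beta=1.0, seed=0):
    """J: symmetric (p x p) coupling matrix with zero diagonal (the graph)."""
    rng = np.random.default_rng(seed)
    p = J.shape[0]
    spins = rng.choice([-1, 1], size=p)
    trajectory = []
    for _ in range(n_steps):
        i = rng.integers(p)                           # pick a random site
        field = beta * J[i] @ spins                   # local field from neighbors
        p_plus = 1.0 / (1.0 + np.exp(-2.0 * field))   # P(spin_i = +1 | rest)
        spins[i] = 1 if rng.random() < p_plus else -1
        trajectory.append(spins.copy())
    return np.array(trajectory)

if __name__ == "__main__":
    p = 10
    rng = np.random.default_rng(1)
    # Random sparse antiferromagnetic couplings (negative J on a few edges).
    J = np.zeros((p, p))
    for _ in range(15):
        i, j = rng.choice(p, size=2, replace=False)
        J[i, j] = J[j, i] = -0.5
    samples = glauber_dynamics(J, n_steps=5000)
    print("trajectory shape:", samples.shape, "mean magnetization:", samples.mean())
```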

Hardness of parameter estimation in graphical models

no code implementations · NeurIPS 2014 · Guy Bresler, David Gamarnik, Devavrat Shah

Our proof gives a polynomial time reduction from approximating the partition function of the hard-core model, known to be hard, to learning approximate parameters.

Performance of the Survey Propagation-guided decimation algorithm for the random NAE-K-SAT problem

no code implementations · 1 Feb 2014 · David Gamarnik, Madhu Sudan

We show that the Survey Propagation-guided decimation algorithm fails to find satisfying assignments on random instances of the "Not-All-Equal-$K$-SAT" problem if the number of message passing iterations is bounded by a constant independent of the size of the instance and the clause-to-variable ratio is above $(1+o_K(1)){2^{K-1}\over K}\log^2 K$ for sufficiently large $K$.

Clustering
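
To pin down the underlying problem, here is a small self-contained sketch that generates a random NAE-K-SAT instance and counts how many clauses a given assignment satisfies in the not-all-equal sense. It does not implement Survey Propagation-guided decimation, and the instance sizes are arbitrary.

```python
# Sketch: generate a random NAE-K-SAT instance and count the clauses that a given
# assignment satisfies in the "not-all-equal" sense (each clause must contain at
# least one true and at least one false literal). Not the SP-guided decimation
# algorithm studied in the paper.
import random

def random_nae_ksat(n_vars, n_clauses, k, seed=0):
    """Clauses are lists of signed variable indices, e.g. [3, -7, 12, ...]."""
    rng = random.Random(seed)
    return [
        [rng.choice([+1, -1]) * v for v in rng.sample(range(1, n_vars + 1), k)]
        for _ in range(n_clauses)
    ]

def nae_satisfied_count(clauses, assignment):
    """assignment: dict var -> bool. Count clauses whose literals are not all equal."""
    count = 0
    for clause in clauses:
        values = [assignment[abs(lit)] == (lit > 0) for lit in clause]
        if any(values) and not all(values):
            count += 1
    return count

if __name__ == "__main__":
    random.seed(1)
    n_vars, k, n_clauses = 50, 5, 200
    clauses = random_nae_ksat(n_vars, n_clauses, k)
    assignment = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
    sat = nae_satisfied_count(clauses, assignment)
    print(f"random assignment NAE-satisfies {sat}/{n_clauses} clauses")
```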
