Search Results for author: Guy Bresler

Found 26 papers, 0 papers with code

The staircase property: How hierarchical structure can guide deep learning

no code implementations NeurIPS 2021 Emmanuel Abbe, Enric Boix-Adsera, Matthew Brennan, Guy Bresler, Dheeraj Nagaraj

This paper identifies a structural property of data distributions that enables deep neural networks to learn hierarchically.

Chow-Liu++: Optimal Prediction-Centric Learning of Tree Ising Models

no code implementations 7 Jun 2021 Enric Boix-Adsera, Guy Bresler, Frederic Koehler

In this paper, we introduce a new algorithm that carefully combines elements of the Chow-Liu algorithm with tree metric reconstruction methods to efficiently and optimally learn tree Ising models under a prediction-centric loss.
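A minimal sketch of the classical Chow-Liu step that the paper builds on: fit a maximum-weight spanning tree over empirical pairwise mutual informations. This is only one ingredient; the full Chow-Liu++ algorithm also uses tree-metric reconstruction, which is not reproduced here, and all function names below are our own.

```python
import numpy as np

def chow_liu_tree(samples):
    """Classical Chow-Liu: max-weight spanning tree over empirical
    pairwise mutual informations.

    samples: (n, p) array of {-1, +1} spins. Returns a list of tree edges.
    """
    n, p = samples.shape

    def mutual_info(i, j):
        # Empirical mutual information for a pair of binary variables.
        mi = 0.0
        for a in (-1, 1):
            for b in (-1, 1):
                pab = np.mean((samples[:, i] == a) & (samples[:, j] == b))
                pa = np.mean(samples[:, i] == a)
                pb = np.mean(samples[:, j] == b)
                if pab > 0:
                    mi += pab * np.log(pab / (pa * pb))
        return mi

    # Prim's algorithm for the maximum-weight spanning tree.
    in_tree = {0}
    edges = []
    while len(in_tree) < p:
        best = max(((i, j) for i in in_tree
                    for j in range(p) if j not in in_tree),
                   key=lambda e: mutual_info(*e))
        edges.append(best)
        in_tree.add(best[1])
    return edges
```

On samples from a Markov chain of spins (a path-structured tree), the adjacent-pair mutual informations dominate, so the recovered tree is the path.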

The Algorithmic Phase Transition of Random $k$-SAT for Low Degree Polynomials

no code implementations 3 Jun 2021 Guy Bresler, Brice Huang

We prove that the class of low degree polynomial algorithms cannot find a satisfying assignment at clause density $(1 + o_k(1)) \kappa^* 2^k \log k / k$ for a universal constant $\kappa^* \approx 4.911$.
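For context, the clause density is the ratio of clauses to variables. A small self-contained sketch (function names are our own) that samples a random $k$-SAT formula at a given density and checks whether an assignment satisfies it:

```python
import random

def random_ksat(n_vars, k, alpha, rng):
    """Sample a random k-SAT formula at clause density alpha = m / n.

    Each clause picks k distinct variables uniformly at random and
    negates each one independently. Literals are +/-(i) for variable i.
    """
    m = int(alpha * n_vars)
    formula = []
    for _ in range(m):
        variables = rng.sample(range(1, n_vars + 1), k)
        formula.append([v if rng.random() < 0.5 else -v for v in variables])
    return formula

def satisfies(assignment, formula):
    """assignment: dict var -> bool. True iff every clause contains
    at least one true literal."""
    return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in formula)
```

The algorithmic question the paper studies is at which densities $\alpha$ efficient (here, low degree polynomial) algorithms can still produce a satisfying assignment.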

The EM Algorithm is Adaptively-Optimal for Unbalanced Symmetric Gaussian Mixtures

no code implementations 29 Mar 2021 Nir Weinberger, Guy Bresler

For the empirical iteration based on $n$ samples, we show that when initialized at $\theta_{0}=0$, the EM algorithm adaptively achieves the minimax error rate $\tilde{O}\Big(\min\Big\{\frac{1}{(1-2\delta_{*})}\sqrt{\frac{d}{n}},\frac{1}{\|\theta_{*}\|}\sqrt{\frac{d}{n}},\left(\frac{d}{n}\right)^{1/4}\Big\}\Big)$ in no more than $O\Big(\frac{1}{\|\theta_{*}\|(1-2\delta_{*})}\Big)$ iterations (with high probability).
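The iteration itself is simple to state. Below is a minimal sketch of EM for the unbalanced symmetric mixture $(1-\delta)\,N(\theta, I_d) + \delta\, N(-\theta, I_d)$ with known $\delta \neq 1/2$, initialized at $\theta_0 = 0$ as in the paper; variable names are illustrative and the finite-sample analysis is of course not captured here.

```python
import numpy as np

def em_symmetric_mixture(x, delta, n_iters=50):
    """EM for the mixture (1-delta) N(theta, I) + delta N(-theta, I),
    with known mixing weight delta, initialized at theta = 0.

    x: (n, d) array of samples. Returns the estimate of theta.
    """
    d = x.shape[1]
    theta = np.zeros(d)
    bias = np.log((1 - delta) / delta)
    for _ in range(n_iters):
        # E-step: posterior probability that each sample came from +theta.
        gamma = 1.0 / (1.0 + np.exp(-(2.0 * x @ theta + bias)))
        # M-step: signed average of the samples.
        theta = ((2.0 * gamma - 1.0)[:, None] * x).mean(axis=0)
    return theta
```

Note that for $\delta \neq 1/2$ the origin is not a fixed point of this map (the first step already moves in the direction of $(1-2\delta)\,\bar{x}$), which is why initialization at zero suffices in the unbalanced case.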

Statistical Query Algorithms and Low-Degree Tests Are Almost Equivalent

no code implementations 13 Sep 2020 Matthew Brennan, Guy Bresler, Samuel B. Hopkins, Jerry Li, Tselil Schramm

Researchers currently use a number of approaches to predict and substantiate information-computation gaps in high-dimensional statistical estimation problems.

Tasks: Two-sample testing

Least Squares Regression with Markovian Data: Fundamental Limits and Algorithms

no code implementations NeurIPS 2020 Guy Bresler, Prateek Jain, Dheeraj Nagaraj, Praneeth Netrapalli, Xian Wu

Our improved rate serves as one of the first results where an algorithm outperforms SGD-DD on an interesting Markov chain and also provides one of the first theoretical analyses to support the use of experience replay in practice.

Learning Restricted Boltzmann Machines with Sparse Latent Variables

no code implementations NeurIPS 2020 Guy Bresler, Rares-Darius Buhai

In this paper, we give an algorithm for learning general RBMs with time complexity $\tilde{O}(n^{2^s+1})$, where $s$ is the maximum number of latent variables connected to the MRF neighborhood of an observed variable.

Sharp Representation Theorems for ReLU Networks with Precise Dependence on Depth

no code implementations NeurIPS 2020 Guy Bresler, Dheeraj Nagaraj

For each $D$, $\mathcal{G}_{D} \subseteq \mathcal{G}_{D+1}$ and as $D$ grows the class of functions $\mathcal{G}_{D}$ contains progressively less smooth functions.

Reducibility and Statistical-Computational Gaps from Secret Leakage

no code implementations 16 May 2020 Matthew Brennan, Guy Bresler

Inference problems with conjectured statistical-computational gaps are ubiquitous throughout modern statistics, computer science and statistical physics.

A Corrective View of Neural Networks: Representation, Memorization and Learning

no code implementations 1 Feb 2020 Guy Bresler, Dheeraj Nagaraj

This technique yields several new representation and learning results for neural networks.

Average-Case Lower Bounds for Learning Sparse Mixtures, Robust Estimation and Semirandom Adversaries

no code implementations 8 Aug 2019 Matthew Brennan, Guy Bresler

This paper develops several average-case reduction techniques to show new hardness results for three central high-dimensional statistics problems, implying a statistical-computational gap induced by robustness, a detection-recovery gap and a universality principle for these gaps.

Optimal Average-Case Reductions to Sparse PCA: From Weak Assumptions to Strong Hardness

no code implementations 20 Feb 2019 Matthew Brennan, Guy Bresler

We also show the surprising result that weaker forms of the PC conjecture up to clique size $K = o(N^\alpha)$ for any given $\alpha \in (0, 1/2]$ imply tight computational lower bounds for sparse PCA at sparsities $k = o(n^{\alpha/3})$.

Universality of Computational Lower Bounds for Submatrix Detection

no code implementations 19 Feb 2019 Matthew Brennan, Guy Bresler, Wasim Huleihel

In the general submatrix detection problem, the task is to detect the presence of a small $k \times k$ submatrix with entries sampled from a distribution $\mathcal{P}$ in an $n \times n$ matrix of samples from $\mathcal{Q}$.

Tasks: Community Detection, Two-sample testing
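The detection instance is easy to simulate. A hedged sketch taking Gaussian $\mathcal{P} = N(\mu, 1)$ and $\mathcal{Q} = N(0, 1)$ as a concrete case (function names are our own), together with a naive total-sum test that only succeeds when the planted signal is far above the conjectured computational threshold:

```python
import numpy as np

def planted_submatrix(n, k, mu, rng):
    """Draw an n x n matrix of N(0,1) noise and shift a hidden k x k
    block of entries to mean mu (P = N(mu,1) inside, Q = N(0,1) outside)."""
    x = rng.standard_normal((n, n))
    rows = rng.choice(n, size=k, replace=False)
    cols = rng.choice(n, size=k, replace=False)
    x[np.ix_(rows, cols)] += mu
    return x, rows, cols

def sum_test(x, threshold):
    """Naive polynomial-time detector: threshold the total sum.
    Only works when k^2 * mu dominates the n-scale noise, i.e. well
    above the regimes where the problem is conjectured to be hard."""
    return x.sum() > threshold
```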

Sparse PCA from Sparse Linear Regression

no code implementations NeurIPS 2018 Guy Bresler, Sung Min Park, Madalina Persu

Sparse Principal Component Analysis (SPCA) and Sparse Linear Regression (SLR) have a wide range of applications and have attracted a tremendous amount of attention in the last two decades as canonical examples of statistical problems in high dimension.

Learning Restricted Boltzmann Machines via Influence Maximization

no code implementations 25 May 2018 Guy Bresler, Frederic Koehler, Ankur Moitra, Elchanan Mossel

This hardness result is based on a sharp and surprising characterization of the representational power of bounded degree RBMs: the distribution on their observed variables can simulate any bounded order MRF.

Tasks: Collaborative Filtering, Dimensionality Reduction

Information Storage in the Stochastic Ising Model

no code implementations 8 May 2018 Ziv Goldfeld, Guy Bresler, Yury Polyanskiy

We first show that at zero temperature, order of $\sqrt{n}$ bits can be stored in the system indefinitely by coding over stable, striped configurations.

Tasks: Information Theory, Statistical Mechanics
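An illustrative toy encoding (not the paper's exact coding scheme): width-2 monochromatic stripes are stable under zero-temperature dynamics, since every spin in such a stripe agrees with a strict majority of its grid neighbors. A $2m \times 2m$ grid of $n = 4m^2$ spins can therefore hold $m = \sqrt{n}/2$ bits, i.e. order $\sqrt{n}$ bits.

```python
import numpy as np

def encode_stripes(bits):
    """Encode a bit string into a striped +/-1 spin configuration on a
    (2*len(bits)) x (2*len(bits)) grid: bit i sets the two rows 2i, 2i+1.
    Each width-2 monochromatic stripe is stable under zero-temperature
    Glauber dynamics."""
    m = len(bits)
    grid = np.empty((2 * m, 2 * m), dtype=int)
    for i, b in enumerate(bits):
        grid[2 * i:2 * i + 2, :] = 1 if b else -1
    return grid

def decode_stripes(grid):
    """Read each stripe's sign back off the grid."""
    return [int(grid[2 * i, 0] == 1) for i in range(grid.shape[0] // 2)]
```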

Optimal Single Sample Tests for Structured versus Unstructured Network Data

no code implementations 17 Feb 2018 Guy Bresler, Dheeraj Nagaraj

We develop a new approach that applies to both the Ising and Exponential Random Graph settings based on a general and natural statistical test.

Regret Bounds and Regimes of Optimality for User-User and Item-Item Collaborative Filtering

no code implementations 6 Nov 2017 Guy Bresler, Mina Karzand

We assume that the matrix encoding the preferences of each user type for each item type is randomly generated; in this way, the model captures structure in both the item and user spaces, the amount of structure depending on the number of each of the types.

Tasks: Collaborative Filtering, Recommendation Systems

Learning a Tree-Structured Ising Model in Order to Make Predictions

no code implementations 22 Apr 2016 Guy Bresler, Mina Karzand

We study the problem of learning a tree Ising model from samples such that subsequent predictions made using the model are accurate.

Regret Guarantees for Item-Item Collaborative Filtering

no code implementations 20 Jul 2015 Guy Bresler, Devavrat Shah, Luis F. Voloch

There is much empirical evidence that item-item collaborative filtering works well in practice.

Tasks: Collaborative Filtering, Matrix Completion

Structure learning of antiferromagnetic Ising models

no code implementations NeurIPS 2014 Guy Bresler, David Gamarnik, Devavrat Shah

In this paper we investigate the computational complexity of learning the graph structure underlying a discrete undirected graphical model from i.i.d. samples.

Efficiently learning Ising models on arbitrary graphs

no code implementations 22 Nov 2014 Guy Bresler

In this paper we show that a simple greedy procedure allows one to learn the structure of an Ising model on an arbitrary bounded-degree graph in time on the order of $p^2$.
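As a toy illustration of influence-based neighbor screening, here is a much simplified pairwise version (our own; the paper's greedy procedure is more subtle, and in particular handles bounded-degree graphs where pairwise statistics alone can be uninformative):

```python
import numpy as np

def pairwise_influence(samples, u, v):
    """Empirical influence of spin v on spin u:
    |P(x_u = 1 | x_v = 1) - P(x_u = 1 | x_v = -1)|."""
    xu, xv = samples[:, u], samples[:, v]
    p_plus = np.mean(xu[xv == 1] == 1)
    p_minus = np.mean(xu[xv == -1] == 1)
    return abs(p_plus - p_minus)

def screen_neighbors(samples, u, tau):
    """Keep candidate neighbors whose empirical influence on u
    exceeds the threshold tau."""
    p = samples.shape[1]
    return [v for v in range(p)
            if v != u and pairwise_influence(samples, u, v) > tau]
```

On data where spin 1 is strongly coupled to spin 0 and spin 2 is independent of both, the screening keeps only the true neighbor.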

A Latent Source Model for Online Collaborative Filtering

no code implementations NeurIPS 2014 Guy Bresler, George H. Chen, Devavrat Shah

Despite the prevalence of collaborative filtering in recommendation systems, there has been little theoretical development on why and how well it works, especially in the "online" setting, where items are recommended to users over time.

Tasks: Collaborative Filtering, Recommendation Systems

Learning graphical models from the Glauber dynamics

no code implementations 28 Oct 2014 Guy Bresler, David Gamarnik, Devavrat Shah

In this paper we consider the problem of learning undirected graphical models from data generated according to the Glauber dynamics.

Hardness of parameter estimation in graphical models

no code implementations NeurIPS 2014 Guy Bresler, David Gamarnik, Devavrat Shah

Our proof gives a polynomial time reduction from approximating the partition function of the hard-core model, known to be hard, to learning approximate parameters.
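The source of hardness can be made concrete: the hard-core partition function of a graph $G$ at fugacity $\lambda$ is $Z = \sum_{I \text{ independent set}} \lambda^{|I|}$. A brute-force sketch (exponential time, small graphs only; approximating $Z$ efficiently is the known-hard problem the reduction starts from):

```python
from itertools import combinations

def hardcore_partition(n, edges, lam):
    """Brute-force hard-core partition function on vertices 0..n-1:
    Z = sum over independent sets I of lam^{|I|}.
    edges: iterable of vertex pairs. Exponential in n by design."""
    edge_set = {frozenset(e) for e in edges}
    z = 0.0
    for size in range(n + 1):
        for subset in combinations(range(n), size):
            # Keep subsets containing no edge, i.e. independent sets.
            if all(frozenset((a, b)) not in edge_set
                   for a, b in combinations(subset, 2)):
                z += lam ** size
    return z
```

For example, the triangle has independent sets $\emptyset, \{0\}, \{1\}, \{2\}$, giving $Z = 1 + 3\lambda$, while the path on three vertices additionally has $\{0, 2\}$, giving $Z = 1 + 3\lambda + \lambda^2$.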
