Search Results for author: Colin Sandon

Found 10 papers, 0 papers with code

On the Power of Differentiable Learning versus PAC and SQ Learning

no code implementations NeurIPS 2021 Emmanuel Abbe, Pritish Kamath, Eran Malach, Colin Sandon, Nathan Srebro

With fine enough precision relative to minibatch size, namely when $b \rho$ is small enough, SGD can go beyond SQ learning and simulate any sample-based learning algorithm and thus its learning power is equivalent to that of PAC learning; this extends prior work that achieved this result for $b=1$.

PAC learning
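
As an illustrative sketch only (not the paper's construction), the snippet below sets up minibatch SGD with batch size $b$ in which each gradient coordinate is rounded to a hypothetical precision $\rho$; the regime discussed in the abstract is the one where $b\rho$ is small.

```python
import numpy as np

# Toy illustration (not the paper's setup): minibatch SGD on a linear model
# where every gradient coordinate is quantized to precision rho. The abstract's
# regime of interest is when b * rho is small.

rng = np.random.default_rng(0)
n, d, b, rho, lr = 1000, 20, 8, 1e-4, 0.1

X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = np.sign(X @ w_true)                              # labels from a planted linear rule

w = np.zeros(d)
for step in range(500):
    idx = rng.choice(n, size=b, replace=False)       # minibatch of size b
    margins = y[idx] * (X[idx] @ w)
    # (sub)gradient of the hinge loss, averaged over the minibatch
    grad = -(X[idx] * (y[idx] * (margins < 1.0))[:, None]).mean(axis=0)
    grad = rho * np.round(grad / rho)                # round to precision rho
    w -= lr * grad

print("train accuracy:", np.mean(np.sign(X @ w) == y))
```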

Spoofing Generalization: When Can't You Trust Proprietary Models?

no code implementations15 Jun 2021 Ankur Moitra, Elchanan Mossel, Colin Sandon

In this work, we study the computational complexity of determining whether a machine learning model that perfectly fits the training data will generalize to unseen data.

Learning to Sample from Censored Markov Random Fields

no code implementations15 Jan 2021 Ankur Moitra, Elchanan Mossel, Colin Sandon

These are Markov Random Fields where some of the nodes are censored (not observed).
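
For intuition only, here is a toy sketch (not the paper's sampler) of an Ising-style Markov Random Field on a cycle in which a hypothetical set of nodes is censored: Gibbs sampling runs over all nodes, but only the uncensored coordinates are revealed.

```python
import numpy as np

# Toy illustration (not the paper's algorithm): an Ising-style MRF on a cycle
# of n nodes with coupling beta.  Nodes in `censored` are hidden; an observer
# only ever sees the remaining coordinates.

rng = np.random.default_rng(1)
n, beta, sweeps = 12, 0.6, 2000
censored = {0, 3, 7}                      # hypothetical set of unobserved nodes

x = rng.choice([-1, 1], size=n)
for _ in range(sweeps):                   # Gibbs sampling over all nodes
    for i in range(n):
        s = x[(i - 1) % n] + x[(i + 1) % n]
        p_plus = 1.0 / (1.0 + np.exp(-2.0 * beta * s))   # P(x_i = +1 | neighbors)
        x[i] = 1 if rng.random() < p_plus else -1

observed = {i: int(x[i]) for i in range(n) if i not in censored}
print(observed)                           # censored coordinates are dropped
```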

On the universality of deep learning

no code implementations NeurIPS 2020 Emmanuel Abbe, Colin Sandon

This paper shows that deep learning, i.e., neural networks trained by SGD, can learn in polytime any function class that can be learned in polytime by some algorithm, including parities.
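
Purely as illustration (the paper's analysis concerns specific poly-size architectures and noisy SGD, not shown here), the toy sketch below labels random Boolean inputs by the parity of their first few bits and runs minibatch SGD on a small two-layer network.

```python
import numpy as np

# Illustrative only: labels are the parity of the first k coordinates of a
# random {0,1}^d input, fit with a small tanh network trained by minibatch SGD.

rng = np.random.default_rng(2)
n, d, k, hidden, lr, b = 4000, 10, 3, 64, 0.5, 32

X = rng.integers(0, 2, size=(n, d)).astype(float)
y = X[:, :k].sum(axis=1) % 2                              # parity of first k bits

W1 = rng.standard_normal((d, hidden)) / np.sqrt(d)
b1 = np.zeros(hidden)
W2 = rng.standard_normal(hidden) / np.sqrt(hidden)
b2 = 0.0

for step in range(3000):
    idx = rng.choice(n, size=b, replace=False)
    h = np.tanh(X[idx] @ W1 + b1)                         # hidden layer
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))              # sigmoid output
    err = p - y[idx]                                      # d(logistic loss)/d(logit)
    gW2 = h.T @ err / b
    gb2 = err.mean()
    gh = np.outer(err, W2) * (1 - h ** 2)                 # backprop through tanh
    gW1 = X[idx].T @ gh / b
    gb1 = gh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

pred = (1.0 / (1.0 + np.exp(-(np.tanh(X @ W1 + b1) @ W2 + b2))) > 0.5).astype(float)
print("train accuracy:", np.mean(pred == y))
```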

Poly-time universality and limitations of deep learning

no code implementations7 Jan 2020 Emmanuel Abbe, Colin Sandon

Therefore deep learning provides a universal learning paradigm: it was known that the approximation and estimation errors could be controlled with poly-size neural nets, using ERM (which is NP-hard); this new result shows that the optimization error can also be controlled with SGD in poly-time.
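
For reference, the three error terms mentioned in the abstract follow the standard excess-risk decomposition (textbook form, with hypothetical notation, not copied from the paper):

```latex
% F = all predictors, H = the poly-size network class, f_H^* = ERM over H,
% \hat f = the predictor returned by the training procedure (e.g. SGD).
\begin{align*}
R(\hat f) - \inf_{f \in F} R(f)
  &= \underbrace{\inf_{f \in H} R(f) - \inf_{f \in F} R(f)}_{\text{approximation error}}
   + \underbrace{R(f_H^*) - \inf_{f \in H} R(f)}_{\text{estimation error}}
   + \underbrace{R(\hat f) - R(f_H^*)}_{\text{optimization error}}
\end{align*}
```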

Provable limitations of deep learning

no code implementations16 Dec 2018 Emmanuel Abbe, Colin Sandon

As the success of deep learning extends to ever more domains, one would also like to envision the potential limits of deep learning.

Community Detection

Achieving the KS threshold in the general stochastic block model with linearized acyclic belief propagation

no code implementations NeurIPS 2016 Emmanuel Abbe, Colin Sandon

The stochastic block model (SBM) has long been studied in machine learning and network science as a canonical model for clustering and community detection.

Clustering, Community Detection +1
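
A minimal sketch, under assumed notation (k balanced communities, intra-community edge probability a/n, inter-community probability b/n), of how a symmetric SBM graph is generated:

```python
import numpy as np

# Hypothetical sketch of a symmetric stochastic block model (SBM):
# k roughly balanced communities on n vertices, edge probability a/n inside
# a community and b/n across communities.

rng = np.random.default_rng(3)
n, k, a, b = 300, 3, 12.0, 2.0

labels = rng.integers(0, k, size=n)                  # community assignments
same = labels[:, None] == labels[None, :]
prob = np.where(same, a / n, b / n)

upper = np.triu(rng.random((n, n)) < prob, 1)        # sample the upper triangle
adj = upper | upper.T                                # symmetric adjacency matrix

print("vertices:", n, "edges:", int(adj.sum() // 2))
```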

Detection in the stochastic block model with multiple clusters: proof of the achievability conjectures, acyclic BP, and the information-computation gap

no code implementations30 Dec 2015 Emmanuel Abbe, Colin Sandon

In a paper that initiated the modern study of the stochastic block model, Decelle et al., backed by Mossel et al., made the following conjecture: Denote by $k$ the number of balanced communities, $a/n$ the probability of connecting inside communities and $b/n$ across, and set $\mathrm{SNR}=(a-b)^2/(k(a+(k-1)b))$; for any $k \geq 2$, it is possible to detect communities efficiently whenever $\mathrm{SNR}>1$ (the KS threshold), whereas for $k\geq 4$, it is possible to detect communities information-theoretically for some $\mathrm{SNR}<1$.

Clustering, Stochastic Block Model
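
As a worked example of the SNR quantity defined above, the snippet below evaluates it for a few arbitrary parameter choices and reports which side of the KS threshold ($\mathrm{SNR}=1$) each falls on.

```python
# Worked example of the SNR quantity from the abstract above; the parameter
# values are arbitrary and only chosen for illustration.

def ks_snr(a: float, b: float, k: int) -> float:
    """SNR = (a - b)^2 / (k * (a + (k - 1) * b))."""
    return (a - b) ** 2 / (k * (a + (k - 1) * b))

for a, b, k in [(12.0, 2.0, 2), (12.0, 2.0, 4), (5.0, 4.0, 3)]:
    snr = ks_snr(a, b, k)
    side = "above" if snr > 1 else "below"
    print(f"a={a}, b={b}, k={k}: SNR={snr:.3f} ({side} the KS threshold)")
```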

Recovering communities in the general stochastic block model without knowing the parameters

no code implementations NeurIPS 2015 Emmanuel Abbe, Colin Sandon

Most recent developments on the stochastic block model (SBM) rely on the knowledge of the model parameters, or at least on the number of communities.

Stochastic Block Model
