Search Results for author: Rares-Darius Buhai

Found 5 papers, 1 paper with code

Computational-Statistical Gaps for Improper Learning in Sparse Linear Regression

no code implementations • 21 Feb 2024 • Rares-Darius Buhai, Jingqiu Ding, Stefan Tiegel

In particular, we show that an improper learning algorithm for sparse linear regression can be used to solve sparse PCA problems (with a negative spike) in their Wishart form, in regimes in which efficient algorithms are widely believed to require at least $\Omega(k^2)$ samples.

Regression
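As a rough illustration of the sparse PCA model referenced above, here is a minimal sketch, assuming toy values for the sample size, dimension, sparsity, and spike strength (none of these come from the paper): it draws samples from the Wishart (spiked covariance) form with a negative spike along a $k$-sparse direction.

```python
# Minimal sketch of a negative-spike Wishart sparse PCA instance.
# All parameter values are illustrative assumptions, not the paper's.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 500, 200, 10       # samples, ambient dimension, sparsity (assumed)
beta = 0.5                   # spike strength; 0 < beta < 1 keeps the covariance PSD

# k-sparse unit spike direction v
v = np.zeros(d)
support = rng.choice(d, size=k, replace=False)
v[support] = rng.choice([-1.0, 1.0], size=k) / np.sqrt(k)

# Negative spike: samples x_i ~ N(0, I - beta * v v^T), i.e. variance is
# *reduced* along v; the detection problem is to distinguish this from N(0, I).
cov = np.eye(d) - beta * np.outer(v, v)
X = rng.multivariate_normal(np.zeros(d), cov, size=n)

empirical_cov = X.T @ X / n
print("smallest empirical eigenvalue:", np.linalg.eigvalsh(empirical_cov)[0])
```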

Beyond Parallel Pancakes: Quasi-Polynomial Time Guarantees for Non-Spherical Gaussian Mixtures

no code implementations • 10 Dec 2021 • Rares-Darius Buhai, David Steurer

For the special case of collinear means, our algorithm outputs a $k$-clustering of the input sample that is approximately consistent with the components of the mixture.

Clustering
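To make the collinear-means special case concrete, here is a hedged sketch with toy parameters (scikit-learn's GaussianMixture is a stand-in clusterer, not the paper's algorithm): it samples from a non-spherical mixture whose means lie on a single line and scores how consistent a $k$-clustering is with the true components.

```python
# Illustrative only: non-spherical Gaussian mixture with collinear means,
# clustered by an off-the-shelf method; NOT the paper's algorithm.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(1)

k, n_per = 3, 300
direction = np.array([1.0, 0.0])                 # all means lie on this line
means = [t * direction for t in (-6.0, 0.0, 6.0)]
cov = np.array([[1.0, 0.0], [0.0, 9.0]])         # shared non-spherical covariance

X = np.vstack([rng.multivariate_normal(m, cov, size=n_per) for m in means])
labels = np.repeat(np.arange(k), n_per)

pred = GaussianMixture(n_components=k, random_state=0).fit_predict(X)
print("agreement with true components (ARI):", adjusted_rand_score(labels, pred))
```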

Learning Restricted Boltzmann Machines with Sparse Latent Variables

no code implementations • NeurIPS 2020 • Guy Bresler, Rares-Darius Buhai

In this paper, we give an algorithm for learning general RBMs with time complexity $\tilde{O}(n^{2^s+1})$, where $s$ is the maximum number of latent variables connected to the MRF neighborhood of an observed variable.
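Since the runtime $\tilde{O}(n^{2^s+1})$ is driven entirely by the structural parameter $s$, the following sketch computes it for a toy RBM graph, under one assumed reading of the definition (not taken from the paper's code): the MRF neighborhood of an observed variable is the set of observed variables sharing a latent neighbor with it, and $s$ is the maximum number of latent variables connected to such a neighborhood.

```python
# Sketch of the structural parameter s for a toy RBM bipartite graph,
# under an assumed interpretation of the definition in the abstract.
from collections import defaultdict

# Bipartite RBM graph: latent variable -> set of observed neighbors (toy example).
latent_to_obs = {
    "h1": {0, 1, 2},
    "h2": {2, 3},
    "h3": {3, 4},
}

obs_to_latent = defaultdict(set)
for h, obs in latent_to_obs.items():
    for v in obs:
        obs_to_latent[v].add(h)

def mrf_neighborhood(v):
    # Observed variables sharing at least one latent neighbor with v (incl. v).
    return {u for h in obs_to_latent[v] for u in latent_to_obs[h]}

def s_parameter():
    best = 0
    for v in obs_to_latent:
        nbhd = mrf_neighborhood(v)
        latents = {h for u in nbhd for h in obs_to_latent[u]}
        best = max(best, len(latents))
    return best

print("s =", s_parameter())  # drives the O~(n^(2^s + 1)) runtime in the abstract
```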

Benefits of Overparameterization in Single-Layer Latent Variable Generative Models

no code implementations • 25 Sep 2019 • Rares-Darius Buhai, Andrej Risteski, Yoni Halpern, David Sontag

One of the most surprising and exciting discoveries in supervised learning was the benefit of overparameterization (i.e., training a very large model) in improving the optimization landscape of a problem, with minimal effect on statistical performance (i.e., generalization).

Variational Inference

Empirical Study of the Benefits of Overparameterization in Learning Latent Variable Models

1 code implementation • ICML 2020 • Rares-Darius Buhai, Yoni Halpern, Yoon Kim, Andrej Risteski, David Sontag

One of the most surprising and exciting discoveries in supervised learning was the benefit of overparameterization (i.e., training a very large model) in improving the optimization landscape of a problem, with minimal effect on statistical performance (i.e., generalization).

Variational Inference
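The overparameterization setup shared by the two entries above can be illustrated with a hedged stand-in experiment (a Gaussian mixture rather than the papers' latent variable models, with assumed sizes): fit increasingly large models to data from a small ground truth and check that held-out performance barely degrades.

```python
# Stand-in illustration of overparameterization; not the papers' experiments.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)

# Ground truth: a 3-component mixture in 2D (assumed toy setup).
means = np.array([[-5.0, 0.0], [0.0, 5.0], [5.0, 0.0]])
X = np.vstack([rng.normal(m, 1.0, size=(400, 2)) for m in means])
X_test = np.vstack([rng.normal(m, 1.0, size=(200, 2)) for m in means])

for k in (3, 10, 30):  # exact size vs. increasingly overparameterized models
    gm = GaussianMixture(n_components=k, random_state=0).fit(X)
    print(f"k={k:2d}  train LL={gm.score(X):.3f}  test LL={gm.score(X_test):.3f}")
```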
