no code implementations • 21 Feb 2024 • Rares-Darius Buhai, Jingqiu Ding, Stefan Tiegel
In particular, we show that an improper learning algorithm for sparse linear regression can be used to solve sparse PCA problems (with a negative spike) in their Wishart form, in regimes in which efficient algorithms are widely believed to require at least $\Omega(k^2)$ samples.
no code implementations • 10 Dec 2021 • Rares-Darius Buhai, David Steurer
For the special case of collinear means, our algorithm outputs a $k$-clustering of the input sample that is approximately consistent with the components of the mixture.
no code implementations • NeurIPS 2020 • Guy Bresler, Rares-Darius Buhai
In this paper, we give an algorithm for learning general RBMs with time complexity $\tilde{O}(n^{2^s+1})$, where $s$ is the maximum number of latent variables connected to the MRF neighborhood of an observed variable.
no code implementations • 25 Sep 2019 • Rares-Darius Buhai, Andrej Risteski, Yoni Halpern, David Sontag
One of the most surprising and exciting discoveries in supervised learning was the benefit of overparameterization (i.e., training a very large model) for improving the optimization landscape of a problem, with minimal effect on statistical performance (i.e., generalization).
1 code implementation • ICML 2020 • Rares-Darius Buhai, Yoni Halpern, Yoon Kim, Andrej Risteski, David Sontag
One of the most surprising and exciting discoveries in supervised learning was the benefit of overparameterization (i.e., training a very large model) for improving the optimization landscape of a problem, with minimal effect on statistical performance (i.e., generalization).