no code implementations • 2 Apr 2024 • Adam R. Klivans, Konstantinos Stavropoulos, Arsen Vasilyan
Recent work of Klivans, Stavropoulos, and Vasilyan initiated the study of testable learning with distribution shift (TDS learning), where a learner is given labeled samples from a training distribution $\mathcal{D}$ and unlabeled samples from a test distribution $\mathcal{D}'$, and must output a classifier with low error on $\mathcal{D}'$ whenever the training samples pass a corresponding test.
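As a rough illustration of the TDS interface, the sketch below shows the accept/reject contract; the names `learner` and `test` (and the mean-matching tester) are hypothetical placeholders, not the algorithms from the paper.

```python
import numpy as np

def tds_learn(train_X, train_y, test_X, learner, test):
    """Skeleton of the TDS contract. The tester sees the training
    covariates and the unlabeled test covariates; on REJECT no guarantee
    is owed, while on ACCEPT the returned classifier must have low error
    on the test distribution D'. Completeness requires the test to accept
    when the two marginals are equal. `learner` and `test` are
    hypothetical placeholders, not the paper's algorithms."""
    if not test(train_X, test_X):
        return None                      # REJECT
    return learner(train_X, train_y)     # ACCEPT

# Illustrative tester: accept iff empirical means of the covariates match.
def mean_test(X_tr, X_te, tol=0.1):
    return np.linalg.norm(X_tr.mean(axis=0) - X_te.mean(axis=0)) <= tol
```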
no code implementations • 25 Nov 2023 • Adam R. Klivans, Konstantinos Stavropoulos, Arsen Vasilyan
In this model of TDS learning, a learner outputs a classifier with low test error whenever samples from $\mathcal{D}$ and $\mathcal{D}'$ pass an associated test; moreover, the test must accept if the marginal of $\mathcal{D}$ equals the marginal of $\mathcal{D}'$.
1 code implementation • NeurIPS 2023 • Jeffrey Ouyang-Zhang, Daniel J. Diaz, Adam R. Klivans, Philipp Krähenbühl
We build Mutate Everything on top of ESM2 and AlphaFold, neither of which was trained to predict thermodynamic stability.
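For readers who want to experiment, the snippet below shows the standard way to extract per-residue embeddings from ESM2 via the public fair-esm package; the small regression head mapping embeddings to stability scores is a hypothetical stand-in, not the Mutate Everything architecture.

```python
import torch
import esm  # pip install fair-esm

# Load a pretrained ESM2 model and its tokenizer (standard fair-esm usage).
model, alphabet = esm.pretrained.esm2_t33_650M_UR50D()
batch_converter = alphabet.get_batch_converter()
model.eval()

data = [("wildtype", "MKTVRQERLKSIVRILERSKEPVSGAQLAEELSVSRQVIVQDIAYLRSLGYNIVAT")]
_, _, tokens = batch_converter(data)
with torch.no_grad():
    out = model(tokens, repr_layers=[33])
residue_emb = out["representations"][33]   # shape (1, seq_len + 2, 1280)

# Hypothetical stability head: per-residue scores for each of the 20
# possible substitutions. NOT the Mutate Everything architecture, just a
# placeholder showing where a learned head would attach.
head = torch.nn.Sequential(
    torch.nn.Linear(1280, 128), torch.nn.ReLU(), torch.nn.Linear(128, 20)
)
ddg_scores = head(residue_emb)             # shape (1, seq_len + 2, 20)
```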
no code implementations • 28 Feb 2023 • Aravind Gollakota, Adam R. Klivans, Konstantinos Stavropoulos, Arsen Vasilyan
Prior work on testable learning ignores the labels in the training set and checks that the empirical moments of the covariates are close to the moments of the base distribution.
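A minimal sketch of such a moment test, assuming the base distribution is the standard Gaussian $\mathcal{N}(0, I)$; the degree and tolerance below are illustrative, not the parameters used in the testable-learning literature.

```python
import numpy as np

def gaussian_moment_test(X, degree=2, tol=0.2):
    """Accept iff the empirical low-degree moments of the covariates are
    close to those of the standard Gaussian N(0, I). The degree and
    tolerance are illustrative choices."""
    n, d = X.shape
    if np.linalg.norm(X.mean(axis=0)) > tol:           # E[x] = 0
        return False
    if np.linalg.norm(X.T @ X / n - np.eye(d)) > tol:  # E[x x^T] = I
        return False
    if degree >= 4:
        # Diagonal fourth moments of a standard Gaussian equal 3.
        if np.max(np.abs((X**4).mean(axis=0) - 3.0)) > tol:
            return False
    return True

# Samples actually drawn from N(0, I) should pass for moderate n.
X = np.random.default_rng(0).standard_normal((20000, 5))
assert gaussian_moment_test(X, degree=4)
```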
no code implementations • 23 Nov 2022 • Aravind Gollakota, Adam R. Klivans, Pravesh K. Kothari
A remarkable recent paper by Rubinfeld and Vasilyan (2022) initiated the study of \emph{testable learning}, where the goal is to replace hard-to-verify distributional assumptions (such as Gaussianity) with efficiently testable ones and to require that the learner succeed whenever the unknown distribution passes the corresponding test.
no code implementations • 10 Feb 2022 • Sitan Chen, Aravind Gollakota, Adam R. Klivans, Raghu Meka
We give superpolynomial statistical query (SQ) lower bounds for learning two-hidden-layer ReLU networks with respect to Gaussian inputs in the standard (noise-free) model.
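For concreteness, the concept class in question can be written as $x \mapsto w_2 \cdot \mathrm{relu}(W_1\,\mathrm{relu}(W_0 x))$ on inputs $x \sim \mathcal{N}(0, I_d)$. The sketch below simply evaluates such a network on a Gaussian draw; the weights are random for illustration, whereas the hard instances behind the lower bound are constructed deliberately.

```python
import numpy as np

def two_hidden_layer_relu(x, W0, W1, w2):
    """Evaluate x -> w2 . relu(W1 relu(W0 x)), the function class the SQ
    lower bound concerns, on a single input."""
    relu = lambda z: np.maximum(z, 0.0)
    return w2 @ relu(W1 @ relu(W0 @ x))

rng = np.random.default_rng(1)
d, k = 16, 8                       # input dim, hidden widths (illustrative)
W0 = rng.standard_normal((k, d))
W1 = rng.standard_normal((k, k))
w2 = rng.standard_normal(k)
x = rng.standard_normal(d)         # Gaussian input x ~ N(0, I_d)
print(two_hidden_layer_relu(x, W0, W1, w2))
```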
no code implementations • 28 Sep 2020 • Sitan Chen, Adam R. Klivans, Raghu Meka
These results provably cannot be obtained using gradient-based methods and give the first example of a class of efficiently learnable neural networks that gradient descent will fail to learn.
no code implementations • 26 May 2020 • Ilias Diakonikolas, Surbhi Goel, Sushrut Karmalkar, Adam R. Klivans, Mahdi Soltanolkotabi
We consider the fundamental problem of ReLU regression, where the goal is to output the best-fitting ReLU with respect to square loss given access to draws from some unknown distribution.
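As a baseline for this objective (not the approximation scheme from the paper), plain subgradient descent on the square loss looks like the following sketch.

```python
import numpy as np

def relu_regression_gd(X, y, lr=0.1, steps=500, seed=0):
    """Plain subgradient descent on L(w) = mean((relu(Xw) - y)^2).
    A baseline for the objective only; not the approximation scheme
    analyzed in the paper."""
    n, d = X.shape
    w = np.random.default_rng(seed).standard_normal(d) / np.sqrt(d)
    for _ in range(steps):
        pre = X @ w
        resid = np.maximum(pre, 0.0) - y
        # Subgradient of the square loss (constant factor absorbed in lr).
        w -= lr * ((resid * (pre > 0.0)) @ X) / n
    return w

# Usage: a realizable instance with Gaussian covariates; this simple
# baseline typically recovers the planted weight vector approximately.
rng = np.random.default_rng(1)
X = rng.standard_normal((5000, 10))
w_star = rng.standard_normal(10)
y = np.maximum(X @ w_star, 0.0)
w_hat = relu_regression_gd(X, y)
```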
no code implementations • NeurIPS 2019 • Sushrut Karmalkar, Adam R. Klivans, Pravesh K. Kothari
To complement our result, we prove that the anti-concentration assumption on the inliers is information-theoretically necessary.
no code implementations • 13 Feb 2019 • Surbhi Goel, Daniel M. Kane, Adam R. Klivans
We give the first efficient algorithm for learning the structure of an Ising model that tolerates independent failures; that is, each entry of the observed sample is missing with some unknown probability $p$. Our algorithm matches the essentially optimal runtime and sample complexity bounds of recent work on learning Ising models due to Klivans and Meka (2017).
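One simple consequence of the failures being independent of the values: pairwise moments remain estimable without knowing $p$, by averaging over jointly observed entries. The sketch below computes this warm-up statistic; it is illustrative only, not the structure-learning algorithm of the paper.

```python
import numpy as np

def pairwise_moments_with_failures(X_obs, mask):
    """Warm-up statistic for +/-1 Ising samples where each entry is
    independently missing with unknown probability p (mask[i, j] = True
    iff entry j of sample i was observed). Because failures are
    independent of the values, averaging x_i * x_j over the jointly
    observed rows is an unbiased estimate of E[x_i x_j] -- no knowledge
    of p required."""
    Z = np.where(mask, X_obs, 0.0)    # zero out missing entries
    M = mask.astype(float)
    pair_obs = M.T @ M                # joint-observation counts per pair
    raw = Z.T @ Z                     # sums over jointly observed rows
    return np.divide(raw, pair_obs,
                     out=np.zeros_like(raw), where=pair_obs > 0)
```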