Search Results for author: Adam R. Klivans

Found 10 papers, 1 paper with code

Learning Intersections of Halfspaces with Distribution Shift: Improved Algorithms and SQ Lower Bounds

no code implementations • 2 Apr 2024 • Adam R. Klivans, Konstantinos Stavropoulos, Arsen Vasilyan

Recent work of Klivans, Stavropoulos, and Vasilyan initiated the study of testable learning with distribution shift (TDS learning), where a learner is given labeled samples from a training distribution $\mathcal{D}$ and unlabeled samples from a test distribution $\mathcal{D}'$, and the goal is to output a classifier with low error on $\mathcal{D}'$ whenever the training samples pass a corresponding test.

Dimensionality Reduction • Domain Adaptation
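
Since this entry only states the TDS model informally, here is a minimal sketch of the protocol's shape, assuming hypothetical placeholders test_fn and learn_fn (this is not the paper's algorithm):

    def tds_learn(labeled_train, unlabeled_test, test_fn, learn_fn):
        # labeled_train:  (x, y) pairs drawn from the training distribution D
        # unlabeled_test: unlabeled points drawn from the test distribution D'
        if not test_fn(labeled_train, unlabeled_test):
            return None  # reject: no guarantee is required on this run
        # On acceptance, the returned classifier must have low error on D'.
        return learn_fn(labeled_train, unlabeled_test)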

Testable Learning with Distribution Shift

no code implementations • 25 Nov 2023 • Adam R. Klivans, Konstantinos Stavropoulos, Arsen Vasilyan

In this model, a learner outputs a classifier with low test error whenever samples from $D$ and $D'$ pass an associated test; moreover, the test must accept if the marginal of $D$ equals the marginal of $D'$.

Active Learning
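
The completeness condition (the test must accept when the marginals of $D$ and $D'$ agree) can be illustrated with a crude two-sample comparison; a minimal sketch, assuming a first- and second-moment check that is far weaker than the tests analyzed in the paper:

    import numpy as np

    def marginals_look_equal(xs_train, xs_test, tol=0.1):
        # Accept when the empirical means and covariances of the two unlabeled
        # samples are close; equal marginals pass for large enough samples.
        mean_gap = np.linalg.norm(xs_train.mean(axis=0) - xs_test.mean(axis=0))
        cov_gap = np.linalg.norm(np.cov(xs_train.T) - np.cov(xs_test.T))
        return mean_gap <= tol and cov_gap <= tol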

Predicting a Protein's Stability under a Million Mutations

1 code implementation • NeurIPS 2023 • Jeffrey Ouyang-Zhang, Daniel J. Diaz, Adam R. Klivans, Philipp Krähenbühl

We build Mutate Everything on top of ESM2 and AlphaFold, neither of which was trained to predict thermodynamic stability.
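
The entry does not show how the pretrained backbones are used, so the following is only a hedged illustration: it extracts per-residue embeddings with the public fair-esm API for ESM2 and attaches a purely hypothetical linear stability head (Mutate Everything's actual decoder, and its use of AlphaFold, are not reproduced here):

    import torch
    import esm  # pip install fair-esm

    # Load a pretrained ESM2 model and its tokenizer.
    model, alphabet = esm.pretrained.esm2_t33_650M_UR50D()
    batch_converter = alphabet.get_batch_converter()
    model.eval()

    data = [("wild_type", "MKTVRQERLKSIVRILERSKEPVSGAQLAEELSVSRQVIVQDIAYLRSLGYNIVATPRGYVLAGG")]
    _, _, tokens = batch_converter(data)
    with torch.no_grad():
        out = model(tokens, repr_layers=[33])
    per_residue = out["representations"][33]  # shape (1, seq_len + 2, 1280)

    # Hypothetical head: one stability score per amino-acid substitution.
    stability_head = torch.nn.Linear(1280, 20)
    ddg_scores = stability_head(per_residue)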

An Efficient Tester-Learner for Halfspaces

no code implementations • 28 Feb 2023 • Aravind Gollakota, Adam R. Klivans, Konstantinos Stavropoulos, Arsen Vasilyan

Prior work on testable learning ignores the labels in the training set and checks that the empirical moments of the covariates are close to the moments of the base distribution.
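
To make the moment check described above concrete, a minimal sketch, assuming the base distribution is the standard Gaussian (so the target moments are mean zero and identity covariance); the paper's tester-learner, by contrast, also uses the labels:

    import numpy as np

    def moments_match_standard_gaussian(xs, tol=0.1):
        # Accept when the empirical mean and covariance of the covariates
        # are close to those of N(0, I), the base distribution in this sketch.
        d = xs.shape[1]
        mean_ok = np.linalg.norm(xs.mean(axis=0)) <= tol
        cov_ok = np.linalg.norm(np.cov(xs.T) - np.eye(d)) <= tol
        return mean_ok and cov_ok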

A Moment-Matching Approach to Testable Learning and a New Characterization of Rademacher Complexity

no code implementations • 23 Nov 2022 • Aravind Gollakota, Adam R. Klivans, Pravesh K. Kothari

A remarkable recent paper by Rubinfeld and Vasilyan (2022) initiated the study of \emph{testable learning}, where the goal is to replace hard-to-verify distributional assumptions (such as Gaussianity) with efficiently testable ones and to require that the learner succeed whenever the unknown distribution passes the corresponding test.

Learning Theory
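
For readers new to the model, one standard way to state the testable-learning guarantee (a paraphrase, not a quotation from the paper): the tester must accept with high probability whenever the unknown distribution $D$ equals the target distribution $D^*$ (e.g., a Gaussian), and whenever the tester accepts, the learner must output a hypothesis $h$ with $\Pr_{(x,y)\sim D}[h(x)\neq y] \le \mathrm{opt} + \varepsilon$.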

Hardness of Noise-Free Learning for Two-Hidden-Layer Neural Networks

no code implementations • 10 Feb 2022 • Sitan Chen, Aravind Gollakota, Adam R. Klivans, Raghu Meka

We give superpolynomial statistical query (SQ) lower bounds for learning two-hidden-layer ReLU networks with respect to Gaussian inputs in the standard (noise-free) model.

PAC learning
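
As background (this is the standard definition, not specific to the paper): in the SQ model the learner never sees individual examples; it submits a bounded query $\phi$ and receives some value $v$ with $|v - \mathbb{E}_{(x,y)\sim D}[\phi(x,y)]| \le \tau$, where $\tau$ is the tolerance. A superpolynomial SQ lower bound rules out any such learner that uses polynomially many queries at inverse-polynomial tolerance.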

Learning Deep ReLU Networks Is Fixed-Parameter Tractable

no code implementations • 28 Sep 2020 • Sitan Chen, Adam R. Klivans, Raghu Meka

These results provably cannot be obtained using gradient-based methods and give the first example of a class of efficiently learnable neural networks that gradient descent will fail to learn.
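
As a hedged gloss on the title's term: fixed-parameter tractable here means, in the usual sense, a running time of the form $f(k, 1/\varepsilon)\cdot\mathrm{poly}(d)$, where $d$ is the input dimension and the parameter $k$ collects the network-size quantities; any super-polynomial blow-up is confined to $f$ and never multiplies the dependence on $d$.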

Approximation Schemes for ReLU Regression

no code implementations • 26 May 2020 • Ilias Diakonikolas, Surbhi Goel, Sushrut Karmalkar, Adam R. Klivans, Mahdi Soltanolkotabi

We consider the fundamental problem of ReLU regression, where the goal is to output the best fitting ReLU with respect to square loss given access to draws from some unknown distribution.

regression
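
Written out, the ReLU regression objective referenced above is $\min_{w} \mathbb{E}_{(x,y)}\big[(\max(0,\langle w, x\rangle) - y)^2\big]$, where the expectation is over draws from the unknown distribution.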

List-Decodable Linear Regression

no code implementations • NeurIPS 2019 • Sushrut Karmalkar, Adam R. Klivans, Pravesh K. Kothari

To complement our result, we prove that the anti-concentration assumption on the inliers is information-theoretically necessary.

regression
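
One standard form of the anti-concentration assumption mentioned here (stated as background, not quoted from the paper): for every unit vector $v$ and every $\delta > 0$, the inlier distribution satisfies $\Pr_x[\,|\langle v, x\rangle| \le \delta\,] \le C\delta$ for some constant $C$.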

Learning Ising Models with Independent Failures

no code implementations • 13 Feb 2019 • Surbhi Goel, Daniel M. Kane, Adam R. Klivans

We give the first efficient algorithm for learning the structure of an Ising model that tolerates independent failures; that is, each entry of the observed sample is missing with some unknown probability $p$. Our algorithm matches the essentially optimal runtime and sample complexity bounds of recent work for learning Ising models due to Klivans and Meka (2017).
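
The failure model itself is easy to simulate; a minimal sketch, assuming $\pm 1$ spin samples and using np.nan to mark missing entries (the structure-recovery algorithm is not sketched here):

    import numpy as np

    def apply_independent_failures(samples, p, seed=None):
        # Each entry of each observed sample goes missing independently
        # with probability p, matching the failure model in the abstract.
        rng = np.random.default_rng(seed)
        corrupted = samples.astype(float)
        corrupted[rng.random(samples.shape) < p] = np.nan
        return corrupted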
