Search Results for author: Aravind Gollakota

Found 8 papers, 1 paper with code

Ambient Diffusion: Learning Clean Distributions from Corrupted Data

1 code implementation • NeurIPS 2023 • Giannis Daras, Kulin Shah, Yuval Dagan, Aravind Gollakota, Alexandros G. Dimakis, Adam Klivans

We present the first diffusion-based framework that can learn an unknown distribution using only highly-corrupted samples.
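
As a rough illustration of how a model can be trained from corrupted samples alone, the sketch below corrupts mask-corrupted data even *further* and supervises reconstruction only on the originally observed coordinates. This is a minimal PyTorch toy assuming masking corruption; the model, dimensions, and loss are illustrative stand-ins, not the paper's actual diffusion objective.

```python
# Minimal sketch of the "further corruption" training idea, assuming
# mask-based corruption. Illustrative only, not the paper's method.
import torch

torch.manual_seed(0)
d = 64                                        # flattened data dimension (hypothetical)
model = torch.nn.Sequential(
    torch.nn.Linear(2 * d, 128), torch.nn.ReLU(), torch.nn.Linear(128, d)
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):
    x = torch.randn(32, d)                    # stand-in for clean data; the
                                              # learner only ever uses x * m
    m = (torch.rand(32, d) < 0.8).float()     # corruption mask we are given
    extra = (torch.rand(32, d) < 0.9).float()
    m_hat = m * extra                         # further-corrupted mask
    pred = model(torch.cat([x * m_hat, m_hat], dim=1))
    # Supervise only on coordinates observed in the *given* corrupted
    # sample, so the model must also fill in the extra-hidden ones.
    loss = ((pred - x) ** 2 * m).sum() / m.sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
```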

An Efficient Tester-Learner for Halfspaces

no code implementations • 28 Feb 2023 • Aravind Gollakota, Adam R. Klivans, Konstantinos Stavropoulos, Arsen Vasilyan

Prior work on testable learning ignores the labels in the training set and checks that the empirical moments of the covariates are close to the moments of the base distribution.
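
To make such a label-oblivious check concrete, the sketch below compares low-degree empirical moments of the covariates against those of a standard Gaussian base distribution. The tolerance and the particular moments tested are illustrative assumptions, not the paper's calibrated test.

```python
# Minimal sketch of a label-oblivious moment test against N(0, I).
# Thresholds and moment choices are illustrative, not from the paper.
import numpy as np

def moment_test(X, tol=0.1):
    n, d = X.shape
    mean_ok = np.abs(X.mean(axis=0)).max() < tol           # E[x_i] ~ 0
    cov = X.T @ X / n
    cov_ok = np.abs(cov - np.eye(d)).max() < tol           # E[x_i x_j] ~ delta_ij
    third_ok = np.abs((X ** 3).mean(axis=0)).max() < tol   # E[x_i^3] ~ 0
    return mean_ok and cov_ok and third_ok

X = np.random.default_rng(0).standard_normal((20000, 5))
print(moment_test(X))   # True for genuinely Gaussian covariates
```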

A Moment-Matching Approach to Testable Learning and a New Characterization of Rademacher Complexity

no code implementations • 23 Nov 2022 • Aravind Gollakota, Adam R. Klivans, Pravesh K. Kothari

A remarkable recent paper by Rubinfeld and Vasilyan (2022) initiated the study of "testable learning", where the goal is to replace hard-to-verify distributional assumptions (such as Gaussianity) with efficiently testable ones, and to require that the learner succeed whenever the unknown distribution passes the corresponding test.

Learning Theory
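
In our own notation (a hedged paraphrase, not the paper's verbatim statement), the tester-learner guarantee described above can be summarized as:

```latex
% Illustrative paraphrase of the testable learning guarantee; notation is ours.
\[
\textbf{Soundness: } \Pr\big[\, A \text{ accepts and } \mathrm{err}(h) > \mathrm{opt}(\mathcal{C}) + \epsilon \,\big] \le \delta
\quad \text{for an arbitrary input distribution,}
\]
\[
\textbf{Completeness: } \Pr\big[\, A \text{ accepts} \,\big] \ge 1 - \delta
\quad \text{when the marginal is the base distribution } D.
\]
```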

Hardness of Noise-Free Learning for Two-Hidden-Layer Neural Networks

no code implementations • 10 Feb 2022 • Sitan Chen, Aravind Gollakota, Adam R. Klivans, Raghu Meka

We give superpolynomial statistical query (SQ) lower bounds for learning two-hidden-layer ReLU networks with respect to Gaussian inputs in the standard (noise-free) model.

PAC learning
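
For readers unfamiliar with the SQ model referenced above: the learner never sees raw samples, only expectations of bounded query functions answered up to a tolerance. The sketch below simulates such an oracle; the finite-sample simulation and random (rather than adversarial) perturbation are simplifying assumptions.

```python
# Minimal sketch of a statistical query (SQ) oracle. The finite-sample
# simulation and random noise are illustrative simplifications.
import numpy as np

rng = np.random.default_rng(0)

def sq_oracle(phi, data, tau=0.01):
    """Return E[phi(x, y)] perturbed within tolerance tau."""
    vals = np.array([phi(x, y) for x, y in data])
    return vals.mean() + rng.uniform(-tau, tau)

# Example query: correlation between a fixed feature sign and the label.
data = [(rng.standard_normal(3), rng.choice([-1.0, 1.0])) for _ in range(1000)]
print(sq_oracle(lambda x, y: y * np.sign(x[0]), data))
```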

On the Hardness of PAC-learning Stabilizer States with Noise

no code implementations • 9 Feb 2021 • Aravind Gollakota, Daniel Liang

Our results position the problem of learning stabilizer states as a natural quantum analogue of the classical problem of learning parities: easy in the noiseless setting, but seemingly intractable even with simple forms of noise.

Learning Theory PAC learning
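
The "easy in the noiseless setting" side of the parity analogy is simple to make concrete: each exact label gives one linear equation over GF(2), so the hidden parity falls out of Gaussian elimination. A toy sketch (sizes and names are illustrative):

```python
# Noiseless parity learning via Gaussian elimination over GF(2).
# Sizes are illustrative; recovery succeeds whenever X has full column rank.
import numpy as np

rng = np.random.default_rng(1)
n = 8
secret = rng.integers(0, 2, n)                 # hidden parity s
X = rng.integers(0, 2, (4 * n, n))             # random examples
y = X @ secret % 2                             # noiseless labels <x, s> mod 2

A = np.concatenate([X, y[:, None]], axis=1)    # augmented matrix [X | y]
row = 0
for col in range(n):
    pivots = np.nonzero(A[row:, col])[0]
    if len(pivots) == 0:
        continue
    A[[row, row + pivots[0]]] = A[[row + pivots[0], row]]  # swap pivot up
    for r in range(A.shape[0]):                # eliminate col everywhere else
        if r != row and A[r, col]:
            A[r] = (A[r] + A[row]) % 2
    row += 1

recovered = A[:n, n]                           # RREF leaves [I | s] on top
print(np.array_equal(recovered, secret))       # True for this instance
```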

The Polynomial Method is Universal for Distribution-Free Correlational SQ Learning

no code implementations • 22 Oct 2020 • Aravind Gollakota, Sushrut Karmalkar, Adam Klivans

Generalizing a beautiful work of Malach and Shalev-Shwartz (2022) that gave tight correlational SQ (CSQ) lower bounds for learning DNF formulas, we give new proofs that lower bounds on the threshold or approximate degree of any function class directly imply CSQ lower bounds for PAC or agnostic learning, respectively.
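
For orientation, a correlational SQ restricts the general SQ query to a correlation with the label. In our notation (an illustrative paraphrase, not the paper's statement):

```latex
% Illustrative statement of the CSQ interface; notation is ours.
\[
\text{The learner submits } h : \mathcal{X} \to [-1, 1] \text{ and receives } v
\text{ with } \big|\, v - \mathbb{E}_{(x,y)}[\, h(x)\, y \,] \,\big| \le \tau .
\]
```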

Statistical-Query Lower Bounds via Functional Gradients

no code implementations • NeurIPS 2020 • Surbhi Goel, Aravind Gollakota, Adam Klivans

We give the first statistical-query lower bounds for agnostically learning any non-polynomial activation with respect to Gaussian marginals (e.g., ReLU, sigmoid, sign).
