Search Results for author: Raghu Meka

Found 19 papers, 1 paper with code

Guaranteed Rank Minimization via Singular Value Projection

1 code implementation · NeurIPS 2010 · Raghu Meka, Prateek Jain, Inderjit S. Dhillon

Minimizing the rank of a matrix subject to affine constraints is a fundamental problem with many important applications in machine learning and statistics.

Low-Rank Matrix Completion
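
As a rough illustration of the singular value projection idea behind this paper, the sketch below alternates a gradient step on the observed entries with a projection onto rank-r matrices via a truncated SVD. This is a minimal sketch rather than the paper's implementation; the function name, fixed step size, and iteration count are illustrative assumptions.

```python
import numpy as np

def svp_matrix_completion(M_obs, mask, rank, step=1.0, iters=100):
    """Illustrative singular-value-projection loop for matrix completion.

    M_obs : observed matrix (arbitrary values at unobserved entries)
    mask  : 0/1 array marking the observed entries
    The step size and iteration count are arbitrary choices for this sketch.
    """
    X = np.zeros_like(M_obs, dtype=float)
    for _ in range(iters):
        # Gradient step on the squared error over the observed entries.
        X = X - step * mask * (X - M_obs)
        # Project onto the set of rank-`rank` matrices via a truncated SVD.
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]
    return X
```

In practice the step size and stopping rule would be tuned; the paper's contribution is the analysis of when such projected-gradient iterations provably recover the underlying low-rank matrix.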

Matrix Completion from Power-Law Distributed Samples

no code implementations · NeurIPS 2009 · Raghu Meka, Prateek Jain, Inderjit S. Dhillon

In this paper, we propose a graph theoretic approach to matrix completion that solves the problem for more realistic sampling models.

Low-Rank Matrix Completion

Computational Limits for Matrix Completion

no code implementations · 10 Feb 2014 · Moritz Hardt, Raghu Meka, Prasad Raghavendra, Benjamin Weitz

Matrix Completion is the problem of recovering an unknown real-valued low-rank matrix from a subsample of its entries.

Matrix Completion

Learning Graphical Models Using Multiplicative Weights

no code implementations · 20 Jun 2017 · Adam Klivans, Raghu Meka

Our main application is an algorithm for learning the structure of t-wise MRFs with nearly-optimal sample complexity (up to polynomial losses in necessary terms that depend on the weights) and running time that is $n^{O(t)}$.
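
As a loose illustration of the multiplicative-weights approach in the pairwise (Ising-like) case, the sketch below runs a Hedge-style update to learn a sparse generalized linear predictor of one variable from the rest; coordinates with large recovered weight suggest neighbors in the graph. The function name, hyperparameters, and sigmoid link are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def mw_neighborhood_sketch(X, Y, l1_budget=2.0, beta=0.9, epochs=1):
    """Hedge-style multiplicative-weights sketch for learning a sparse
    predictor with E[y|x] ~ sigmoid(<w, x>) and ||w||_1 <= l1_budget.

    X : array (T, n) with entries in [-1, 1] (the other variables)
    Y : array (T,) in {0, 1} (the target variable, rescaled)
    All hyperparameters here are arbitrary choices for this sketch.
    """
    T, n = X.shape
    # Double the coordinates so learned weights can take either sign.
    Z = np.hstack([X, -X])                    # shape (T, 2n)
    p = np.full(2 * n, 1.0 / (2 * n))         # Hedge distribution over coords
    for _ in range(epochs):
        for t in range(T):
            pred = 1.0 / (1.0 + np.exp(-l1_budget * (p @ Z[t])))
            # Per-coordinate losses in [0, 1], then multiplicative update.
            loss = 0.5 * (1.0 + (pred - Y[t]) * Z[t])
            p = p * (beta ** loss)
            p /= p.sum()
    w = l1_budget * (p[:n] - p[n:])           # fold the doubled coordinates
    return w                                  # large |w_i| suggest neighbors
```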

Learning One Convolutional Layer with Overlapping Patches

no code implementations · ICML 2018 · Surbhi Goel, Adam Klivans, Raghu Meka

We give the first provably efficient algorithm for learning a one hidden layer convolutional network with respect to a general class of (potentially overlapping) patches.

Efficient Algorithms for Outlier-Robust Regression

no code implementations · 8 Mar 2018 · Adam Klivans, Pravesh K. Kothari, Raghu Meka

We give the first polynomial-time algorithm for performing linear or polynomial regression resilient to adversarial corruptions in both examples and labels.

regression

Learning Some Popular Gaussian Graphical Models without Condition Number Bounds

no code implementations · NeurIPS 2020 · Jonathan Kelner, Frederic Koehler, Raghu Meka, Ankur Moitra

While there are a variety of algorithms (e.g., Graphical Lasso, CLIME) that provably recover the graph structure with a logarithmic number of samples, they assume various conditions that require the precision matrix to be in some sense well-conditioned.

Learning Polynomials of Few Relevant Dimensions

no code implementations · 28 Apr 2020 · Sitan Chen, Raghu Meka

We give an algorithm that learns the polynomial within accuracy $\epsilon$ with sample complexity that is roughly $N = O_{r, d}(n \log^2(1/\epsilon) (\log n)^d)$ and runtime $O_{r, d}(N n^2)$.

Retrieval

Learning Deep ReLU Networks Is Fixed-Parameter Tractable

no code implementations · 28 Sep 2020 · Sitan Chen, Adam R. Klivans, Raghu Meka

These results provably cannot be obtained using gradient-based methods and give the first example of a class of efficiently learnable neural networks that gradient descent will fail to learn.

On the Power of Preconditioning in Sparse Linear Regression

no code implementations · 17 Jun 2021 · Jonathan Kelner, Frederic Koehler, Raghu Meka, Dhruv Rohatgi

First, we show that the preconditioned Lasso can solve a large class of sparse linear regression problems nearly optimally: it succeeds whenever the dependency structure of the covariates, in the sense of the Markov property, has low treewidth -- even if the covariance matrix $\Sigma$ is highly ill-conditioned.

regression
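
A minimal sketch of the "precondition, then run Lasso" recipe discussed here, assuming scikit-learn is available: multiply the design matrix by a preconditioner S and solve an ordinary Lasso in the new coordinates. The choice of S is left to the caller and the names are illustrative; the paper constructs preconditioners suited to low-treewidth covariate structure.

```python
import numpy as np
from sklearn.linear_model import Lasso

def preconditioned_lasso(X, y, S, alpha=0.1):
    """Sketch of preconditioned Lasso: fit Lasso on X @ S, then map the
    coefficients back to the original coordinates.  S is a user-supplied
    (n x n) preconditioner; alpha is an arbitrary regularization choice."""
    Xp = X @ S                                   # change of coordinates
    model = Lasso(alpha=alpha, fit_intercept=False).fit(Xp, y)
    return S @ model.coef_                       # coefficients for the original X
```

With S equal to the identity this reduces to ordinary Lasso; the point of the paper is that a well-chosen S can make Lasso succeed even when the raw covariance is highly ill-conditioned.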

Efficiently Learning Any One Hidden Layer ReLU Network From Queries

no code implementations · 8 Nov 2021 · Sitan Chen, Adam R Klivans, Raghu Meka

In this work we give the first polynomial-time algorithm for learning arbitrary one hidden layer neural networks given black-box query access to the network.

Model extraction

Efficiently Learning One Hidden Layer ReLU Networks From Queries

no code implementations · NeurIPS 2021 · Sitan Chen, Adam Klivans, Raghu Meka

While the problem of PAC learning neural networks from samples has received considerable attention in recent years, in certain settings like model extraction attacks, it is reasonable to imagine having more than just the ability to observe random labeled examples.

Model extraction · PAC learning

Minimax Optimality (Probably) Doesn't Imply Distribution Learning for GANs

no code implementations · ICLR 2022 · Sitan Chen, Jerry Li, Yuanzhi Li, Raghu Meka

Arguably the most fundamental question in the theory of generative adversarial networks (GANs) is to understand to what extent GANs can actually learn the underlying distribution.

Hardness of Noise-Free Learning for Two-Hidden-Layer Neural Networks

no code implementations · 10 Feb 2022 · Sitan Chen, Aravind Gollakota, Adam R. Klivans, Raghu Meka

We give superpolynomial statistical query (SQ) lower bounds for learning two-hidden-layer ReLU networks with respect to Gaussian inputs in the standard (noise-free) model.

PAC learning

Learning Narrow One-Hidden-Layer ReLU Networks

no code implementations · 20 Apr 2023 · Sitan Chen, Zehao Dou, Surbhi Goel, Adam R Klivans, Raghu Meka

We consider the well-studied problem of learning a linear combination of $k$ ReLU activations with respect to a Gaussian distribution on inputs in $d$ dimensions.

On User-Level Private Convex Optimization

no code implementations · 8 May 2023 · Badih Ghazi, Pritish Kamath, Ravi Kumar, Raghu Meka, Pasin Manurangsi, Chiyuan Zhang

We introduce a new mechanism for stochastic convex optimization (SCO) with user-level differential privacy guarantees.

Simple Mechanisms for Representing, Indexing and Manipulating Concepts

no code implementations · 18 Oct 2023 · Yuanzhi Li, Raghu Meka, Rina Panigrahy, Kulin Shah

Deep networks typically learn concepts via classifiers, which involves setting up a model and training it via gradient descent to fit the concept-labeled data.

Lasso with Latents: Efficient Estimation, Covariate Rescaling, and Computational-Statistical Gaps

no code implementations · 23 Feb 2024 · Jonathan Kelner, Frederic Koehler, Raghu Meka, Dhruv Rohatgi

It is well-known that the statistical performance of Lasso can suffer significantly when the covariates of interest have strong correlations.

regression
