Search Results for author: Aparna Gupte

Found 4 papers, 0 papers with code

Sparse Linear Regression and Lattice Problems

no code implementations · 22 Feb 2024 · Aparna Gupte, Neekon Vafa, Vinod Vaikuntanathan

Furthermore, for well-conditioned (essentially) isotropic Gaussian design matrices, where Lasso is known to behave well in the identifiable regime, we show hardness of outputting any good solution in the unidentifiable regime where there are many solutions, assuming the worst-case hardness of standard and well-studied lattice problems.
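
The identifiable regime that this result contrasts against can be made concrete with a small experiment. The following is a minimal sketch, not taken from the paper: it plants a $k$-sparse solution under an isotropic Gaussian design with many more observations than nonzeros, where Lasso typically recovers the support. The dimensions and the regularization strength `alpha` are illustrative choices.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Identifiable regime: far more observations (M) than nonzeros (k),
# so the planted k-sparse solution is essentially unique.
M, N, k = 200, 50, 5
A = rng.standard_normal((M, N)) / np.sqrt(M)  # isotropic Gaussian design
x_star = np.zeros(N)
support = rng.choice(N, size=k, replace=False)
x_star[support] = rng.standard_normal(k)
b = A @ x_star

# Lasso with a small penalty; in this regime it typically finds the support.
x_hat = Lasso(alpha=1e-3, max_iter=100_000).fit(A, b).coef_
recovered = np.flatnonzero(np.abs(x_hat) > 1e-2)
print("true support:", sorted(support), "recovered:", sorted(recovered))
```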

Regression

Characterizing the Implicit Bias of Regularized SGD in Rank Minimization

no code implementations · 12 Jun 2022 · Tomer Galanti, Zachary S. Siegel, Aparna Gupte, Tomaso Poggio

We study the bias of Stochastic Gradient Descent (SGD) to learn low-rank weight matrices when training deep neural networks.
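
As a hedged illustration of the phenomenon under study (not the paper's experimental setup), one can train a small network with SGD plus weight decay and inspect the singular values of its weight matrices. The architecture, learning rate, weight decay, and rank threshold below are arbitrary choices.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Tiny network trained with SGD + weight decay (L2 regularization);
# we then inspect how quickly the weight matrices' singular values decay.
model = nn.Sequential(nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 32))
opt = torch.optim.SGD(model.parameters(), lr=0.05, weight_decay=1e-2)

X = torch.randn(512, 32)
Y = X @ torch.randn(32, 4) @ torch.randn(4, 32)  # low-rank target map

for _ in range(2000):
    opt.zero_grad()
    loss = ((model(X) - Y) ** 2).mean()
    loss.backward()
    opt.step()

for name, p in model.named_parameters():
    if p.ndim == 2:  # weight matrices only, skip bias vectors
        s = torch.linalg.svdvals(p.detach())
        eff_rank = int((s > 1e-2 * s[0]).sum())
        print(f"{name}: numerical rank ~ {eff_rank} of {min(p.shape)}")
```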

Continuous LWE is as Hard as LWE & Applications to Learning Gaussian Mixtures

no code implementations · 6 Apr 2022 · Aparna Gupte, Neekon Vafa, Vinod Vaikuntanathan

Under the (conservative) polynomial hardness of LWE, we show hardness of density estimation for $n^{\epsilon}$ Gaussians for any constant $\epsilon > 0$, which improves on Bruna, Regev, Song and Tang (STOC 2021), who show hardness for at least $\sqrt{n}$ Gaussians under polynomial (quantum) hardness assumptions.
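
For context, the computational task in question is density estimation for mixtures of Gaussians. The sketch below is only illustrative and uses toy parameters far from the hard regime: it samples from a mixture of well-separated spherical Gaussians and fits a mixture model by EM.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Sample from a mixture of g spherical Gaussians in dimension n.
# The hardness result concerns the regime where g grows as n^eps;
# these toy parameters are chosen only so the example runs instantly.
n, g = 16, 4
means = rng.standard_normal((g, n)) * 5.0  # well-separated centers
labels = rng.integers(g, size=2000)
samples = means[labels] + rng.standard_normal((2000, n))

gmm = GaussianMixture(n_components=g, covariance_type="spherical").fit(samples)
print("average log-likelihood:", gmm.score(samples))
```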

Density Estimation

The Fine-Grained Hardness of Sparse Linear Regression

no code implementations · 6 Jun 2021 · Aparna Gupte, Vinod Vaikuntanathan

Sparse linear regression is the well-studied inference problem where one is given a design matrix $\mathbf{A} \in \mathbb{R}^{M\times N}$ and a response vector $\mathbf{b} \in \mathbb{R}^M$, and the goal is to find a solution $\mathbf{x} \in \mathbb{R}^{N}$ which is $k$-sparse (that is, it has at most $k$ non-zero coordinates) and minimizes the prediction error $\|\mathbf{A} \mathbf{x} - \mathbf{b}\|_2$.
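
A brute-force baseline makes the problem statement concrete: enumerate all $\binom{N}{k}$ supports and solve a least-squares problem on each. The helper `best_k_sparse` below is a hypothetical name for this sketch, not code from the paper; its running time, exponential in $k$, is exactly the kind of cost that fine-grained hardness results address.

```python
import itertools
import numpy as np

def best_k_sparse(A, b, k):
    """Exhaustive search for a k-sparse x minimizing ||Ax - b||_2.

    Performs roughly (N choose k) least-squares solves, one per support.
    """
    M, N = A.shape
    best_err, best_x = np.inf, None
    for support in itertools.combinations(range(N), k):
        sub = A[:, support]
        coeffs, *_ = np.linalg.lstsq(sub, b, rcond=None)
        err = np.linalg.norm(sub @ coeffs - b)
        if err < best_err:
            x = np.zeros(N)
            x[list(support)] = coeffs
            best_err, best_x = err, x
    return best_x, best_err

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [1.0, -2.0]
x_hat, err = best_k_sparse(A, A @ x_true, k=2)
print("residual:", err, "support:", np.flatnonzero(x_hat))
```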

Regression
