Search Results for author: Vinod Vaikuntanathan

Found 8 papers, 2 papers with code

Sparse Linear Regression and Lattice Problems

no code implementations • 22 Feb 2024 • Aparna Gupte, Neekon Vafa, Vinod Vaikuntanathan

Furthermore, for well-conditioned (essentially) isotropic Gaussian design matrices, where Lasso is known to behave well in the identifiable regime, we show that it is hard to output any good solution in the unidentifiable regime, where many solutions exist, assuming the worst-case hardness of standard and well-studied lattice problems.

regression
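
For context, a minimal sketch of the Lasso setting described in the abstract above, using scikit-learn on a well-conditioned isotropic Gaussian design; the dimensions, sparsity, noise scale, and regularization strength below are illustrative assumptions, not values from the paper:

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    M, N, k = 200, 500, 5          # samples, dimension, sparsity (illustrative)

    # Well-conditioned isotropic Gaussian design: i.i.d. N(0, 1) entries.
    A = rng.standard_normal((M, N))

    # Hidden k-sparse signal and noisy responses b = A x* + noise.
    x_star = np.zeros(N)
    x_star[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
    b = A @ x_star + 0.1 * rng.standard_normal(M)

    # In the identifiable regime (M large enough relative to k log N),
    # Lasso is known to recover the support well.
    x_hat = Lasso(alpha=0.1).fit(A, b).coef_
    print("recovered support:", np.nonzero(x_hat)[0])
    print("true support:     ", np.nonzero(x_star)[0])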

PEOPL: Characterizing Privately Encoded Open Datasets with Public Labels

no code implementations • 31 Mar 2023 • Homa Esfahanizadeh, Adam Yala, Rafael G. L. D'Oliveira, Andrea J. D. Jaba, Victor Quach, Ken R. Duffy, Tommi S. Jaakkola, Vinod Vaikuntanathan, Manya Ghobadi, Regina Barzilay, Muriel Médard

Allowing organizations to share their data for training of machine learning (ML) models without unintended information leakage is an open problem in practice.

Planting Undetectable Backdoors in Machine Learning Models

no code implementations • 14 Apr 2022 • Shafi Goldwasser, Michael P. Kim, Vinod Vaikuntanathan, Or Zamir

Second, we demonstrate how to insert undetectable backdoors in models trained using the Random Fourier Features (RFF) learning paradigm or in Random ReLU networks.

Adversarial Robustness, BIG-bench Machine Learning
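
For context, a minimal sketch of the Random Fourier Features (RFF) learning paradigm the abstract refers to (the ordinary, honest training pipeline only; the paper's backdoor construction is not reproduced here). The kernel bandwidth, feature count, and toy labels are illustrative assumptions:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n, d, D = 500, 10, 256         # samples, input dim, feature count (illustrative)
    X = rng.standard_normal((n, d))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)   # toy labels

    # Random Fourier Features approximating an RBF kernel (Rahimi-Recht):
    # phi(x) = sqrt(2/D) * cos(W^T x + c), W ~ N(0, 1/sigma^2), c ~ U[0, 2*pi].
    sigma = 1.0
    W = rng.standard_normal((d, D)) / sigma
    c = rng.uniform(0, 2 * np.pi, D)
    phi = lambda X: np.sqrt(2.0 / D) * np.cos(X @ W + c)

    # Train a simple linear classifier on top of the random features.
    clf = LogisticRegression(max_iter=1000).fit(phi(X), y)
    print("train accuracy:", clf.score(phi(X), y))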

Continuous LWE is as Hard as LWE & Applications to Learning Gaussian Mixtures

no code implementations • 6 Apr 2022 • Aparna Gupte, Neekon Vafa, Vinod Vaikuntanathan

Under the (conservative) polynomial hardness of LWE, we show hardness of density estimation for $n^{\epsilon}$ Gaussians for any constant $\epsilon > 0$, which improves on Bruna, Regev, Song and Tang (STOC 2021), who show hardness for at least $\sqrt{n}$ Gaussians under polynomial (quantum) hardness assumptions.

Density Estimation
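
For context, a minimal sketch of the density estimation task whose hardness the abstract establishes, using scikit-learn's GaussianMixture; the dimension and number of components are illustrative (the paper's hard regime has $n^{\epsilon}$ Gaussians in dimension $n$):

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    n, k = 20, 4                   # dimension, number of Gaussians (illustrative)

    # Sample from a mixture of k spherical Gaussians with random means.
    means = 3.0 * rng.standard_normal((k, n))
    labels = rng.integers(k, size=2000)
    X = means[labels] + rng.standard_normal((2000, n))

    # Density estimation: fit a mixture and evaluate average log-likelihood.
    # The hardness result says this becomes intractable in general once the
    # number of Gaussians grows polynomially with n (under LWE-type assumptions).
    gmm = GaussianMixture(n_components=k, covariance_type="spherical").fit(X)
    print("avg log-likelihood:", gmm.score(X))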

The Fine-Grained Hardness of Sparse Linear Regression

no code implementations • 6 Jun 2021 • Aparna Gupte, Vinod Vaikuntanathan

Sparse linear regression is the well-studied inference problem where one is given a design matrix $\mathbf{A} \in \mathbb{R}^{M\times N}$ and a response vector $\mathbf{b} \in \mathbb{R}^M$, and the goal is to find a solution $\mathbf{x} \in \mathbb{R}^{N}$ which is $k$-sparse (that is, it has at most $k$ non-zero coordinates) and minimizes the prediction error $\|\mathbf{A} \mathbf{x} - \mathbf{b}\|_2$.

regression
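
For concreteness, a minimal sketch of the problem as defined above: exact $k$-sparse regression by enumerating all $\binom{N}{k}$ supports and solving least squares on each. This brute-force enumeration is the exponential-time baseline that fine-grained hardness results of this kind suggest cannot be substantially improved; the instance sizes are illustrative:

    from itertools import combinations
    import numpy as np

    def sparse_regression_brute_force(A, b, k):
        """Return the k-sparse x minimizing ||Ax - b||_2 by support enumeration."""
        M, N = A.shape
        best_x, best_err = None, np.inf
        for support in combinations(range(N), k):
            cols = list(support)
            # Least squares restricted to the chosen k columns.
            coef, *_ = np.linalg.lstsq(A[:, cols], b, rcond=None)
            err = np.linalg.norm(A[:, cols] @ coef - b)
            if err < best_err:
                best_x = np.zeros(N)
                best_x[cols] = coef
                best_err = err
        return best_x, best_err

    rng = np.random.default_rng(0)
    A, b = rng.standard_normal((30, 12)), rng.standard_normal(30)
    x, err = sparse_regression_brute_force(A, b, k=3)
    print("support:", np.nonzero(x)[0], "error:", err)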

Computational Limitations in Robust Classification and Win-Win Results

no code implementations • 4 Feb 2019 • Akshay Degwekar, Preetum Nakkiran, Vinod Vaikuntanathan

We continue the study of statistical/computational tradeoffs in learning robust classifiers, following the recent work of Bubeck, Lee, Price and Razenshteyn, who exhibited classification tasks where (a) an efficient robust classifier exists in the small-perturbation regime; (b) a non-robust classifier can be learned efficiently; but (c) learning a robust classifier is computationally hard, assuming the hardness of factoring large numbers.

Classification, General Classification, +1
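
For concreteness, a minimal sketch of the robustness notion in play: a point is robustly classified only if its label is preserved under every perturbation of bounded norm. The check below uses random $\ell_\infty$ perturbations as an optimistic proxy (an illustrative assumption; the paper's notion quantifies over all perturbations), and the classifier and radius are toy choices:

    import numpy as np

    def empirical_robust_accuracy(predict, X, y, eps, trials=100, rng=None):
        """Fraction of points whose label survives `trials` random l-infinity
        perturbations of radius eps. This upper-bounds true robust accuracy,
        which requires correctness under *all* such perturbations."""
        if rng is None:
            rng = np.random.default_rng(0)
        robust = 0
        for x, label in zip(X, y):
            deltas = rng.uniform(-eps, eps, size=(trials, x.size))
            if np.all(predict(x + deltas) == label):
                robust += 1
        return robust / len(X)

    # Toy linear classifier on 2D points; eps is illustrative.
    predict = lambda X: (np.atleast_2d(X) @ np.array([1.0, -1.0]) > 0).astype(int)
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 2))
    y = predict(X)
    print("robust accuracy at eps=0.1:", empirical_robust_accuracy(predict, X, y, 0.1))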
