Search Results for author: Yaniv Plan

Found 13 papers, 2 papers with code

Model-adapted Fourier sampling for generative compressed sensing

no code implementations 8 Oct 2023 Aaron Berk, Simone Brugiapaglia, Yaniv Plan, Matthew Scott, Xia Sheng, Ozgur Yilmaz

We study generative compressed sensing when the measurement matrix is randomly subsampled from a unitary matrix (with the DFT as an important special case).
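
To illustrate the measurement model (a minimal sketch of my own, not the authors' code; the sizes n and m are arbitrary toy values), the operator is formed by keeping a few randomly chosen rows of the unitary DFT matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 256, 64   # ambient dimension and number of measurements (toy sizes)

# Unitary DFT matrix: F[k, j] = exp(-2*pi*i*j*k/n) / sqrt(n).
F = np.fft.fft(np.eye(n)) / np.sqrt(n)

# Keep m rows chosen uniformly at random; the paper's point is that a
# model-adapted (non-uniform) row distribution can do better for GNN priors.
rows = rng.choice(n, size=m, replace=False)
A = F[rows, :]

x = rng.standard_normal(n)   # stand-in signal; in the paper x = G(z) for a GNN G
y = A @ x                    # subsampled Fourier measurements
```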

A coherence parameter characterizing generative compressed sensing with Fourier measurements

no code implementations 19 Jul 2022 Aaron Berk, Simone Brugiapaglia, Babhru Joshi, Yaniv Plan, Matthew Scott, Özgür Yılmaz

In Bora et al. (2017), a mathematical framework was developed for compressed sensing guarantees in the setting where the measurement matrix is Gaussian and the signal structure is the range of a generative neural network (GNN).
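
In that framework, recovery is typically posed as minimizing the measurement misfit over the network's latent code. A minimal sketch under my own assumptions (a toy two-layer ReLU network stands in for the GNN; all sizes and the step size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n0, n1, n, m = 10, 40, 100, 50   # latent, hidden, signal, measurement dims

# Toy two-layer ReLU network standing in for the GNN G.
W1 = rng.standard_normal((n1, n0)) / np.sqrt(n0)
W2 = rng.standard_normal((n, n1)) / np.sqrt(n1)
G = lambda z: W2 @ np.maximum(W1 @ z, 0.0)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian measurement matrix
z_true = rng.standard_normal(n0)
y = A @ G(z_true)

# Recover the latent code by subgradient descent on ||A G(z) - y||^2 / 2.
z = rng.standard_normal(n0)   # random init (z = 0 is a dead point for ReLU)
for _ in range(5000):
    h = np.maximum(W1 @ z, 0.0)
    r = A @ (W2 @ h) - y
    z -= 0.05 * (W1.T @ ((W2.T @ (A.T @ r)) * (h > 0)))   # chain rule through ReLU
```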

PLUGIn: A simple algorithm for inverting generative models with recovery guarantees

no code implementations NeurIPS 2021 Babhru Joshi, Xiaowei Li, Yaniv Plan, Ozgur Yilmaz

We prove that, when weights are Gaussian and layer widths $n_i \gtrsim 5^i n_0$ (up to log factors), the algorithm converges geometrically to a neighbourhood of $x$ with high probability.
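
Here "converges geometrically to a neighbourhood" means the error contracts by a fixed factor per iteration until it reaches a noise-determined radius; schematically (my notation, not the paper's constants),

$$\|x_t - x\| \le \rho^t \, \|x_0 - x\| + C\varepsilon, \qquad 0 < \rho < 1,$$

where $\rho$ is the contraction factor and $C\varepsilon$ is the radius of the limiting neighbourhood.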

Beyond Independent Measurements: General Compressed Sensing with GNN Application

no code implementations NeurIPS Workshop Deep_Invers 2021 Alireza Naderi, Yaniv Plan

When the structure is given as a possibly non-convex cone $T \subset \mathbb{R}^{n}$, an approximate empirical risk minimizer is proven to be a robust estimator if the effective number of measurements is sufficient, even in the presence of a model mismatch.
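
For orientation, the estimator in question is of constrained empirical-risk-minimization type; schematically (a standard formulation in this line of work, and my notation rather than the paper's exact setup),

$$\hat{x} \in \operatorname*{arg\,min}_{x' \in T} \|y - A x'\|_2,$$

where the "effective number of measurements" compares $m$ against a complexity measure of $T$, typically its (squared) Gaussian width.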

PLUGIn-CS: A simple algorithm for compressive sensing with generative prior

no code implementations NeurIPS Workshop Deep_Invers 2021 Babhru Joshi, Xiaowei Li, Yaniv Plan, Ozgur Yilmaz

After a sufficient number of iterations, the estimation errors for both $x$ and $\mathcal{G}(x)$ are at most on the order of $\sqrt{4^d n_0 / m}\, \|\epsilon\|$.

Compressive Sensing

NBIHT: An Efficient Algorithm for 1-bit Compressed Sensing with Optimal Error Decay Rate

no code implementations 23 Dec 2020 Michael P. Friedlander, Halyun Jeong, Yaniv Plan, Ozgur Yilmaz

The Binary Iterative Hard Thresholding (BIHT) algorithm is a popular reconstruction method for one-bit compressed sensing due to its simplicity and fast empirical convergence.
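
For context, one common form of the BIHT iteration looks as follows (a minimal sketch with the $1/m$ step size; NBIHT, the paper's algorithm, modifies this update to achieve the optimal error decay rate, so this is the baseline, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, s = 200, 800, 10   # dimension, one-bit measurements, sparsity (toy sizes)

A = rng.standard_normal((m, n))
x = np.zeros(n); x[:s] = rng.standard_normal(s)
x /= np.linalg.norm(x)   # one-bit measurements lose scale; fix ||x|| = 1
y = np.sign(A @ x)       # one-bit (sign) measurements

def hard_threshold(v, s):
    """Keep the s largest-magnitude entries of v, zero out the rest."""
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-s:]
    out[keep] = v[keep]
    return out

z = np.zeros(n)
for _ in range(100):
    z = hard_threshold(z + (1.0 / m) * (A.T @ (y - np.sign(A @ z))), s)
z /= max(np.linalg.norm(z), 1e-12)   # renormalize to the unit sphere
```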

Information Theory, Numerical Analysis (MSC 94-XX)

Sub-Gaussian Matrices on Sets: Optimal Tail Dependence and Applications

no code implementations 28 Jan 2020 Halyun Jeong, Xiaowei Li, Yaniv Plan, Özgür Yılmaz

In many applications, e.g., compressed sensing, this norm may be large, or even grow with dimension, and thus it is important to characterize this dependence.

Tight Analyses for Non-Smooth Stochastic Gradient Descent

no code implementations 13 Dec 2018 Nicholas J. A. Harvey, Christopher Liaw, Yaniv Plan, Sikander Randhawa

We prove that after $T$ steps of stochastic gradient descent, the error of the final iterate is $O(\log(T)/T)$ with high probability.
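
This is the strongly convex, non-smooth regime with the classic $1/(\mu t)$ step schedule. A one-dimensional sketch (my toy objective, chosen only to be strongly convex and non-smooth at its minimizer; not an example from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
mu, T = 1.0, 100_000

# f(x) = |x| + (mu/2) x^2 is mu-strongly convex and non-smooth at the
# minimizer x* = 0; the algorithm only sees noisy subgradients.
x = 5.0
for t in range(1, T + 1):
    g = np.sign(x) + mu * x + rng.standard_normal()   # stochastic subgradient
    x -= g / (mu * t)                                 # classic 1/(mu t) steps

print(abs(x))   # final-iterate error; the paper shows O(log(T)/T) w.h.p.
```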

Nearly tight sample complexity bounds for learning mixtures of Gaussians via sample compression schemes

no code implementations NeurIPS 2018 Hassan Ashtiani, Shai Ben-David, Nicholas Harvey, Christopher Liaw, Abbas Mehrabian, Yaniv Plan

We prove that $\Theta(k d^2 / \varepsilon^2)$ samples are necessary and sufficient for learning a mixture of $k$ Gaussians in $\mathbb{R}^d$, up to error $\varepsilon$ in total variation distance.

Learning tensors from partial binary measurements

1 code implementation 31 Mar 2018 Navid Ghadermarzy, Yaniv Plan, Ozgur Yilmaz

In this paper we generalize the 1-bit matrix completion problem to higher order tensors.
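
The observation model being generalized looks roughly like this (my sketch of a standard 1-bit model with a logistic link and an order-3 CP-rank-$r$ ground truth; the paper's exact setup may differ):

```python
import numpy as np

rng = np.random.default_rng(4)
n, r = 20, 2   # side length and CP rank of an order-3 tensor (toy sizes)

# Low-rank ground truth via a rank-r CP construction.
U, V, W = (rng.standard_normal((n, r)) for _ in range(3))
X = np.einsum('ir,jr,kr->ijk', U, V, W)
X /= np.abs(X).max()   # keep entries bounded, as 1-bit models require

# Partial one-bit observations: each revealed entry is a coin flip with
# P(Y = +1) = f(X_ijk), using the logistic link f(t) = 1 / (1 + exp(-t)).
mask = rng.random(X.shape) < 0.3   # observe ~30% of the entries
probs = 1.0 / (1.0 + np.exp(-X))
Y = np.where(rng.random(X.shape) < probs, 1.0, -1.0) * mask
```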

Statistics Theory, Information Theory, Optimization and Control (MSC 62B10, 94A17, 15A69, 62D05; ACM H.3.3; I.2.6)

Near-optimal sample complexity for convex tensor completion

1 code implementation 14 Nov 2017 Navid Ghadermarzy, Yaniv Plan, Özgür Yılmaz

In this paper, we show that by using an atomic norm whose atoms are rank-$1$ sign tensors, one can obtain a sample complexity of $O(dN)$.
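
Schematically, for an order-$d$ tensor with side length $N$, such an atomic norm takes the form (my notation; see the paper for the exact definition used there):

$$\|X\|_{\mathcal{A}} = \inf\Big\{ \sum_a |c_a| \;:\; X = \sum_a c_a\, u_1^{(a)} \otimes \cdots \otimes u_d^{(a)},\ \ u_i^{(a)} \in \{\pm 1\}^{N} \Big\}.$$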

Near-optimal Sample Complexity Bounds for Robust Learning of Gaussian Mixtures via Compression Schemes

no code implementations 14 Oct 2017 Hassan Ashtiani, Shai Ben-David, Nick Harvey, Christopher Liaw, Abbas Mehrabian, Yaniv Plan

We prove that $\tilde{\Theta}(k d^2 / \varepsilon^2)$ samples are necessary and sufficient for learning a mixture of $k$ Gaussians in $\mathbb{R}^d$, up to error $\varepsilon$ in total variation distance.

Average-case Hardness of RIP Certification

no code implementations NeurIPS 2016 Tengyao Wang, Quentin Berthet, Yaniv Plan

The restricted isometry property (RIP) for design matrices gives guarantees for optimal recovery in sparse linear models.
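
For reference, $A$ satisfies the RIP of order $s$ with constant $\delta$ if

$$(1 - \delta)\,\|x\|_2^2 \;\le\; \|Ax\|_2^2 \;\le\; (1 + \delta)\,\|x\|_2^2 \quad \text{for all } s\text{-sparse } x;$$

the paper shows that certifying this property for a given matrix is hard on average.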
