no code implementations • 8 Oct 2023 • Aaron Berk, Simone Brugiapaglia, Yaniv Plan, Matthew Scott, Xia Sheng, Ozgur Yilmaz
We study generative compressed sensing when the measurement matrix is randomly subsampled from a unitary matrix (with the DFT as an important special case).
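A minimal sketch of such a measurement matrix in NumPy, assuming uniform row subsampling of the unitary DFT; the name `subsampled_dft` and the $\sqrt{n/m}$ rescaling convention are illustrative choices, not necessarily the paper's exact setup:

```python
import numpy as np

def subsampled_dft(n, m, rng=np.random.default_rng(0)):
    """Draw m rows uniformly at random from the n-by-n unitary DFT matrix,
    rescaled so that E[A^* A] = I."""
    F = np.fft.fft(np.eye(n)) / np.sqrt(n)       # unitary DFT: F @ F.conj().T = I
    rows = rng.choice(n, size=m, replace=False)  # uniform subsampling without replacement
    return np.sqrt(n / m) * F[rows]

A = subsampled_dft(n=256, m=64)                  # 64 random rows of the 256-point DFT
```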
no code implementations • 19 Jul 2022 • Aaron Berk, Simone Brugiapaglia, Babhru Joshi, Yaniv Plan, Matthew Scott, Özgür Yılmaz
In Bora et al. (2017), a mathematical framework was developed for compressed sensing guarantees in the setting where the measurement matrix is Gaussian and the signal structure is the range of a generative neural network (GNN).
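A hedged sketch of recovery in that framework: a toy two-layer ReLU generator with fixed Gaussian weights, and plain gradient descent on the empirical risk $\|A G(z) - y\|^2$ over the latent code. All dimensions, the initialization, and the step size below are arbitrary illustration choices, not the authors' method:

```python
import numpy as np

rng = np.random.default_rng(1)
k, n, m = 10, 100, 40                            # latent dim, signal dim, measurements

# Toy generative network G(z) = W2 @ relu(W1 @ z) with fixed Gaussian weights
W1 = rng.standard_normal((50, k)) / np.sqrt(k)
W2 = rng.standard_normal((n, 50)) / np.sqrt(50)
G = lambda z: W2 @ np.maximum(W1 @ z, 0.0)

A = rng.standard_normal((m, n)) / np.sqrt(m)     # Gaussian measurement matrix
z_true = rng.standard_normal(k)
y = A @ G(z_true)                                # noiseless measurements of G(z_true)

# Gradient descent on ||A G(z) - y||^2 over the latent z
z = 0.1 * rng.standard_normal(k)                 # small random init (avoids dead ReLUs)
for _ in range(2000):
    h = W1 @ z
    r = A @ (W2 @ np.maximum(h, 0.0)) - y        # residual A G(z) - y
    grad = W1.T @ ((W2.T @ (A.T @ r)) * (h > 0)) # chain rule through the ReLU
    z -= 0.05 * grad

print(np.linalg.norm(G(z) - G(z_true)))          # signal-space recovery error
```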
no code implementations • NeurIPS 2021 • Babhru Joshi, Xiaowei Li, Yaniv Plan, Ozgur Yilmaz
We prove that, when weights are Gaussian and layer widths $n_i \gtrsim 5^i n_0$ (up to log factors), the algorithm converges geometrically to a neighbourhood of $x$ with high probability.
no code implementations • NeurIPS Workshop Deep_Invers 2021 • Alireza Naderi, Yaniv Plan
When the structure is given as a possibly non-convex cone $T \subset \mathbb{R}^{n}$, an approximate empirical risk minimizer is proven to be a robust estimator if the effective number of measurements is sufficient, even in the presence of a model mismatch.
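One standard way to make "effective number of measurements" concrete is the squared Gaussian width of $T$ intersected with the unit sphere. A small Monte Carlo sketch for the (non-convex) cone of $s$-sparse vectors, where the supremum has a closed form; `sparse_cone_width` is an illustrative name:

```python
import numpy as np

def sparse_cone_width(n, s, trials=2000, rng=np.random.default_rng(2)):
    """Monte Carlo estimate of the Gaussian width of the s-sparse unit vectors
    in R^n: for this cone, the sup over unit-norm s-sparse t of <g, t> equals
    the l2 norm of the s largest-magnitude entries of g."""
    g = np.abs(rng.standard_normal((trials, n)))
    top = np.sort(g, axis=1)[:, -s:]             # s largest |g_i| per trial
    return np.linalg.norm(top, axis=1).mean()

w = sparse_cone_width(n=1000, s=10)
print(w**2)   # effective measurements scale like w^2, on the order of s*log(n/s)
```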
no code implementations • NeurIPS Workshop Deep_Invers 2021 • Babhru Joshi, Xiaowei Li, Yaniv Plan, Ozgur Yilmaz
After a sufficient number of iterations, the estimation errors for both $x$ and $\mathcal{G}(x)$ are at most of the order of $\sqrt{4^d n_0/m}\,\|\epsilon\|$.
no code implementations • 23 Dec 2020 • Michael P. Friedlander, Halyun Jeong, Yaniv Plan, Ozgur Yilmaz
The Binary Iterative Hard Thresholding (BIHT) algorithm is a popular reconstruction method for one-bit compressed sensing due to its simplicity and fast empirical convergence.
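A compact sketch of the BIHT iteration: a gradient-like correction driven by the sign mismatches, followed by hard thresholding to the $s$ largest entries and renormalization (one-bit measurements lose all scale information). The $1/m$ step size is a common choice, assumed here rather than taken from the paper:

```python
import numpy as np

def biht(A, y, s, iters=100, eta=None):
    """Binary Iterative Hard Thresholding for one-bit CS: recover an s-sparse
    unit vector x from y = sign(A x)."""
    m, n = A.shape
    eta = (1.0 / m) if eta is None else eta       # a common step-size choice
    x = np.zeros(n)
    for _ in range(iters):
        x = x + eta * A.T @ (y - np.sign(A @ x))  # correct the sign mismatches
        keep = np.argsort(np.abs(x))[-s:]         # hard-threshold to s entries
        z = np.zeros(n)
        z[keep] = x[keep]
        x = z / max(np.linalg.norm(z), 1e-12)     # renormalize: scale is lost in 1-bit
    return x

rng = np.random.default_rng(3)
m, n, s = 500, 100, 5
x0 = np.zeros(n); x0[:s] = rng.standard_normal(s); x0 /= np.linalg.norm(x0)
A = rng.standard_normal((m, n))
print(np.linalg.norm(biht(A, np.sign(A @ x0), s) - x0))
```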
no code implementations • 28 Jan 2020 • Halyun Jeong, Xiaowei Li, Yaniv Plan, Özgür Yılmaz
In many applications, e.g., compressed sensing, this norm may be large, or even growing with dimension, and thus it is important to characterize this dependence.
no code implementations • 13 Dec 2018 • Nicholas J. A. Harvey, Christopher Liaw, Yaniv Plan, Sikander Randhawa
We prove that after $T$ steps of stochastic gradient descent, the error of the final iterate is $O(\log(T)/T)$ with high probability.
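A quick simulation of that rate, assuming SGD with the standard $1/t$ step size on a $1$-strongly-convex quadratic with additive Gaussian gradient noise; all constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
T, d = 100_000, 10
x_star = rng.standard_normal(d)

x = np.zeros(d)
for t in range(1, T + 1):
    grad = (x - x_star) + rng.standard_normal(d)  # stochastic gradient of 0.5*||x - x*||^2
    x -= grad / t                                 # standard 1/t step for 1-strong convexity

print(np.sum((x - x_star) ** 2), np.log(T) / T)   # final-iterate error vs. the log(T)/T rate
```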
no code implementations • NeurIPS 2018 • Hassan Ashtiani, Shai Ben-David, Nicholas Harvey, Christopher Liaw, Abbas Mehrabian, Yaniv Plan
We prove that $\tilde{\Theta}(k d^2 / \varepsilon^2)$ samples are necessary and sufficient for learning a mixture of $k$ Gaussians in $\mathbb{R}^d$, up to error $\varepsilon$ in total variation distance.
1 code implementation • 31 Mar 2018 • Navid Ghadermarzy, Yaniv Plan, Ozgur Yilmaz
In this paper we generalize the 1-bit matrix completion problem to higher order tensors.
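An illustrative observation model for 1-bit tensor completion, assuming a low-rank order-$3$ ground truth, additive Gaussian dither, and a Bernoulli sampling mask; none of these choices are claimed to match the paper's exact setup:

```python
import numpy as np

rng = np.random.default_rng(5)
N, r = 8, 2                                       # side length and rank (order d = 3)

# Low-rank ground truth: a sum of r rank-1 terms
T = sum(np.einsum('i,j,k->ijk',
                  rng.standard_normal(N),
                  rng.standard_normal(N),
                  rng.standard_normal(N)) for _ in range(r))

# One-bit observations on a random subset of entries: y = sign(T + dither)
mask = rng.random(T.shape) < 0.3                  # observe ~30% of entries
Y = np.where(mask, np.sign(T + rng.standard_normal(T.shape)), 0.0)
```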
1 code implementation • 14 Nov 2017 • Navid Ghadermarzy, Yaniv Plan, Özgür Yılmaz
In this paper, we show that by using an atomic norm whose atoms are rank-$1$ sign tensors, one can obtain a sample complexity of $O(dN)$.
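A rank-$1$ sign tensor atom is simply an outer product of $\pm 1$ vectors; a short sketch for order $3$:

```python
import numpy as np

rng = np.random.default_rng(6)
N = 8
u, v, w = (np.sign(rng.standard_normal(N)) for _ in range(3))
atom = np.einsum('i,j,k->ijk', u, v, w)   # every entry of the atom is +/-1
```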
no code implementations • 14 Oct 2017 • Hassan Ashtiani, Shai Ben-David, Nick Harvey, Christopher Liaw, Abbas Mehrabian, Yaniv Plan
We prove that $\tilde{\Theta}(k d^2 / \varepsilon^2)$ samples are necessary and sufficient for learning a mixture of $k$ Gaussians in $\mathbb{R}^d$, up to error $\varepsilon$ in total variation distance.
no code implementations • NeurIPS 2016 • Tengyao Wang, Quentin Berthet, Yaniv Plan
The restricted isometry property (RIP) for design matrices gives guarantees for optimal recovery in sparse linear models.
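A hedged empirical probe of the RIP, assuming a Gaussian design: sampling random $s$-sparse unit vectors only lower-bounds the true restricted isometry constant (the supremum over all $s$-sparse vectors), but it illustrates the concentration; `rip_ratio` is an illustrative name:

```python
import numpy as np

def rip_ratio(A, s, trials=5000, rng=np.random.default_rng(7)):
    """Empirically probe how far ||A x||^2 strays from ||x||^2 = 1 over random
    s-sparse unit vectors; this only lower-bounds the true RIP constant."""
    m, n = A.shape
    worst = 0.0
    for _ in range(trials):
        idx = rng.choice(n, size=s, replace=False)
        x = np.zeros(n)
        x[idx] = rng.standard_normal(s)
        x /= np.linalg.norm(x)
        worst = max(worst, abs(np.linalg.norm(A @ x) ** 2 - 1.0))
    return worst

A = np.random.default_rng(8).standard_normal((100, 400)) / np.sqrt(100)
print(rip_ratio(A, s=5))
```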