On the Power of Compressed Sensing with Generative Models

ICML 2020  ·  Akshay Kamath, Eric Price, Sushrut Karmalkar

The goal of compressed sensing is to learn a structured signal $x$ from a limited number of noisy linear measurements $y \approx Ax$. In traditional compressed sensing, "structure" is represented by sparsity in some known basis. Inspired by the success of deep learning in modeling images, recent work starting with Bora et al. has instead considered structure to come from a generative model $G: \mathbb{R}^k \to \mathbb{R}^n$. In this paper, we prove results that (i) establish the difficulty of this latter task, showing that existing bounds are tight, and (ii) demonstrate that it generalizes the former. First, we provide a lower bound matching the upper bound of Bora et al. for compressed sensing from $L$-Lipschitz generative models $G$. In particular, there exists such a function for which roughly $\Omega(k \log L)$ linear measurements are required for recovery to be possible. This holds even for the more relaxed goal of \emph{nonuniform} recovery. Second, we show that generative models generalize sparsity as a representation of structure. In particular, we construct a ReLU-based neural network $G: \mathbb{R}^{k} \to \mathbb{R}^n$ with $O(1)$ layers and $O(n)$ activations per layer, such that the range of $G$ contains all $k$-sparse vectors.
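For context, the recovery procedure whose measurement complexity these bounds concern is the one of Bora et al.: given $y \approx Ax$ for a signal $x$ in the range of $G$, estimate $x$ by minimizing the measurement error over the latent variable $z$. The sketch below is a minimal illustration of that setup; the toy two-layer ReLU generator, the dimensions, and the derivative-free optimizer are all illustrative assumptions, not the paper's constructions.

```python
# Hedged sketch: compressed sensing with a generative prior, in the style of
# Bora et al. -- recover x = G(z*) from m noisy linear measurements y ~ A x
# by minimizing ||A G(z) - y||_2^2 over the latent z.
# The generator, dimensions, and optimizer below are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
k, n, m = 5, 200, 40            # latent dim, signal dim, number of measurements

# Toy Lipschitz generator: a fixed random two-layer ReLU network R^k -> R^n.
W1 = rng.normal(size=(64, k))
W2 = rng.normal(size=(n, 64))
G = lambda z: W2 @ np.maximum(W1 @ z, 0.0)

# Ground-truth signal in the range of G, plus noisy Gaussian measurements.
z_true = rng.normal(size=k)
x_true = G(z_true)
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true + 0.01 * rng.normal(size=m)

# Recover by minimizing the measurement loss over z.
# The objective is nonconvex, so use a few random restarts.
loss = lambda z: float(np.sum((A @ G(z) - y) ** 2))
best = min((minimize(loss, rng.normal(size=k), method="Nelder-Mead",
                     options={"maxiter": 5000}) for _ in range(5)),
           key=lambda r: r.fun)
x_hat = G(best.x)
print("relative recovery error:",
      np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

A derivative-free optimizer with random restarts is used here only to keep the sketch dependency-light; in practice the latent objective is typically minimized by gradient descent through the network.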
