Generative Models for Low-Dimensional Video Representation and Compressive Sensing

Generative priors have become highly effective in solving inverse problems, including denoising, inpainting, and reconstruction from few and noisy measurements. With a generative model, we can represent an image with a much lower-dimensional latent code. In the context of compressive sensing, if the unknown image belongs to the range of a pretrained generative network, then we can recover the image by estimating the underlying compact latent code from the available measurements. However, recent studies have revealed that even untrained deep neural networks can serve as a prior for recovering natural images. These approaches update the network weights while keeping the latent codes fixed to reconstruct the target image from the given measurements. In this paper, we optimize over both the network weights and the latent codes to use an untrained generative network as a prior for the video compressive sensing problem. We show that by optimizing over the latent codes, we additionally obtain a concise representation of the frames that retains the structural similarity across video frames. We also apply a low-rank constraint on the latent codes to represent the video sequences in an even lower-dimensional latent space. We empirically show that our proposed methods provide better or comparable accuracy at lower computational complexity than existing methods.
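
The joint optimization described in the abstract can be sketched concretely. Below is a minimal PyTorch sketch, assuming a random Gaussian sensing matrix and a nuclear-norm penalty as a convex surrogate for the low-rank constraint on the latent codes; the generator architecture, dimensions, and all variable names are illustrative placeholders, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

# Hypothetical shapes: T frames of size n, m compressive measurements per
# frame, latent dimension k. All values below are illustrative only.
T, n, m, k = 8, 1024, 256, 32

# A small fully connected decoder standing in for the untrained generator
# G_theta; the paper's actual architecture may differ.
generator = nn.Sequential(
    nn.Linear(k, 256), nn.ReLU(),
    nn.Linear(256, 512), nn.ReLU(),
    nn.Linear(512, n),
)

A = torch.randn(m, n) / m ** 0.5           # random Gaussian sensing matrix
x_true = torch.randn(T, n)                  # placeholder video frames
y = x_true @ A.T                            # measurements y_t = A x_t per frame

Z = torch.randn(T, k, requires_grad=True)   # one latent code per frame

# Optimize network weights AND latent codes jointly, unlike untrained-prior
# methods that keep the latent codes fixed.
opt = torch.optim.Adam(list(generator.parameters()) + [Z], lr=1e-3)

lam = 0.1  # weight of the nuclear-norm penalty (assumed hyperparameter)
for step in range(2000):
    opt.zero_grad()
    x_hat = generator(Z)                    # G_theta(z_t) for all frames
    data_fit = ((x_hat @ A.T - y) ** 2).sum()
    low_rank = torch.linalg.matrix_norm(Z, ord='nuc')  # nuclear norm of Z
    loss = data_fit + lam * low_rank
    loss.backward()
    opt.step()
```

Minimizing the nuclear norm of the latent matrix Z drives the per-frame codes toward a common low-dimensional subspace; this is one standard convex relaxation of an explicit rank constraint, which here exploits the structural similarity across consecutive video frames.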
