Search Results for author: Jay Whang

Found 11 papers, 1 paper with code

Deblurring via Stochastic Refinement

no code implementations • CVPR 2022 • Jay Whang, Mauricio Delbracio, Hossein Talebi, Chitwan Saharia, Alexandros G. Dimakis

Unlike existing techniques, we train a stochastic sampler that refines the output of a deterministic predictor and is capable of producing a diverse set of plausible reconstructions for a given input.

Deblurring • Image Deblurring
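
A minimal sketch of the predict-then-refine pattern the abstract describes, assuming a PyTorch setup; Predictor, Refiner, the residual update, and the noise scale are hypothetical stand-ins for illustration, not the paper's model.

    import torch
    import torch.nn as nn

    class Predictor(nn.Module):
        """Deterministic initial deblurring network (illustrative)."""
        def __init__(self):
            super().__init__()
            self.net = nn.Conv2d(3, 3, 3, padding=1)
        def forward(self, blurry):
            return self.net(blurry)

    class Refiner(nn.Module):
        """Stochastic refinement network conditioned on the blurry input."""
        def __init__(self):
            super().__init__()
            self.net = nn.Conv2d(6, 3, 3, padding=1)
        def forward(self, x, blurry):
            return self.net(torch.cat([x, blurry], dim=1))

    @torch.no_grad()
    def sample(predictor, refiner, blurry, steps=10, noise_scale=0.1):
        # Start from the deterministic estimate, then alternate noise
        # injection and refinement; different noise draws give a diverse
        # set of plausible reconstructions for the same input.
        x = predictor(blurry)
        for _ in range(steps):
            x = x + noise_scale * torch.randn_like(x)
            x = x - refiner(x, blurry)  # refiner predicts a residual
        return x

    x_hat = sample(Predictor(), Refiner(), torch.randn(1, 3, 64, 64))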

Neural Distributed Source Coding

no code implementations • 5 Jun 2021 • Jay Whang, Alliot Nagle, Anish Acharya, Hyeji Kim, Alexandros G. Dimakis

Distributed source coding (DSC) is the task of encoding an input in the absence of correlated side information that is only available to the decoder.
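
A toy sketch of the DSC setup, assuming PyTorch; the module sizes and the correlation model between x and y are made up for illustration. The key constraint is that the encoder never sees y, while the decoder does.

    import torch
    import torch.nn as nn

    encoder = nn.Linear(32, 4)    # compresses x; has no access to y
    decoder = nn.Sequential(      # reconstructs x from the code plus y
        nn.Linear(4 + 32, 64), nn.ReLU(), nn.Linear(64, 32))

    y = torch.randn(8, 32)               # side information, decoder-only
    x = y + 0.1 * torch.randn(8, 32)     # source correlated with y

    code = encoder(x)                    # encoding ignores y entirely
    x_hat = decoder(torch.cat([code, y], dim=1))
    loss = nn.functional.mse_loss(x_hat, x)  # train end to end on reconstruction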

Model-Based Deep Learning

no code implementations • 15 Dec 2020 • Nir Shlezinger, Jay Whang, Yonina C. Eldar, Alexandros G. Dimakis

We are interested in hybrid techniques that combine principled mathematical models with data-driven systems to benefit from the advantages of both approaches.
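
One canonical instance of such a hybrid is deep unfolding, where a classical iterative algorithm fixes the architecture and its parameters are learned from data. Below is a sketch that unrolls a few ISTA-style iterations for sparse recovery; the dimensions, step counts, and initial values are illustrative only.

    import torch
    import torch.nn as nn

    class UnrolledISTA(nn.Module):
        """K ISTA iterations for y = A x with sparse x; the model-based
        structure (gradient step + soft threshold) is fixed, while the
        per-iteration step sizes and thresholds are learned."""
        def __init__(self, A, K=5):
            super().__init__()
            self.A = A
            self.K = K
            self.step = nn.Parameter(torch.full((K,), 0.1))
            self.thresh = nn.Parameter(torch.full((K,), 0.05))

        def forward(self, y):
            x = torch.zeros(y.shape[0], self.A.shape[1])
            for k in range(self.K):
                grad = (x @ self.A.T - y) @ self.A   # gradient of the data fit
                x = x - self.step[k] * grad
                x = torch.sign(x) * torch.clamp(x.abs() - self.thresh[k], min=0.0)
            return x

    A = torch.randn(20, 50) / 20 ** 0.5
    x_hat = UnrolledISTA(A)(torch.randn(8, 20))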

Compressed Sensing with Invertible Generative Models and Dependent Noise

no code implementations • 23 Oct 2020 • Jay Whang, Qi Lei, Alex Dimakis

We study image inverse problems with invertible generative priors, specifically normalizing flow models.

Denoising
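
A rough sketch of recovery with an invertible prior, assuming PyTorch: search the flow's latent space for a z whose image matches the measurements, with the Gaussian latent density acting as the prior. The flow here is a tanh stand-in for a pretrained normalizing flow, and all sizes and weights are illustrative.

    import torch

    def flow(z):                  # stand-in for a pretrained invertible generator
        return torch.tanh(z)

    A = torch.randn(30, 100) / 30 ** 0.5      # measurement matrix
    y = A @ flow(torch.randn(100)) + 0.01 * torch.randn(30)

    z = torch.zeros(100, requires_grad=True)
    opt = torch.optim.Adam([z], lr=0.05)
    for _ in range(500):
        opt.zero_grad()
        data_fit = ((A @ flow(z) - y) ** 2).sum()
        prior = 0.5 * (z ** 2).sum()          # -log p(z) for a Gaussian latent
        (data_fit + 0.01 * prior).backward()
        opt.step()
    x_hat = flow(z.detach())                  # recovered signal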

Approximate Probabilistic Inference with Composed Flows

no code implementations • 28 Sep 2020 • Jay Whang, Erik Lindgren, Alex Dimakis

We study the problem of probabilistic inference on the joint distribution defined by a normalizing flow model.

Variational Inference
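
For context, a flow defines an exact joint density via the change-of-variables formula, log p(x) = log p_Z(f(x)) + log |det df/dx|; the difficulty this line of work targets is that conditionals of that joint are generally intractable. A toy elementwise-invertible map, for illustration only:

    import torch

    def f(x):                          # x -> z, elementwise and invertible
        z = torch.asinh(x)
        # d/dx asinh(x) = 1 / sqrt(1 + x^2)
        log_det = (-0.5 * torch.log1p(x ** 2)).sum(dim=1)
        return z, log_det

    def log_prob(x):                   # exact joint log-density under the flow
        z, log_det = f(x)
        base = torch.distributions.Normal(0.0, 1.0)
        return base.log_prob(z).sum(dim=1) + log_det

    print(log_prob(torch.randn(4, 2)))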

Composing Normalizing Flows for Inverse Problems

no code implementations • 26 Feb 2020 • Jay Whang, Erik M. Lindgren, Alexandros G. Dimakis

We approach this inverse problem as a task of conditional inference on the pre-trained unconditional flow model.

Compressive Sensing • Uncertainty Quantification +1
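
A rough sketch of the composition idea under stated assumptions: a pretrained unconditional flow G maps latents to signals, and a small trainable flow q (here a single affine layer) is composed in front of it so that G(q(eps)) approximately samples the posterior for measurements y. The objective below is a standard reparameterized ELBO; names, sizes, and the noise model are illustrative, not the paper's code.

    import torch

    def G(z):                     # stand-in for a pretrained unconditional flow
        return torch.tanh(z)

    A = torch.randn(10, 50) / 10 ** 0.5
    y = A @ G(torch.randn(50)) + 0.01 * torch.randn(10)
    sigma = 0.1                   # assumed measurement noise scale

    # Trainable affine flow q composed in front of G: z = mu + exp(log_s) * eps.
    mu = torch.zeros(50, requires_grad=True)
    log_s = torch.zeros(50, requires_grad=True)
    opt = torch.optim.Adam([mu, log_s], lr=0.01)
    for _ in range(1000):
        opt.zero_grad()
        z = mu + log_s.exp() * torch.randn(50)
        log_lik = -((A @ G(z) - y) ** 2).sum() / (2 * sigma ** 2)
        log_prior = -0.5 * (z ** 2).sum()   # standard-normal latent of G
        entropy = log_s.sum()               # log|det dq/deps|, up to a constant
        (-(log_lik + log_prior + entropy)).backward()
        opt.step()

    x_post = G(mu + log_s.exp() * torch.randn(50))  # approximate posterior sample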

Training Variational Autoencoders with Buffered Stochastic Variational Inference

no code implementations • 27 Feb 2019 • Rui Shu, Hung H. Bui, Jay Whang, Stefano Ermon

The recognition network in deep latent variable models such as variational autoencoders (VAEs) relies on amortized inference for efficient posterior approximation that can scale up to large datasets.

Variational Inference
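
The general amortize-then-refine pattern behind this line of work, sketched below: the recognition network proposes per-sample variational parameters, which are then refined with a few ELBO gradient steps. This shows plain stochastic variational inference refinement with made-up sizes, not the buffered variant the paper introduces.

    import torch
    import torch.nn as nn

    enc = nn.Linear(20, 2 * 4)   # recognition net: x -> (mu, log_var)
    dec = nn.Linear(4, 20)       # generative net: z -> x

    def elbo(x, mu, log_var):
        z = mu + (0.5 * log_var).exp() * torch.randn_like(mu)  # reparameterize
        rec = -((dec(z) - x) ** 2).sum()                       # reconstruction
        kl = 0.5 * (mu ** 2 + log_var.exp() - log_var - 1).sum()
        return rec - kl

    x = torch.randn(1, 20)
    mu, log_var = enc(x).chunk(2, dim=1)       # amortized initialization

    # Refine the amortized posterior with a few per-sample SVI steps.
    mu = mu.detach().requires_grad_(True)
    log_var = log_var.detach().requires_grad_(True)
    opt = torch.optim.Adam([mu, log_var], lr=0.05)
    for _ in range(20):
        opt.zero_grad()
        (-elbo(x, mu, log_var)).backward()
        opt.step()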
