no code implementations • 5 Oct 2022 • Jonathan Ho, William Chan, Chitwan Saharia, Jay Whang, Ruiqi Gao, Alexey Gritsenko, Diederik P. Kingma, Ben Poole, Mohammad Norouzi, David J. Fleet, Tim Salimans
We present Imagen Video, a text-conditional video generation system based on a cascade of video diffusion models.
Ranked #1 on Video Generation on LAION-400M
4 code implementations • 23 May 2022 • Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S. Sara Mahdavi, Raphael Gontijo Lopes, Tim Salimans, Jonathan Ho, David J. Fleet, Mohammad Norouzi
We present Imagen, a text-to-image diffusion model with an unprecedented degree of photorealism and a deep level of language understanding.
Ranked #17 on Text-to-Image Generation on MS COCO (using extra training data)
no code implementations • CVPR 2022 • Jay Whang, Mauricio Delbracio, Hossein Talebi, Chitwan Saharia, Alexandros G. Dimakis, Peyman Milanfar
Unlike existing techniques, we train a stochastic sampler that refines the output of a deterministic predictor and is capable of producing a diverse set of plausible reconstructions for a given input.
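The predict-then-refine structure described here can be illustrated with a minimal toy sketch. Everything below is a hypothetical stand-in: `predictor` substitutes a simple conditional-mean regression for the paper's deterministic network, and `refiner` substitutes Gaussian residual sampling for the paper's diffusion-based stochastic sampler; only the two-stage structure matches the description.

```python
import numpy as np

rng = np.random.default_rng(3)

def predictor(y):
    # Deterministic stage: a single point estimate per input
    # (toy stand-in: a fixed linear regression toward the mean).
    return 0.5 * y

def refiner(y, x0, n_samples=5):
    # Stochastic stage: sample residual corrections around the
    # deterministic output, yielding multiple plausible
    # reconstructions for the SAME input y.
    # (Toy stand-in for a learned diffusion sampler.)
    return x0 + 0.1 * rng.standard_normal((n_samples,) + np.shape(x0))

y = np.array([1.0, 2.0])     # a degraded observation
x0 = predictor(y)            # one deterministic reconstruction
samples = refiner(y, x0)     # a diverse set of refined reconstructions
print(samples.shape)         # (5, 2): five candidates for one input
```

The point of the structure is that diversity lives entirely in the second stage: the first stage is reproducible, while repeated calls to the refiner give distinct reconstructions.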
no code implementations • 5 Jun 2021 • Jay Whang, Alliot Nagle, Anish Acharya, Hyeji Kim, Alexandros G. Dimakis
Distributed source coding (DSC) is the task of encoding an input without access to correlated side information that is available only to the decoder.
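The classical intuition behind DSC can be sketched with a toy binning (coset) scheme in the Slepian-Wolf spirit; this is a generic illustration of the setting, not the neural method of the paper. The encoder transmits only a coset index and never sees the side information `y`, yet the decoder recovers `x` exactly by combining the coset with `y`. The modulus `M = 8` and noise range are assumptions chosen so that decoding is unambiguous.

```python
import numpy as np

rng = np.random.default_rng(2)

M = 8  # coset modulus; must exceed twice the max |x - y| noise

def encode(x):
    # Encoder: transmit only the coset index (log2(M) bits),
    # with no access to the side information.
    return x % M

def decode(coset, y):
    # Decoder: among integers congruent to `coset` mod M,
    # pick the one closest to the side information y.
    k = round((y - coset) / M)
    return coset + k * M

x = rng.integers(0, 1000, size=100)
y = x + rng.integers(-3, 4, size=100)  # correlated side info, |x - y| <= 3
x_hat = np.array([decode(encode(xi), yi) for xi, yi in zip(x, y)])
print((x_hat == x).all())  # True: exact recovery since M > 2 * max noise
```

Exact recovery holds because the noise magnitude (at most 3) is less than half the coset spacing, so each coset has a unique member near `y`.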
no code implementations • 15 Dec 2020 • Nir Shlezinger, Jay Whang, Yonina C. Eldar, Alexandros G. Dimakis
We are interested in hybrid techniques that combine principled mathematical models with data-driven systems to benefit from the advantages of both approaches.
no code implementations • 23 Oct 2020 • Jay Whang, Qi Lei, Alex Dimakis
We study image inverse problems with invertible generative priors, specifically normalizing flow models.
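The core idea of using an invertible generative prior for an inverse problem can be sketched as MAP estimation over the flow's latent space: recover `z` such that `G(z)` matches the measurements, penalizing `||z||^2` (the negative log-density of a Gaussian base). This is a minimal sketch under stated assumptions: the "flow" below is a toy invertible affine map, not a trained normalizing flow, and the step size, penalty weight, and iteration count are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy invertible "generator": x = G(z) = W @ z + b with invertible W.
# (A real normalizing flow would be a deep invertible network.)
d = 8
W = rng.standard_normal((d, d)) + 3 * np.eye(d)  # well-conditioned
b = rng.standard_normal(d)
G = lambda z: W @ z + b

# Under-determined measurements y = A x* + noise, with m < d.
m = 4
A = rng.standard_normal((m, d))
x_true = G(rng.standard_normal(d))
y = A @ x_true + 0.01 * rng.standard_normal(m)

# MAP over the latent: minimize ||A G(z) - y||^2 + lam * ||z||^2,
# where ||z||^2 plays the role of -log p(z) for a Gaussian base.
lam, lr = 0.1, 2e-4
z = np.zeros(d)
for _ in range(5000):
    r = A @ G(z) - y
    grad = 2 * W.T @ A.T @ r + 2 * lam * z
    z -= lr * grad

x_hat = G(z)
print(np.linalg.norm(A @ x_hat - y))  # measurement residual after refinement
```

Because `G` is invertible, every image has a latent preimage, which is the key property these papers exploit relative to bounded-range GAN priors.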
no code implementations • 28 Sep 2020 • Jay Whang, Erik Lindgren, Alex Dimakis
We study the problem of probabilistic inference on the joint distribution defined by a normalizing flow model.
no code implementations • 18 Mar 2020 • Jay Whang, Qi Lei, Alexandros G. Dimakis
We study image inverse problems with a normalizing flow prior.
no code implementations • 26 Feb 2020 • Jay Whang, Erik M. Lindgren, Alexandros G. Dimakis
We approach this problem as a task of conditional inference on the pre-trained unconditional flow model.
no code implementations • 27 Feb 2019 • Rui Shu, Hung H. Bui, Jay Whang, Stefano Ermon
The recognition network in deep latent variable models such as variational autoencoders (VAEs) relies on amortized inference for efficient posterior approximation that can scale up to large datasets.
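Amortized inference, as contrasted with per-datapoint optimization, can be shown in a few lines: one shared recognition network maps any input directly to approximate posterior parameters, so inference over a whole dataset costs one forward pass per point. The one-layer linear encoder and its random weights below are toy stand-ins for a trained deep encoder.

```python
import numpy as np

rng = np.random.default_rng(1)

# Amortized inference: a single recognition network q(z|x) maps ANY
# input x to the parameters of a Gaussian approximate posterior,
# instead of running a separate optimization per data point.
d_x, d_z = 6, 2
W_mu = rng.standard_normal((d_z, d_x)) * 0.1      # toy encoder weights
W_logvar = rng.standard_normal((d_z, d_x)) * 0.1

def encode(x):
    """Recognition network: x -> (mu, log sigma^2) of q(z|x)."""
    return W_mu @ x, W_logvar @ x

def sample_posterior(x):
    """Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)."""
    mu, logvar = encode(x)
    eps = rng.standard_normal(d_z)
    return mu + np.exp(0.5 * logvar) * eps

# The same network amortizes inference across an entire dataset:
X = rng.standard_normal((1000, d_x))
Z = np.stack([sample_posterior(x) for x in X])
print(Z.shape)  # (1000, 2)
```

The scalability claim in the excerpt follows directly: the cost of inferring posteriors grows only with the number of forward passes, not with per-example optimization loops.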
no code implementations • 1 Jun 2018 • Ramtin Keramati, Jay Whang, Patrick Cho, Emma Brunskill
People seem to build simple models that are easy to learn, in order to support planning and strategic exploration.