no code implementations • 28 Nov 2022 • Sander Dieleman, Laurent Sartran, Arman Roshannai, Nikolay Savinov, Yaroslav Ganin, Pierre H. Richemond, Arnaud Doucet, Robin Strudel, Chris Dyer, Conor Durkan, Curtis Hawthorne, Rémi Leblond, Will Grathwohl, Jonas Adler
Diffusion models have quickly become the go-to paradigm for generative modelling of perceptual signals (such as images and sound) through iterative refinement.
no code implementations • 8 Nov 2022 • Robin Strudel, Corentin Tallec, Florent Altché, Yilun Du, Yaroslav Ganin, Arthur Mensch, Will Grathwohl, Nikolay Savinov, Sander Dieleman, Laurent Sifre, Rémi Leblond
Can continuous diffusion models bring to natural language the same performance breakthrough they delivered for image generation?
A fundamental ability of an intelligent web-based agent is seeking out and acquiring new information.
To obtain an importance sampling estimate of the marginal likelihood, AIS introduces an extended target distribution to reweight the Markov chain proposal.
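The reweighting idea can be sketched numerically. Below is a minimal annealed importance sampling (AIS) run on a toy 1-D problem where the true normalizing constant is known; the geometric annealing schedule, Metropolis kernel, step size, and particle count are illustrative choices, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_prior(x):
    # normalized N(0, 1) initial distribution
    return -0.5 * x**2 - 0.5 * np.log(2.0 * np.pi)

def log_target_unnorm(x):
    # unnormalized N(1, 1) target; true Z = sqrt(2*pi) ≈ 2.5066
    return -0.5 * (x - 1.0) ** 2

def ais_estimate(n_particles=2000, n_levels=50, step=0.5):
    betas = np.linspace(0.0, 1.0, n_levels + 1)
    x = rng.standard_normal(n_particles)          # exact draws from the prior
    logw = np.zeros(n_particles)
    for b_prev, b in zip(betas[:-1], betas[1:]):
        # incremental importance weight between adjacent annealed distributions
        logw += (b - b_prev) * (log_target_unnorm(x) - log_prior(x))
        # one Metropolis move leaving the current annealed distribution invariant
        log_f = lambda z: (1 - b) * log_prior(z) + b * log_target_unnorm(z)
        prop = x + step * rng.standard_normal(n_particles)
        accept = np.log(rng.uniform(size=n_particles)) < log_f(prop) - log_f(x)
        x = np.where(accept, prop, x)
    return np.mean(np.exp(logw))                  # estimate of Z_target / Z_prior

Z_hat = ais_estimate()
```

The product of incremental weights is exactly the importance weight of the Markov-chain proposal under the extended target, which is what makes the estimate of the marginal likelihood unbiased.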
We find our models are capable of both accurate, calibrated predictions and high-quality conditional synthesis of novel attribute combinations.
We propose a general and scalable approximate sampling strategy for probabilistic models with discrete variables.
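One way to read "scalable approximate sampling for discrete variables" is a locally informed proposal: score every single-bit flip, propose a flip in proportion to its score, and apply a Metropolis-Hastings correction. A minimal sketch on a toy binary pairwise model follows; the model, the temperature of 1/2 in the proposal, and the chain lengths are illustrative assumptions, and the flip gains (which this paper's gradient-based strategy would estimate cheaply via a Taylor expansion) happen to be exact for this quadratic model.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)

# Toy binary pairwise model: log p(x) ∝ 0.5 x^T W x + b^T x,  x ∈ {0,1}^3
W = np.array([[0.0, 0.8, -0.4],
              [0.8, 0.0, 0.6],
              [-0.4, 0.6, 0.0]])
b = np.array([0.2, -0.1, 0.3])

def logp(x):
    return 0.5 * x @ W @ x + b @ x

def flip_gains(x):
    # change in log p from flipping each bit (exact here; a gradient
    # gives a cheap estimate of this quantity in general)
    return (1.0 - 2.0 * x) * (W @ x + b)

def informed_flip_step(x):
    d = flip_gains(x)
    q = np.exp(d / 2 - np.max(d / 2)); q /= q.sum()      # forward proposal over bits
    i = rng.choice(len(x), p=q)
    y = x.copy(); y[i] = 1.0 - y[i]
    d_rev = flip_gains(y)
    q_rev = np.exp(d_rev / 2 - np.max(d_rev / 2)); q_rev /= q_rev.sum()
    log_acc = logp(y) - logp(x) + np.log(q_rev[i]) - np.log(q[i])
    return y if np.log(rng.uniform()) < log_acc else x

# Run the chain and compare one marginal against exact enumeration.
x = rng.integers(0, 2, size=3).astype(float)
samples = []
for t in range(30000):
    x = informed_flip_step(x)
    if t >= 1000:
        samples.append(x[0])
emp_marginal = float(np.mean(samples))

states = np.array(list(product([0.0, 1.0], repeat=3)))
logps = np.array([logp(s) for s in states])
probs = np.exp(logps - logps.max()); probs /= probs.sum()
exact_marginal = float(probs[states[:, 0] == 1.0].sum())
```

Because the reverse proposal probability enters the acceptance ratio, the chain satisfies detailed balance with respect to p(x) despite the informed, asymmetric proposal.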
Energy-Based Models (EBMs) present a flexible and appealing way to represent uncertainty.
We estimate the Stein discrepancy between the data density $p(x)$ and the model density $q(x)$ defined by a vector function of the data.
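The property being exploited: under the Stein operator built from the model's score ∇ log q, the expectation of any well-behaved test function vanishes when the data truly comes from q, and the supremum over test functions defines the discrepancy. A minimal Monte Carlo check of that identity for q = N(0, 1) and a hand-picked test function (both illustrative choices, not the paper's learned critic):

```python
import numpy as np

rng = np.random.default_rng(2)

def score_q(x):
    # ∇ log q(x) for the model q = N(0, 1)
    return -x

def f(x):
    # an arbitrary smooth test (critic) function
    return np.tanh(x)

def f_prime(x):
    return 1.0 - np.tanh(x) ** 2

# Stein operator: (T_q f)(x) = score_q(x) * f(x) + f'(x).
# E_p[T_q f] = 0 when p = q; maximizing over f yields the Stein discrepancy.
x = rng.standard_normal(100_000)           # samples from p, here equal to q
stein_value = float(np.mean(score_q(x) * f(x) + f_prime(x)))
```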
In this setting, both the standard class probabilities and unnormalized values of p(x) and p(x|y) can be computed easily.
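Concretely, if f(x) denotes a classifier's logits, softmax(f(x)) gives p(y|x) while logsumexp over classes serves as an unnormalized log p(x). A sketch with made-up logits for a single input:

```python
import numpy as np

def logsumexp(a):
    m = np.max(a)
    return m + np.log(np.sum(np.exp(a - m)))

# Hypothetical logits f(x) for one input x over three classes.
logits = np.array([2.0, -1.0, 0.5])

# Standard class probabilities: p(y|x) = softmax(f(x)).
p_y_given_x = np.exp(logits - logsumexp(logits))

# Reinterpret f(x)[y] as unnormalized log p(x, y); summing out y gives
# an unnormalized log p(x), with the intractable constant shared by all x.
log_px_unnorm = logsumexp(logits)
```

The normalizing constant cancels in p(y|x), which is why the classifier is unaffected by the reinterpretation.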
Class-conditional generative models hold promise to overcome the shortcomings of their discriminative counterparts.
We show that standard ResNet architectures can be made invertible, allowing the same model to be used for classification, density estimation, and generation.
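The key to invertibility is a Lipschitz constraint: if the residual branch g has Lipschitz constant below 1, then y = x + g(x) can be inverted by the fixed-point iteration x ← y − g(x). A numeric sketch with a contractive stand-in for the residual branch (the specific g and iteration count are illustrative):

```python
import numpy as np

def g(x):
    # residual branch with Lipschitz constant 0.5 (|d/dx 0.5*tanh(x)| <= 0.5 < 1)
    return 0.5 * np.tanh(x)

def forward(x):
    return x + g(x)

def inverse(y, n_iters=50):
    # Banach fixed-point iteration: converges geometrically because g is contractive
    x = y.copy()
    for _ in range(n_iters):
        x = y - g(x)
    return x

x0 = np.array([-2.0, 0.3, 1.7])
y = forward(x0)
x_rec = inverse(y)
```

The reconstruction error shrinks by the contraction factor each iteration, so a few dozen steps suffice for machine precision.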
Ranked #5 on Image Generation on MNIST
The result is a continuous-time invertible generative model with unbiased density estimation and one-pass sampling, while allowing unrestricted neural network architectures.
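The unbiased density estimation rests on the instantaneous change-of-variables formula, whose Jacobian-trace term is typically handled with Hutchinson's estimator, tr(J) = E_v[vᵀJv] for probe vectors v with identity covariance. A sketch with a fixed linear map standing in for the network Jacobian (the matrix and sample count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in Jacobian; in a continuous-time flow this would be df/dx of the
# dynamics network, accessed only through vector-Jacobian products.
J = np.array([[1.0, 0.3, 0.0],
              [0.2, -0.5, 0.1],
              [0.0, 0.4, 2.0]])

def hutchinson_trace(n_samples=20000):
    # E[v^T J v] = tr(J) for zero-mean v with identity covariance
    v = rng.choice([-1.0, 1.0], size=(n_samples, J.shape[0]))  # Rademacher probes
    return float(np.mean(np.einsum('ni,ij,nj->n', v, J, v)))

tr_hat = hutchinson_trace()
```

A single probe per integration step already yields an unbiased log-density estimate, which is what keeps the cost independent of the data dimension.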
Ranked #1 on Density Estimation on CIFAR-10 (NLL metric)
Gradient-based optimization is the foundation of deep learning and reinforcement learning.
In this paper we propose a probabilistic approach for learning separable representations of object identity and pose from unlabelled video data.