Search Results for author: Tim Salimans

Found 38 papers, 22 papers with code

Video Diffusion Models

no code implementations • 7 Apr 2022 Jonathan Ho, Tim Salimans, Alexey Gritsenko, William Chan, Mohammad Norouzi, David J. Fleet

Generating temporally coherent high fidelity video is an important milestone in generative modeling research.

Video Generation

Progressive Distillation for Fast Sampling of Diffusion Models

2 code implementations ICLR 2022 Tim Salimans, Jonathan Ho

Second, we present a method to distill a trained deterministic diffusion sampler, using many steps, into a new diffusion model that takes half as many sampling steps.

Density Estimation Image Generation

On Density Estimation with Diffusion Models

no code implementations NeurIPS 2021 Diederik Kingma, Tim Salimans, Ben Poole, Jonathan Ho

In addition, we show that the continuous-time VLB is invariant to the noise schedule, except for the signal-to-noise ratio at its endpoints.

Density Estimation

Autoregressive Diffusion Models

2 code implementations ICLR 2022 Emiel Hoogeboom, Alexey A. Gritsenko, Jasmijn Bastings, Ben Poole, Rianne van den Berg, Tim Salimans

We introduce Autoregressive Diffusion Models (ARDMs), a model class encompassing and generalizing order-agnostic autoregressive models (Uria et al., 2014) and absorbing discrete diffusion (Austin et al., 2021), which we show are special cases of ARDMs under mild assumptions.

Ranked #5 on Image Generation on CIFAR-10 (bits/dimension metric)

Image Generation

Unconditional Diffusion Guidance

no code implementations • 29 Sep 2021 Jonathan Ho, Tim Salimans

Classifier guidance is a recently introduced method to trade off mode coverage and sample fidelity in conditional diffusion models post training, in the same spirit as low temperature sampling or truncation in other types of generative models.
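The method this paper proposes, now known as classifier-free guidance, replaces the classifier with a jointly trained unconditional model and blends the two noise predictions with a single guidance weight. A minimal sketch of that combination rule (the surrounding sampler is omitted):

```python
import numpy as np

def guided_eps(eps_cond, eps_uncond, w):
    # Classifier-free guidance combination: extrapolate from the
    # unconditional toward the conditional noise prediction.
    # w = 0 recovers plain conditional sampling; larger w trades
    # mode coverage for sample fidelity.
    return (1.0 + w) * eps_cond - w * eps_uncond

eps_c = np.array([1.0, 0.0])  # toy conditional prediction
eps_u = np.array([0.0, 1.0])  # toy unconditional prediction
guided = guided_eps(eps_c, eps_u, w=0.5)  # -> [1.5, -0.5]
```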

Variational Diffusion Models

1 code implementation • 1 Jul 2021 Diederik P. Kingma, Tim Salimans, Ben Poole, Jonathan Ho

In addition, we show that the continuous-time VLB is invariant to the noise schedule, except for the signal-to-noise ratio at its endpoints.

Ranked #1 on Image Generation on CIFAR-10 (bits/dimension metric)

Density Estimation Image Generation

Cascaded Diffusion Models for High Fidelity Image Generation

no code implementations • 30 May 2021 Jonathan Ho, Chitwan Saharia, William Chan, David J. Fleet, Mohammad Norouzi, Tim Salimans

We show that cascaded diffusion models are capable of generating high fidelity images on the class-conditional ImageNet generation benchmark, without any assistance from auxiliary image classifiers to boost sample quality.

Data Augmentation Image Generation +1

Agent-Centric Representations for Multi-Agent Reinforcement Learning

no code implementations • 19 Apr 2021 Wenling Shang, Lasse Espeholt, Anton Raichuk, Tim Salimans

Empirically, agent-centric representation learning leads to the emergence of more complex cooperation strategies between agents as well as enhanced sample efficiency and generalization.

Multi-agent Reinforcement Learning reinforcement-learning +2

Should EBMs model the energy or the score?

no code implementations ICLR Workshop EBM 2021 Tim Salimans, Jonathan Ho

Recent progress in training unnormalized models through denoising score matching with Langevin dynamics (SMLD) and denoising diffusion probabilistic modeling (DDPM) has made unnormalized models a competitive model class for generative modeling.

Denoising

A Spectral Energy Distance for Parallel Speech Synthesis

2 code implementations NeurIPS 2020 Alexey A. Gritsenko, Tim Salimans, Rianne van den Berg, Jasper Snoek, Nal Kalchbrenner

Speech synthesis is an important practical generative modeling problem that has seen great progress over the last few years, with likelihood-based autoregressive neural models now outperforming traditional concatenative systems.

Speech Synthesis

IDF++: Analyzing and Improving Integer Discrete Flows for Lossless Compression

no code implementations ICLR 2021 Rianne van den Berg, Alexey A. Gritsenko, Mostafa Dehghani, Casper Kaae Sønderby, Tim Salimans

Furthermore, we zoom in on the effect of gradient bias due to the straight-through estimator in integer discrete flows, and demonstrate that its influence is highly dependent on architecture choices and less prominent than previously thought.

Quantization

Milking CowMask for Semi-Supervised Image Classification

2 code implementations • 26 Mar 2020 Geoff French, Avital Oliver, Tim Salimans

Using it to provide perturbations for semi-supervised consistency regularization, we achieve a state-of-the-art result on ImageNet with 10% labeled data, with a top-5 error of 8.76% and top-1 error of 26.06%.
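A CowMask is built by smoothing Gaussian noise and thresholding it, which yields irregular blob-shaped masks. A hedged sketch under that description (the FFT-based blur and all parameter values here are illustrative, not the paper's exact implementation):

```python
import numpy as np

def cow_mask(h, w, p=0.5, sigma=8.0, rng=None):
    # Blur white Gaussian noise with a Gaussian filter (applied in the
    # Fourier domain), then threshold at the p-quantile so roughly a
    # fraction p of pixels ends up masked.
    rng = rng or np.random.default_rng(0)
    noise = rng.standard_normal((h, w))
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    # Fourier transform of a Gaussian kernel with std sigma (in pixels)
    kernel = np.exp(-2.0 * (np.pi * sigma) ** 2 * (fy ** 2 + fx ** 2))
    smooth = np.fft.ifft2(np.fft.fft2(noise) * kernel).real
    return smooth < np.quantile(smooth, p)

mask = cow_mask(64, 64, p=0.3)  # boolean mask, ~30% True
```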

Classification General Classification +1

How Good is the Bayes Posterior in Deep Neural Networks Really?

1 code implementation ICML 2020 Florian Wenzel, Kevin Roth, Bastiaan S. Veeling, Jakub Świątkowski, Linh Tran, Stephan Mandt, Jasper Snoek, Tim Salimans, Rodolphe Jenatton, Sebastian Nowozin

In this work we cast doubt on the current understanding of Bayes posteriors in popular deep neural networks: we demonstrate through careful MCMC sampling that the posterior predictive induced by the Bayes posterior yields systematically worse predictions compared to simpler methods including point estimates obtained from SGD.

Bayesian Inference

Axial Attention in Multidimensional Transformers

1 code implementation • 20 Dec 2019 Jonathan Ho, Nal Kalchbrenner, Dirk Weissenborn, Tim Salimans

We propose Axial Transformers, a self-attention-based autoregressive model for images and other data organized as high dimensional tensors.
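The key trick is to restrict self-attention to one axis of the tensor at a time, so each row or column attends only within itself. A minimal single-head sketch (real axial attention layers also use learned Q/K/V projections and, for autoregressive models, causal masking):

```python
import numpy as np

def softmax(a):
    a = a - a.max(axis=-1, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=-1, keepdims=True)

def axial_attention(x, axis):
    # Self-attention along one axis of x with shape (H, W, D):
    # axis=1 lets each row attend within itself (W treated as sequence),
    # axis=0 does the same for columns. Cost is O(N * sqrt(N)) rather
    # than O(N^2) for N = H * W positions.
    if axis == 0:
        x = x.transpose(1, 0, 2)  # make the attended axis the sequence
    scores = softmax(np.einsum('bid,bjd->bij', x, x) / np.sqrt(x.shape[-1]))
    out = np.einsum('bij,bjd->bid', scores, x)
    return out.transpose(1, 0, 2) if axis == 0 else out

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 5, 3))
out = axial_attention(x, axis=1)
```

Because rows never attend across each other, editing one row leaves the outputs of all other rows unchanged, which is what makes the per-axis factorization cheap.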

Ranked #26 on Image Generation on ImageNet 64x64 (Bits per dim metric)

Image Generation

The Likelihood of Mixed Hitting Times

1 code implementation • 9 May 2019 Jaap H. Abbring, Tim Salimans

We present a method for computing the likelihood of a mixed hitting-time model that specifies durations as the first time a latent Lévy process crosses a heterogeneous threshold.

Learning Montezuma's Revenge from a Single Demonstration

no code implementations • 8 Dec 2018 Tim Salimans, Richard Chen

We propose a new method for learning from a single demonstration to solve hard exploration tasks like the Atari game Montezuma's Revenge.

Montezuma's Revenge reinforcement-learning

Improving Language Understanding by Generative Pre-Training

5 code implementations Preprint 2018 Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever

We demonstrate that large gains on these tasks can be realized by generative pre-training of a language model on a diverse corpus of unlabeled text, followed by discriminative fine-tuning on each specific task.

Cloze Test Document Classification +6

Improving GANs Using Optimal Transport

2 code implementations ICLR 2018 Tim Salimans, Han Zhang, Alec Radford, Dimitris Metaxas

We present Optimal Transport GAN (OT-GAN), a variant of generative adversarial nets minimizing a new metric measuring the distance between the generator distribution and the data distribution.

Image Generation

Evolution Strategies as a Scalable Alternative to Reinforcement Learning

17 code implementations • 10 Mar 2017 Tim Salimans, Jonathan Ho, Xi Chen, Szymon Sidor, Ilya Sutskever

We explore the use of Evolution Strategies (ES), a class of black box optimization algorithms, as an alternative to popular MDP-based RL techniques such as Q-learning and Policy Gradients.
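The core ES estimator perturbs the parameters with Gaussian noise, scores each perturbation with the black-box objective, and steps along the noise directions weighted by their returns. A single-worker sketch (the paper's contribution is largely the distributed, antithetic variant, which is omitted here; step sizes and population size are illustrative):

```python
import numpy as np

def es_step(theta, f, rng, sigma=0.1, npop=50, alpha=0.02):
    # One Evolution Strategies update: no gradients of f are needed,
    # only function evaluations at perturbed parameter vectors.
    eps = rng.standard_normal((npop, theta.size))
    returns = np.array([f(theta + sigma * e) for e in eps])
    # normalize returns for a scale-free update
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    grad = eps.T @ returns / (npop * sigma)
    return theta + alpha * grad

rng = np.random.default_rng(0)
theta = np.zeros(2)
for _ in range(300):  # maximize f(x) = -||x - 3||^2 without gradients
    theta = es_step(theta, lambda x: -np.sum((x - 3.0) ** 2), rng)
```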

Atari Games Q-Learning +1

PixelCNN++: Improving the PixelCNN with Discretized Logistic Mixture Likelihood and Other Modifications

7 code implementations • 19 Jan 2017 Tim Salimans, Andrej Karpathy, Xi Chen, Diederik P. Kingma

1) We use a discretized logistic mixture likelihood on the pixels, rather than a 256-way softmax, which we find to speed up training.
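The discretized likelihood assigns each 8-bit pixel value the probability mass a logistic distribution places on that pixel's bin. A sketch for one mixture component, with pixel values rescaled to [-1, 1] (the paper additionally handles the edge bins at -1 and 1 specially and mixes several such components):

```python
import numpy as np

def discretized_logistic_logprob(x, mean, log_scale):
    # Log-probability that a logistic(mean, scale) variable falls in the
    # half-open pixel bin [x - 1/255, x + 1/255].
    inv_s = np.exp(-log_scale)
    cdf_plus = 1.0 / (1.0 + np.exp(-inv_s * (x + 1.0 / 255 - mean)))
    cdf_minus = 1.0 / (1.0 + np.exp(-inv_s * (x - 1.0 / 255 - mean)))
    return np.log(np.clip(cdf_plus - cdf_minus, 1e-12, None))

# bin probabilities over all 256 pixel values sum to ~1 for a
# well-centered, narrow component
bins = np.linspace(-1.0, 1.0, 256)
probs = np.exp(discretized_logistic_logprob(bins, mean=0.0, log_scale=-3.0))
```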

Image Generation

Improved Variational Inference with Inverse Autoregressive Flow

2 code implementations NeurIPS 2016 Durk P. Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, Max Welling

The framework of normalizing flows provides a general strategy for flexible variational inference of posteriors over latent variables.

Ranked #29 on Image Generation on CIFAR-10 (bits/dimension metric)

Image Generation Variational Inference

Variational Lossy Autoencoder

no code implementations • 8 Nov 2016 Xi Chen, Diederik P. Kingma, Tim Salimans, Yan Duan, Prafulla Dhariwal, John Schulman, Ilya Sutskever, Pieter Abbeel

Representation learning seeks to expose certain aspects of observed data in a learned representation that's amenable to downstream tasks like classification.

Density Estimation Image Generation +1

Improving Variational Inference with Inverse Autoregressive Flow

8 code implementations • 15 Jun 2016 Diederik P. Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, Max Welling

The framework of normalizing flows provides a general strategy for flexible variational inference of posteriors over latent variables.

Variational Inference

A Structured Variational Auto-encoder for Learning Deep Hierarchies of Sparse Features

no code implementations • 28 Feb 2016 Tim Salimans

To learn the parameters of the new model, we approximate the posterior of the latent variables with a variational auto-encoder.

Variational Inference

Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks

9 code implementations NeurIPS 2016 Tim Salimans, Diederik P. Kingma

We present weight normalization: a reparameterization of the weight vectors in a neural network that decouples the length of those weight vectors from their direction.
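The reparameterization itself is one line: write each weight vector as w = g * v / ||v||, so the scalar g carries the length and v only the direction. A minimal sketch:

```python
import numpy as np

def weight_norm(v, g):
    # Weight normalization: the norm of the returned vector is always
    # |g|, independent of the scale of v.
    return g * v / np.linalg.norm(v)

w = weight_norm(np.array([3.0, 4.0]), g=2.0)  # direction (0.6, 0.8), norm 2
```

Optimizing (v, g) instead of w directly is what improves the conditioning of the gradient during training.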

Image Classification reinforcement-learning

Variational Dropout and the Local Reparameterization Trick

10 code implementations NeurIPS 2015 Diederik P. Kingma, Tim Salimans, Max Welling

Our method allows inference of more flexibly parameterized posteriors; specifically, we propose variational dropout, a generalization of Gaussian dropout where the dropout rates are learned, often leading to better models.
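The local reparameterization trick exploits the fact that, with a factorized Gaussian posterior over the weights, the pre-activations of a linear layer are themselves Gaussian, so noise can be sampled per activation rather than per weight, lowering gradient variance. A sketch for one linear layer (no bias; variable names are illustrative):

```python
import numpy as np

def local_reparam_linear(x, w_mean, w_logvar, rng):
    # Sample the layer's pre-activations directly: for inputs x and a
    # factorized Gaussian weight posterior N(w_mean, exp(w_logvar)),
    # the pre-activations are N(x @ w_mean, x^2 @ exp(w_logvar)).
    mu = x @ w_mean                      # mean of the pre-activations
    var = (x ** 2) @ np.exp(w_logvar)    # their (diagonal) variance
    return mu + np.sqrt(var) * rng.standard_normal(mu.shape)

rng = np.random.default_rng(0)
x = np.ones((20000, 2))                  # many identical rows, for stats
w_mean = np.eye(2)
w_logvar = np.full((2, 2), np.log(0.25))
out = local_reparam_linear(x, w_mean, w_logvar, rng)
```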

Bayesian Inference

Markov Chain Monte Carlo and Variational Inference: Bridging the Gap

no code implementations • 23 Oct 2014 Tim Salimans, Diederik P. Kingma, Max Welling

Recent advances in stochastic gradient variational inference have made it possible to perform variational Bayesian inference with posterior approximations containing auxiliary random variables.

Bayesian Inference Variational Inference

Fixed-Form Variational Posterior Approximation through Stochastic Linear Regression

2 code implementations • 28 Jun 2012 Tim Salimans, David A. Knowles

We propose a general algorithm for approximating nonstandard Bayesian posterior distributions.
