Search Results for author: Jonathan Ho

Found 31 papers, 19 papers with code

On Distillation of Guided Diffusion Models

2 code implementations CVPR 2023 Chenlin Meng, Robin Rombach, Ruiqi Gao, Diederik P. Kingma, Stefano Ermon, Jonathan Ho, Tim Salimans

For standard diffusion models trained in pixel space, our approach is able to generate images visually comparable to those of the original model using as few as 4 sampling steps on ImageNet 64x64 and CIFAR-10, achieving FID/IS scores comparable to those of the original model while being up to 256 times faster to sample from.

Denoising, Image Generation +1

Novel View Synthesis with Diffusion Models

no code implementations 6 Oct 2022 Daniel Watson, William Chan, Ricardo Martin-Brualla, Jonathan Ho, Andrea Tagliasacchi, Mohammad Norouzi

We demonstrate that stochastic conditioning significantly improves the 3D consistency of a naive sampler for an image-to-image diffusion model, which involves conditioning on a single fixed view.

Denoising, Novel View Synthesis
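
As a rough illustration of the stochastic-conditioning idea described above, a sampler can re-draw the conditioning view at every denoising step instead of fixing a single view. The sketch below is only a paraphrase of that idea; the one-step denoiser interface and all names are assumptions, not the authors' code.

    import random
    import torch

    def sample_with_stochastic_conditioning(denoise_step, views, num_steps, shape):
        # `denoise_step(x, cond, t)` is a hypothetical one-step denoiser of an
        # image-to-image diffusion model conditioned on a single view `cond`
        x = torch.randn(shape)                 # start the novel view from pure noise
        for t in reversed(range(num_steps)):
            cond = random.choice(views)        # re-sample the conditioning view at each step
            x = denoise_step(x, cond, t)
        return x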

Classifier-Free Diffusion Guidance

10 code implementations 26 Jul 2022 Jonathan Ho, Tim Salimans

Classifier guidance is a recently introduced method to trade off mode coverage and sample fidelity in conditional diffusion models post training, in the same spirit as low temperature sampling or truncation in other types of generative models.

Diversity
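
A minimal sketch of the guidance rule this paper proposes: a single model is evaluated both with and without the conditioning signal, and the two noise predictions are combined with a guidance weight w. The model call below stands in for any conditional diffusion network; only the combination rule comes from the paper.

    import torch

    def classifier_free_guidance_eps(model, x_t, t, cond, w):
        # conditional and unconditional noise predictions from the same network;
        # `None` stands for the null/dropped conditioning used during training
        eps_cond = model(x_t, t, cond)
        eps_uncond = model(x_t, t, None)
        # guided prediction: (1 + w) * eps_cond - w * eps_uncond
        return (1 + w) * eps_cond - w * eps_uncond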

Learning Fast Samplers for Diffusion Models by Differentiating Through Sample Quality

no code implementations 11 Feb 2022 Daniel Watson, William Chan, Jonathan Ho, Mohammad Norouzi

We introduce Differentiable Diffusion Sampler Search (DDSS): a method that optimizes fast samplers for any pre-trained diffusion model by differentiating through sample quality scores.

Image Generation, Unconditional Image Generation
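
A very rough sketch of the DDSS idea under stated assumptions: the coefficients of a few-step sampler are treated as learnable parameters and updated by backpropagating a differentiable sample-quality score through the whole sampling chain. The generalized update rule and the score function below are illustrative placeholders, not the paper's exact parameterization.

    import torch

    def ddss_update(eps_model, step_params, quality_score, optimizer, shape):
        # step_params[t] holds learnable per-step coefficients (a, b, c)
        x = torch.randn(shape)
        for t in reversed(range(len(step_params))):
            a, b, c = step_params[t]
            eps = eps_model(x, t)
            x = a * x + b * eps + c * torch.randn_like(x)   # illustrative generalized update
        loss = -quality_score(x)        # any differentiable sample-quality score
        optimizer.zero_grad()
        loss.backward()                 # gradients flow through the entire sampling chain
        optimizer.step()
        return loss.item()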

Progressive Distillation for Fast Sampling of Diffusion Models

12 code implementations ICLR 2022 Tim Salimans, Jonathan Ho

Second, we present a method to distill a trained deterministic diffusion sampler, using many steps, into a new diffusion model that takes half as many sampling steps.

Density Estimation, Image Generation
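
A loose sketch of the halving step described above: one deterministic student step is trained to land where two consecutive deterministic teacher steps would land, and the procedure is repeated to keep halving the step count. The helpers `add_noise` and `ddim_step` are hypothetical, and the paper reparameterizes the regression target, but the matching idea is the same.

    import torch
    import torch.nn.functional as F

    def progressive_distillation_step(teacher, student, x0, n_teacher_steps, optimizer):
        b = x0.shape[0]
        i = torch.randint(1, n_teacher_steps // 2 + 1, (b,))     # student step index
        z = teacher.add_noise(x0, 2 * i)                          # diffuse data to the window start
        with torch.no_grad():
            z_mid = teacher.ddim_step(z, 2 * i, 2 * i - 1)        # two teacher steps ...
            target = teacher.ddim_step(z_mid, 2 * i - 1, 2 * i - 2)
        pred = student.ddim_step(z, 2 * i, 2 * i - 2)             # ... matched by one student step
        loss = F.mse_loss(pred, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()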

On Density Estimation with Diffusion Models

1 code implementation NeurIPS 2021 Diederik Kingma, Tim Salimans, Ben Poole, Jonathan Ho

In addition, we show that the continuous-time VLB is invariant to the noise schedule, except for the signal-to-noise ratio at its endpoints.

Density Estimation

Palette: Image-to-Image Diffusion Models

5 code implementations 10 Nov 2021 Chitwan Saharia, William Chan, Huiwen Chang, Chris A. Lee, Jonathan Ho, Tim Salimans, David J. Fleet, Mohammad Norouzi

We expect this standardized evaluation protocol to play a role in advancing image-to-image translation research.

Colorization, Denoising +6

Optimizing Few-Step Diffusion Samplers by Gradient Descent

no code implementations ICLR 2022 Daniel Watson, William Chan, Jonathan Ho, Mohammad Norouzi

We propose Generalized Gaussian Diffusion Processes (GGDP), a family of non-Markovian samplers for diffusion models, and we show how to improve the generated samples of pre-trained DDPMs by optimizing the degrees of freedom of the GGDP sampler family with respect to a perceptual loss.

Denoising, Image Generation +1

Unconditional Diffusion Guidance

no code implementations 29 Sep 2021 Jonathan Ho, Tim Salimans

Classifier guidance is a recently introduced method to trade off mode coverage and sample fidelity in conditional diffusion models post training, in the same spirit as low temperature sampling or truncation in other types of generative models.

Diversity

Structured Denoising Diffusion Models in Discrete State-Spaces

4 code implementations NeurIPS 2021 Jacob Austin, Daniel D. Johnson, Jonathan Ho, Daniel Tarlow, Rianne van den Berg

Here, we introduce Discrete Denoising Diffusion Probabilistic Models (D3PMs), diffusion-like generative models for discrete data that generalize the multinomial diffusion model of Hoogeboom et al. 2021, by going beyond corruption processes with uniform transition probabilities.

Denoising, Text Generation
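
To make the corruption-process generalization above concrete, here is a small sketch (assumed shapes and names, not the reference implementation) of uniform and absorbing-state transition matrices and the resulting forward corruption q(x_t | x_0):

    import torch

    def uniform_Q(K, beta):
        # stay with prob 1 - beta, otherwise jump to a uniformly random token
        return (1.0 - beta) * torch.eye(K) + beta * torch.ones(K, K) / K

    def absorbing_Q(K, beta, mask_id):
        # stay with prob 1 - beta, otherwise move to an absorbing [MASK] state
        Q = (1.0 - beta) * torch.eye(K)
        Q[:, mask_id] += beta
        return Q

    def corrupt(x0, Qbar_t):
        # q(x_t | x_0) = Cat(x_t; p = onehot(x_0) @ Qbar_t), with Qbar_t = Q_1 @ ... @ Q_t
        probs = torch.nn.functional.one_hot(x0, Qbar_t.shape[0]).float() @ Qbar_t
        return torch.distributions.Categorical(probs=probs).sample()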

Variational Diffusion Models

4 code implementations 1 Jul 2021 Diederik P. Kingma, Tim Salimans, Ben Poole, Jonathan Ho

In addition, we show that the continuous-time VLB is invariant to the noise schedule, except for the signal-to-noise ratio at its endpoints.

Density Estimation, Image Generation
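
The signal-to-noise-ratio view mentioned above can be sketched with an epsilon-prediction network: in continuous time, the diffusion loss weights the squared prediction error by the negative time-derivative of the log-SNR, which is why only the SNR at the schedule's endpoints matters in expectation. A minimal one-sample Monte Carlo estimator is given below; the schedule function and network interface are assumptions, not the authors' code.

    import torch

    def vdm_diffusion_loss(x, eps_model, log_snr_fn):
        # x: (B, D) flattened data; log_snr_fn: differentiable schedule lambda(t), decreasing in t
        t = torch.rand(x.shape[0], device=x.device, requires_grad=True)
        log_snr = log_snr_fn(t)
        dlog_snr_dt, = torch.autograd.grad(log_snr.sum(), t, create_graph=True)
        alpha = torch.sigmoid(log_snr).sqrt()        # alpha_t^2 = sigmoid(lambda)
        sigma = torch.sigmoid(-log_snr).sqrt()       # sigma_t^2 = sigmoid(-lambda)
        eps = torch.randn_like(x)
        z_t = alpha.view(-1, 1) * x + sigma.view(-1, 1) * eps
        sq_err = ((eps - eps_model(z_t, log_snr)) ** 2).sum(dim=1)
        return (-0.5 * dlog_snr_dt * sq_err).mean()  # -dlambda/dt > 0 provides the weighting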

Learning to Efficiently Sample from Diffusion Probabilistic Models

no code implementations 7 Jun 2021 Daniel Watson, Jonathan Ho, Mohammad Norouzi, William Chan

Key advantages of DDPMs include ease of training, in contrast to generative adversarial networks, and speed of generation, in contrast to autoregressive models.

Denoising, Speech Synthesis

Cascaded Diffusion Models for High Fidelity Image Generation

no code implementations 30 May 2021 Jonathan Ho, Chitwan Saharia, William Chan, David J. Fleet, Mohammad Norouzi, Tim Salimans

We show that cascaded diffusion models are capable of generating high fidelity images on the class-conditional ImageNet generation benchmark, without any assistance from auxiliary image classifiers to boost sample quality.

Data Augmentation, Image Generation +2
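
The cascade itself can be summarized in a few lines: a class-conditional base model generates at low resolution, and a chain of conditional super-resolution diffusion models upsamples, each conditioned on the previous stage's output. The sampler interfaces below are hypothetical.

    def cascade_sample(base_model, sr_models, class_label):
        # e.g. a 32x32 or 64x64 base sample followed by successive super-resolution stages;
        # during training the paper also noises the low-res conditioning input
        # ("conditioning augmentation"), which this sketch omits
        x = base_model.sample(label=class_label)
        for sr in sr_models:
            x = sr.sample(low_res=x, label=class_label)
        return x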

Importance weighted compression

no code implementations ICLR Workshop Neural_Compression 2021 Lucas Theis, Jonathan Ho

The connection between variational autoencoders (VAEs) and compression is well established, and they have been used for both lossless and lossy compression.

Should EBMs model the energy or the score?

no code implementations ICLR Workshop EBM 2021 Tim Salimans, Jonathan Ho

Recent progress in training unnormalized models through denoising score matching with Langevin dynamics (SMLD) and denoising diffusion probabilistic modeling (DDPM) has made unnormalized models a competitive model class for generative modeling.

Denoising

Denoising Diffusion Probabilistic Models

66 code implementations NeurIPS 2020 Jonathan Ho, Ajay Jain, Pieter Abbeel

We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics.

Denoising, Density Estimation +1
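
The training procedure behind these results reduces to a simple denoising objective: diffuse the data to a random timestep in closed form and regress the added noise. A minimal sketch, assuming an epsilon-prediction network and a precomputed cumulative alpha-bar schedule:

    import torch
    import torch.nn.functional as F

    def ddpm_training_loss(eps_model, x0, alpha_bar):
        # alpha_bar: (T,) cumulative products of (1 - beta_t)
        b = x0.shape[0]
        t = torch.randint(0, alpha_bar.shape[0], (b,), device=x0.device)
        a = alpha_bar[t].view(b, *([1] * (x0.dim() - 1)))
        eps = torch.randn_like(x0)
        x_t = a.sqrt() * x0 + (1 - a).sqrt() * eps      # forward diffusion in closed form
        return F.mse_loss(eps_model(x_t, t), eps)       # "simple" epsilon-prediction loss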

Axial Attention in Multidimensional Transformers

2 code implementations 20 Dec 2019 Jonathan Ho, Nal Kalchbrenner, Dirk Weissenborn, Tim Salimans

We propose Axial Transformers, a self-attention-based autoregressive model for images and other data organized as high dimensional tensors.

Ranked #29 on Image Generation on ImageNet 64x64 (Bits per dim metric)

Image Generation
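
The core operation is ordinary self-attention applied along one axis of the tensor at a time, with the remaining axes folded into the batch. A small non-causal sketch follows (it omits the masking an autoregressive Axial Transformer needs, and reuses one attention module for both axes purely for brevity):

    import torch
    import torch.nn as nn

    def axial_attention(x, attn, axis):
        # x: (B, H, W, C); attend along `axis` (1 = rows of length H, 2 = columns of length W)
        B, H, W, C = x.shape
        if axis == 1:
            seq = x.permute(0, 2, 1, 3).reshape(B * W, H, C)
        else:
            seq = x.reshape(B * H, W, C)
        out, _ = attn(seq, seq, seq)               # standard self-attention on a 1-D sequence
        if axis == 1:
            return out.reshape(B, W, H, C).permute(0, 2, 1, 3)
        return out.reshape(B, H, W, C)

    attn = nn.MultiheadAttention(embed_dim=64, num_heads=8, batch_first=True)
    x = torch.randn(2, 32, 32, 64)
    y = axial_attention(axial_attention(x, attn, axis=1), attn, axis=2)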

Natural Image Manipulation for Autoregressive Models Using Fisher Scores

no code implementations 25 Nov 2019 Wilson Yan, Jonathan Ho, Pieter Abbeel

Deep autoregressive models are among the most powerful models that exist today, achieving state-of-the-art bits per dim.

Image Manipulation

Compression with Flows via Local Bits-Back Coding

1 code implementation NeurIPS 2019 Jonathan Ho, Evan Lohn, Pieter Abbeel

Likelihood-based generative models are the backbones of lossless compression due to the guaranteed existence of codes with lengths close to negative log likelihood.

Computational Efficiency

Bit-Swap: Recursive Bits-Back Coding for Lossless Compression with Hierarchical Latent Variables

1 code implementation 16 May 2019 Friso H. Kingma, Pieter Abbeel, Jonathan Ho

The bits-back argument suggests that latent variable models can be turned into lossless compression schemes.
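The accounting behind the bits-back argument is easy to state in code: auxiliary bits are "borrowed" to decode a latent from q(z|x), and the net code length works out to minus the ELBO. A toy sketch for a single latent layer, assuming distribution-like objects with sample/log_prob; real coders such as Bit-Swap implement this with rANS streams over hierarchical latents.

    import math
    import torch

    def bits_back_net_length(x, encoder_q, decoder_p, prior_p):
        # decode z from auxiliary bits with q(z|x) -> those bits are refunded later
        q = encoder_q(x)
        z = q.sample()
        # encode x with p(x|z) and z with p(z); net cost = -ELBO (converted to bits)
        nats = (-decoder_p(z).log_prob(x).sum()
                - prior_p.log_prob(z).sum()
                + q.log_prob(z).sum())
        return nats / math.log(2.0)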

Meta Learning Shared Hierarchies

3 code implementations ICLR 2018 Kevin Frans, Jonathan Ho, Xi Chen, Pieter Abbeel, John Schulman

We develop a metalearning approach for learning hierarchically structured policies, improving sample efficiency on unseen tasks through the use of shared primitives: policies that are executed for large numbers of timesteps.

Meta-Learning, Reinforcement Learning
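
The hierarchy described above can be sketched directly with hypothetical policy interfaces: a master policy picks one of the shared primitives every k timesteps, and the selected primitive acts in the environment until the next switch.

    def hierarchical_rollout(master, subpolicies, env, horizon, k=200):
        obs = env.reset()
        total_reward, active = 0.0, None
        for t in range(horizon):
            if t % k == 0:
                active = subpolicies[master.select(obs)]   # master re-chooses a primitive
            obs, reward, done, _ = env.step(active.act(obs))
            total_reward += reward
            if done:
                break
        return total_reward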

One-Shot Imitation Learning

no code implementations NeurIPS 2017 Yan Duan, Marcin Andrychowicz, Bradly C. Stadie, Jonathan Ho, Jonas Schneider, Ilya Sutskever, Pieter Abbeel, Wojciech Zaremba

A neural net is trained to take as input one demonstration and the current state (which initially is the initial state of the other demonstration of the pair) and to output an action, with the goal that the resulting sequence of states and actions matches the second demonstration as closely as possible.

Feature Engineering, Imitation Learning +1
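
The training loop implied by the description above is essentially demonstration-conditioned behavior cloning; a compact sketch under assumed interfaces (pairs of demonstrations of the same task, a policy network conditioned on one of them):

    import torch
    import torch.nn.functional as F

    def one_shot_il_step(policy, sample_demo_pair, optimizer):
        # two demonstrations of the same task: condition on one, imitate the other
        demo_a, demo_b = sample_demo_pair()
        states, expert_actions = demo_b
        pred_actions = policy(demo_a, states)              # policy(conditioning demo, current states)
        loss = F.mse_loss(pred_actions, expert_actions)    # continuous control; cross-entropy for discrete actions
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()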

Evolution Strategies as a Scalable Alternative to Reinforcement Learning

23 code implementations 10 Mar 2017 Tim Salimans, Jonathan Ho, Xi Chen, Szymon Sidor, Ilya Sutskever

We explore the use of Evolution Strategies (ES), a class of black box optimization algorithms, as an alternative to popular MDP-based RL techniques such as Q-learning and Policy Gradients.

Atari Games, Q-Learning +3
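
The estimator at the heart of this approach fits in a few lines: perturb the policy parameters with Gaussian noise, evaluate episodic returns, and move the parameters along the return-weighted noise directions. This is a serial sketch of what the paper parallelizes across many workers; the return function is a black box supplied by the user.

    import numpy as np

    def es_step(theta, evaluate_return, npop=50, sigma=0.1, lr=0.01):
        # evaluate_return(params) -> scalar episodic return (no gradients needed)
        noise = np.random.randn(npop, theta.size)
        returns = np.array([evaluate_return(theta + sigma * n) for n in noise])
        advantages = (returns - returns.mean()) / (returns.std() + 1e-8)   # normalize returns
        grad_estimate = noise.T @ advantages / (npop * sigma)
        return theta + lr * grad_estimate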

Generative Adversarial Imitation Learning

17 code implementations NeurIPS 2016 Jonathan Ho, Stefano Ermon

Consider learning a policy from example expert behavior, without interaction with the expert or access to a reinforcement signal.

Imitation Learning, reinforcement-learning +2
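
In code, the adversarial setup looks roughly like the sketch below (one sign convention of several; the discriminator interface is assumed): a discriminator is trained to tell expert state-action pairs from the policy's, and its output is turned into a surrogate reward that a policy-gradient method (the paper uses TRPO) can maximize.

    import torch
    import torch.nn.functional as F

    def gail_discriminator_loss(D, expert_sa, policy_sa):
        # D(s, a) -> logit; expert pairs labeled 1, policy pairs labeled 0
        expert_logits = D(*expert_sa)
        policy_logits = D(*policy_sa)
        return (F.binary_cross_entropy_with_logits(expert_logits, torch.ones_like(expert_logits))
                + F.binary_cross_entropy_with_logits(policy_logits, torch.zeros_like(policy_logits)))

    def gail_reward(D, s, a):
        # surrogate reward for the policy: large when the discriminator is fooled
        return -torch.log(1.0 - torch.sigmoid(D(s, a)) + 1e-8)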

Model-Free Imitation Learning with Policy Optimization

no code implementations 26 May 2016 Jonathan Ho, Jayesh K. Gupta, Stefano Ermon

In imitation learning, an agent learns how to behave in an environment with an unknown cost function by mimicking expert demonstrations.

Imitation Learning, reinforcement-learning +2
