Search Results for author: Prafulla Dhariwal

Found 16 papers, 14 papers with code

Improved Techniques for Training Consistency Models

2 code implementations • 22 Oct 2023 • Yang Song, Prafulla Dhariwal

Consistency models are a nascent family of generative models that can sample high quality data in one step without the need for adversarial training.
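
One-step generation can be sketched as a single evaluation of a learned consistency function that maps noise at the maximum noise level directly to data. The function `consistency_fn` and the scale `sigma_max` below are illustrative placeholders, not the paper's trained model:

```python
import numpy as np

def one_step_sample(consistency_fn, shape, sigma_max=80.0, rng=None):
    # Sketch of one-step sampling: draw Gaussian noise at the maximum
    # noise level and map it to a sample with a single call to the
    # consistency function, which is trained so that f(x_sigma, sigma)
    # approximates the clean data point x_0.
    rng = rng if rng is not None else np.random.default_rng(0)
    z = rng.standard_normal(shape) * sigma_max
    return consistency_fn(z, sigma_max)
```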

Image Generation

Consistency Models

9 code implementations • 2 Mar 2023 • Yang Song, Prafulla Dhariwal, Mark Chen, Ilya Sutskever

Through extensive experiments, we demonstrate that they outperform existing distillation techniques for diffusion models in one- and few-step sampling, achieving the new state-of-the-art FID of 3.55 on CIFAR-10 and 6.20 on ImageNet 64x64 for one-step generation.

Colorization • Image Inpainting • +2

Point-E: A System for Generating 3D Point Clouds from Complex Prompts

1 code implementation • 16 Dec 2022 • Alex Nichol, Heewoo Jun, Prafulla Dhariwal, Pamela Mishkin, Mark Chen

This is in stark contrast to state-of-the-art generative image models, which produce samples in a number of seconds or minutes.

Generating 3D Point Clouds

Hierarchical Text-Conditional Image Generation with CLIP Latents

7 code implementations • 13 Apr 2022 • Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, Mark Chen

Contrastive models like CLIP have been shown to learn robust representations of images that capture both semantics and style.

Ranked #28 on Text-to-Image Generation on MS COCO (using extra training data)

Conditional Image Generation • Zero-Shot Text-to-Image Generation

GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models

2 code implementations • 20 Dec 2021 • Alex Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob McGrew, Ilya Sutskever, Mark Chen

Diffusion models have recently been shown to generate high-quality synthetic images, especially when paired with a guidance technique to trade off diversity for fidelity.
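
One such guidance technique explored in GLIDE is classifier-free guidance, which combines the model's conditional and unconditional noise predictions. A minimal sketch, with illustrative variable names:

```python
import numpy as np

def classifier_free_guidance(eps_cond, eps_uncond, guidance_scale):
    # Extrapolate the conditional noise prediction away from the
    # unconditional one; scales above 1 trade sample diversity for
    # fidelity to the conditioning text.
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)
```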

Ranked #33 on Text-to-Image Generation on MS COCO (using extra training data)

Image Inpainting • Zero-Shot Text-to-Image Generation

Diffusion Models Beat GANs on Image Synthesis

18 code implementations • NeurIPS 2021 • Prafulla Dhariwal, Alex Nichol

Finally, we find that classifier guidance combines well with upsampling diffusion models, further improving FID to 3.94 on ImageNet 256×256 and 3.85 on ImageNet 512×512.
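
Classifier guidance works by shifting the mean of each reverse-diffusion Gaussian step along the gradient of a classifier's log-probability. A hedged sketch of that mean shift (variable names are illustrative, not the paper's code):

```python
import numpy as np

def classifier_guided_mean(mu, sigma_sq, grad_log_py, scale=1.0):
    # Shift the reverse-process mean along the gradient of
    # log p(y | x_t), scaled by the step variance and a guidance
    # scale that trades diversity for fidelity.
    return mu + scale * sigma_sq * grad_log_py
```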

Conditional Image Generation

Improved Denoising Diffusion Probabilistic Models

10 code implementations • 18 Feb 2021 • Alex Nichol, Prafulla Dhariwal

Denoising diffusion probabilistic models (DDPM) are a class of generative models which have recently been shown to produce excellent samples.

Ranked #5 on Image Generation on CIFAR-10 (FD metric)

Denoising • Image Generation

Generative Pretraining from Pixels

4 code implementations • ICML 2020 • Mark Chen, Alec Radford, Rewon Child, Jeff Wu, Heewoo Jun, Prafulla Dhariwal, David Luan, Ilya Sutskever

Inspired by progress in unsupervised representation learning for natural language, we examine whether similar models can learn useful representations for images.

Ranked #15 on Image Classification on STL-10 (using extra training data)

Representation Learning • Self-Supervised Image Classification

Glow: Generative Flow with Invertible 1x1 Convolutions

27 code implementations • NeurIPS 2018 • Diederik P. Kingma, Prafulla Dhariwal

Flow-based generative models (Dinh et al., 2014) are conceptually attractive due to tractability of the exact log-likelihood, tractability of exact latent-variable inference, and parallelizability of both training and synthesis.
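
The tractable log-likelihood comes from the change-of-variables formula: each invertible layer contributes a log-determinant term. For Glow's invertible 1x1 convolution, that term is the log-determinant of the channel-mixing weight matrix, counted once per spatial position. A minimal sketch:

```python
import numpy as np

def invertible_1x1_conv(x, W):
    """Apply a 1x1 convolution (a per-pixel linear map over channels)
    to x of shape (height, width, channels) and return the output
    together with the log-determinant contribution to the exact
    log-likelihood."""
    height, width, _ = x.shape
    y = x @ W.T  # mix channels independently at every spatial position
    _, logabsdet = np.linalg.slogdet(W)
    # The change of variables adds log|det W| once per spatial position.
    return y, height * width * logabsdet
```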

Density Estimation Image Generation

GamePad: A Learning Environment for Theorem Proving

1 code implementation • ICLR 2019 • Daniel Huang, Prafulla Dhariwal, Dawn Song, Ilya Sutskever

In this paper, we introduce a system called GamePad that can be used to explore the application of machine learning methods to theorem proving in the Coq proof assistant.

Automated Theorem Proving • Position

Proximal Policy Optimization Algorithms

171 code implementations • 20 Jul 2017 • John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg Klimov

We propose a new family of policy gradient methods for reinforcement learning, which alternate between sampling data through interaction with the environment, and optimizing a "surrogate" objective function using stochastic gradient ascent.
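
The core of PPO's surrogate objective is a clipped probability ratio, which removes the incentive for destructively large policy updates. A minimal NumPy sketch of that clipped objective (batch handling and the value/entropy terms are omitted):

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    # ratio is pi_theta(a|s) / pi_theta_old(a|s) for sampled actions.
    # Clip it to [1 - eps, 1 + eps] and take the pessimistic
    # (elementwise minimum) bound, averaged over the batch.
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return np.mean(np.minimum(unclipped, clipped))
```

The minimum makes the bound pessimistic: the objective only ignores the clipping when doing so makes it worse, so gradient ascent cannot exploit ratios far from 1.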

Continuous Control • Dota 2 • +3

Variational Lossy Autoencoder

no code implementations • 8 Nov 2016 • Xi Chen, Diederik P. Kingma, Tim Salimans, Yan Duan, Prafulla Dhariwal, John Schulman, Ilya Sutskever, Pieter Abbeel

Representation learning seeks to expose certain aspects of observed data in a learned representation that's amenable to downstream tasks like classification.

Density Estimation • Image Generation • +1
