Search Results for author: Daniel Watson

Found 8 papers, 1 paper with code

Video Interpolation with Diffusion Models

1 code implementation • 1 Apr 2024 • Siddhant Jain, Daniel Watson, Eric Tabellion, Aleksander Hołyński, Ben Poole, Janne Kontkanen

We present VIDIM, a generative model for video interpolation, which creates short videos given a start and end frame.

Super-Resolution

Denoising Diffusion Probabilistic Models for Robust Image Super-Resolution in the Wild

no code implementations • 15 Feb 2023 • Hshmat Sahak, Daniel Watson, Chitwan Saharia, David Fleet

Diffusion models have shown promising results on single-image super-resolution and other image-to-image translation tasks.

Blind Super-Resolution • Denoising • +2

Novel View Synthesis with Diffusion Models

no code implementations • 6 Oct 2022 • Daniel Watson, William Chan, Ricardo Martin-Brualla, Jonathan Ho, Andrea Tagliasacchi, Mohammad Norouzi

We demonstrate that stochastic conditioning significantly improves the 3D consistency of a naive sampler for an image-to-image diffusion model, which involves conditioning on a single fixed view.

Denoising • Novel View Synthesis
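The stochastic conditioning idea above can be sketched in a few lines. This is an illustrative toy, not the paper's code: the function names, the scalar stand-in for images, and the step counts are all assumptions. The one idea it demonstrates is re-drawing the conditioning view uniformly from all previously generated views at every denoising step, instead of conditioning on a single fixed view.

```python
import random

def sample_with_stochastic_conditioning(denoise_step, first_view,
                                        num_views=4, num_steps=8):
    # Illustrative sketch (not the authors' code). Each new view starts
    # from "noise" and is denoised step by step; crucially, the view we
    # condition on is re-drawn at every step from all views generated so
    # far, rather than being one fixed view.
    views = [first_view]
    for _ in range(num_views - 1):
        x = random.gauss(0.0, 1.0)  # stand-in for an image of pure noise
        for _ in range(num_steps):
            cond = random.choice(views)  # stochastic conditioning
            x = denoise_step(x, cond)
        views.append(x)
    return views

# Toy "denoiser" that pulls the sample toward the conditioning view.
toy_denoiser = lambda x, cond: 0.5 * (x + cond)
generated = sample_with_stochastic_conditioning(toy_denoiser, first_view=1.0)
```

Because every generated view is periodically conditioned on every earlier view, later samples cannot drift arbitrarily far from the first one, which is the consistency property the abstract describes.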

Learning Fast Samplers for Diffusion Models by Differentiating Through Sample Quality

no code implementations • 11 Feb 2022 • Daniel Watson, William Chan, Jonathan Ho, Mohammad Norouzi

We introduce Differentiable Diffusion Sampler Search (DDSS): a method that optimizes fast samplers for any pre-trained diffusion model by differentiating through sample quality scores.

Image Generation • Unconditional Image Generation
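The core DDSS loop, treating the sampler's free parameters as learnable and descending a sample-quality loss, can be sketched on a toy problem. Everything here is an assumption for illustration: a one-coefficient-per-step "sampler" that only rescales noise, a stand-in quality loss on sample statistics in place of a perceptual score, and central finite differences in place of the paper's autodiff through the sampler.

```python
import numpy as np

rng = np.random.default_rng(0)

def sampler(theta, noise):
    # Toy K-step "sampler": step k rescales the sample by a learnable
    # coefficient theta[k]. Stands in for a few-step diffusion sampler
    # whose per-step degrees of freedom we want to tune.
    x = noise
    for t in theta:
        x = t * x
    return x

def quality_loss(theta, noise):
    # Stand-in for a perceptual sample-quality score: penalize sample
    # statistics that deviate from a target distribution N(0, 1).
    samples = sampler(theta, noise)
    return (samples.std() - 1.0) ** 2 + samples.mean() ** 2

# Optimize the sampler's parameters by gradient descent on sample quality.
# DDSS differentiates through the sampler with autodiff; here central
# finite differences act as a stand-in gradient.
theta = np.array([0.9, 0.9, 0.9])
noise = 3.0 * rng.normal(size=5000)  # fixed noise: reparameterized sampling
eps, lr = 1e-4, 0.05
for _ in range(300):
    grad = np.zeros_like(theta)
    for k in range(len(theta)):
        tp, tm = theta.copy(), theta.copy()
        tp[k] += eps
        tm[k] -= eps
        grad[k] = (quality_loss(tp, noise) - quality_loss(tm, noise)) / (2 * eps)
    theta = theta - lr * grad
```

Keeping the noise fixed makes the whole sampling chain a deterministic function of the parameters, which is what allows gradients of the quality score to flow back to the sampler at all.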

Optimizing Few-Step Diffusion Samplers by Gradient Descent

no code implementations • ICLR 2022 • Daniel Watson, William Chan, Jonathan Ho, Mohammad Norouzi

We propose Generalized Gaussian Diffusion Processes (GGDP), a family of non-Markovian samplers for diffusion models, and show how to improve the samples generated by pre-trained DDPMs by optimizing the degrees of freedom of the GGDP sampler family with respect to a perceptual loss.

Denoising • Image Generation • +1

Learning to Efficiently Sample from Diffusion Probabilistic Models

no code implementations • 7 Jun 2021 • Daniel Watson, Jonathan Ho, Mohammad Norouzi, William Chan

Key advantages of DDPMs include ease of training, in contrast to generative adversarial networks, and speed of generation, in contrast to autoregressive models.

Denoising • Speech Synthesis

Utilizing Character and Word Embeddings for Text Normalization with Sequence-to-Sequence Models

no code implementations • EMNLP 2018 • Daniel Watson, Nasser Zalmout, Nizar Habash

We show that providing the model with word-level features closes the gap, enabling the neural network approach to achieve a state-of-the-art F1 score on a standard Arabic language correction shared task dataset.

Word Embeddings
