2 code implementations • 23 Mar 2023 • Troy Luhman, Eric Luhman
With this method, the VAE avoids modeling the fine-grained details that constitute the majority of the image's code length, allowing it to focus on learning its structural components.
1 code implementation • 18 Oct 2022 • Eric Luhman, Troy Luhman
To address this, we introduce a KL-reweighting strategy to control the amount of information in each latent group, and employ a Gaussian output layer to reduce sharpness in the learning objective.
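The KL-reweighting idea above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names (`group_kl`, `reweighted_kl`) and the per-group scalar weights are assumptions, and the per-group KL is the standard closed form against a unit Gaussian prior.

```python
import numpy as np

def group_kl(mu, logvar):
    # Closed-form KL(q || N(0, I)) for a diagonal Gaussian posterior,
    # summed over the dimensions of one latent group.
    return 0.5 * np.sum(mu**2 + np.exp(logvar) - logvar - 1.0)

def reweighted_kl(groups, weights):
    # groups:  list of (mu, logvar) arrays, one pair per hierarchical latent group
    # weights: hypothetical per-group scalars; down-weighting a group's KL term
    #          lets it carry more information, up-weighting restricts it
    return sum(w * group_kl(mu, lv) for (mu, lv), w in zip(groups, weights))
```

A posterior that exactly matches the unit-Gaussian prior (`mu = 0`, `logvar = 0`) contributes zero KL, and each weight then scales how strongly its group is pulled toward that prior.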
3 code implementations • 9 Jul 2022 • Troy Luhman, Eric Luhman
Diffusion models are a powerful class of generative models that iteratively denoise samples to produce data.
2 code implementations • 7 Jan 2021 • Eric Luhman, Troy Luhman
Iterative generative models, such as noise conditional score networks and denoising diffusion probabilistic models, produce high quality samples by gradually denoising an initial noise vector.
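The gradual-denoising procedure described here can be sketched as a standard DDPM-style ancestral sampling loop. This is a generic illustration under assumed names, not the paper's code: `denoise_fn` stands in for a trained noise-prediction network, and `betas` is the forward-process variance schedule.

```python
import numpy as np

def sample(denoise_fn, betas, shape, rng):
    # Start from an initial Gaussian noise vector and denoise it step by step.
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    x = rng.standard_normal(shape)
    for t in reversed(range(len(betas))):
        eps = denoise_fn(x, t)                     # predicted noise at step t
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        x = (x - coef * eps) / np.sqrt(alphas[t])  # posterior mean of x_{t-1}
        if t > 0:                                  # inject noise except at the final step
            x = x + np.sqrt(betas[t]) * rng.standard_normal(shape)
    return x
```

With a real network, `denoise_fn` would be the learned score/noise predictor; here any callable of the same signature (e.g. one returning zeros) exercises the loop.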
Ranked #85 on Image Generation on CIFAR-10
2 code implementations • 13 Nov 2020 • Troy Luhman, Eric Luhman
In this paper, we propose a diffusion probabilistic model for handwriting generation.