Unconditional Image Generation
30 papers with code • 4 benchmarks • 3 datasets
Most implemented papers
Recursive Reasoning in Minimax Games: A Level $k$ Gradient Play Method
Despite the success of generative adversarial networks (GANs) in generating visually appealing images, they are notoriously challenging to train.
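The instability the abstract alludes to can be seen on a toy bilinear minimax game f(x, y) = xy: simultaneous gradient descent-ascent spirals away from the equilibrium at the origin, while a level-1 update, in which each player responds to the opponent's anticipated next step, contracts toward it. This is a minimal sketch of the recursive-reasoning idea on a toy game, not the paper's algorithm:

```python
import math

def naive_step(x, y, lr):
    # Simultaneous gradient descent-ascent on f(x, y) = x*y:
    # x descends on f, y ascends on f.
    return x - lr * y, y + lr * x

def level1_step(x, y, lr):
    # Level-1 play: each player reasons one step ahead and reacts to
    # the opponent's anticipated next iterate, not the current one.
    y_next = y + lr * x          # x anticipates y's ascent step
    x_next = x - lr * y          # y anticipates x's descent step
    return x - lr * y_next, y + lr * x_next

def run(step, steps=300, lr=0.1):
    x, y = 1.0, 1.0
    for _ in range(steps):
        x, y = step(x, y, lr)
    return math.hypot(x, y)      # distance from the equilibrium (0, 0)

print(run(naive_step))   # grows every step: the naive dynamics diverge
print(run(level1_step))  # shrinks toward the origin
```

On this game the naive squared distance is multiplied by (1 + lr²) each step, while the level-1 squared distance is multiplied by (1 − lr² + lr⁴) < 1, which is the stabilizing effect the paper pursues for GAN training.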
MAGE: MAsked Generative Encoder to Unify Representation Learning and Image Synthesis
In this work, we propose MAsked Generative Encoder (MAGE), the first framework to unify SOTA image generation and self-supervised representation learning.
RenderDiffusion: Image Diffusion for 3D Reconstruction, Inpainting and Generation
In this paper, we present RenderDiffusion, the first diffusion model for 3D generation and inference, trained using only monocular 2D supervision.
Fast Inference in Denoising Diffusion Models via MMD Finetuning
Our findings show that the proposed method produces high-quality samples in a fraction of the time required by widely used diffusion models, and outperforms state-of-the-art techniques for accelerated sampling.
Denoising Diffusion Autoencoders are Unified Self-supervised Learners
Inspired by recent advances in diffusion models, which are reminiscent of denoising autoencoders, we investigate whether they can acquire discriminative representations for classification via generative pre-training.
Diffusion Models for Constrained Domains
Denoising diffusion models are a novel class of generative algorithms that achieve state-of-the-art performance across a range of domains, including image generation and text-to-image tasks.
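The core mechanics behind these models are compact: a forward process mixes data with Gaussian noise according to a variance schedule, and a network is trained to predict that noise so the data can be recovered step by step. A minimal sketch of the standard DDPM noising parameterization (the linear schedule values here are illustrative defaults, not from any specific paper above):

```python
import math

def alpha_bar(t, T, beta_min=1e-4, beta_max=0.02):
    # Cumulative signal-retention factor for a linear beta schedule.
    ab = 1.0
    for s in range(t):
        beta = beta_min + (beta_max - beta_min) * s / (T - 1)
        ab *= 1.0 - beta
    return ab

def q_sample(x0, eps, t, T):
    # Forward process: x_t = sqrt(ab) * x0 + sqrt(1 - ab) * eps.
    ab = alpha_bar(t, T)
    return math.sqrt(ab) * x0 + math.sqrt(1.0 - ab) * eps

def predict_x0(xt, eps, t, T):
    # Invert the forward process given the (predicted) noise eps.
    ab = alpha_bar(t, T)
    return (xt - math.sqrt(1.0 - ab) * eps) / math.sqrt(ab)

# With the true noise, x0 is recovered exactly; in practice a network
# estimates eps, and sampling iterates this denoising from t = T down to 1.
xt = q_sample(1.3, eps=0.5, t=500, T=1000)
print(predict_x0(xt, eps=0.5, t=500, T=1000))  # ≈ 1.3
```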
Return of Unconditional Generation: A Self-supervised Representation Generation Method
This gap between unconditional and conditional generation can be attributed to the lack of semantic information provided by labels.
WDM: 3D Wavelet Diffusion Models for High-Resolution Medical Image Synthesis
Due to the three-dimensional nature of CT or MR scans, generative modeling of medical images is a particularly challenging task.
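The wavelet trick that makes high-resolution 3D synthesis tractable is that one level of a discrete wavelet transform halves each spatial dimension while remaining exactly invertible, so the diffusion model can operate on a much smaller representation. A minimal 1D orthonormal Haar transform sketch (a hand-rolled illustration, not WDM's implementation):

```python
import math

def haar_forward(x):
    # One level of the orthonormal Haar DWT: pairwise averages
    # (low-pass band) and differences (high-pass band), each half length.
    s = 1.0 / math.sqrt(2.0)
    low  = [(a + b) * s for a, b in zip(x[0::2], x[1::2])]
    high = [(a - b) * s for a, b in zip(x[0::2], x[1::2])]
    return low, high

def haar_inverse(low, high):
    # Perfect reconstruction from the two half-length bands.
    s = 1.0 / math.sqrt(2.0)
    x = []
    for l, h in zip(low, high):
        x.append((l + h) * s)
        x.append((l - h) * s)
    return x

signal = [4.0, 2.0, 5.0, 5.0, 1.0, 3.0, 2.0, 0.0]
low, high = haar_forward(signal)
print(haar_inverse(low, high))  # recovers the original signal
```

Applied along each of the three axes of a volume, this yields eight subbands at half resolution per axis, which is roughly the reduced representation a 3D wavelet diffusion model operates on.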
Diversity-aware Channel Pruning for StyleGAN Compression
Specifically, by assessing channel importance based on their sensitivities to latent vector perturbations, our method enhances the diversity of samples in the compressed model.
Diffusion-RWKV: Scaling RWKV-Like Architectures for Diffusion Models
Transformers have catalyzed advances in both computer vision and natural language processing (NLP).