Image Generation Models

Guided Language to Image Diffusion for Generation and Editing

Introduced by Nichol et al. in GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models

GLIDE is a generative model that applies text-guided diffusion to photorealistic image synthesis. Guided diffusion is applied to text-conditional generation: a text encoder conditions the diffusion model on natural-language descriptions, so the model can handle free-form prompts. Beyond zero-shot generation, the model is fine-tuned for image inpainting, which adds editing capabilities and allows samples to be iteratively refined to match more complex prompts.
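The core guidance step can be sketched with classifier-free guidance, the variant the GLIDE paper found most effective. This is a minimal illustration, not the paper's implementation: the function names are hypothetical, and the noise predictions here are stand-in lists rather than real U-Net outputs.

```python
def classifier_free_guidance(eps_cond, eps_uncond, scale):
    """Blend conditional and unconditional noise predictions.

    Classifier-free guidance extrapolates the text-conditional
    prediction away from the unconditional one:
        eps = eps_uncond + scale * (eps_cond - eps_uncond)
    A scale > 1 pushes samples toward the text prompt.
    """
    return [u + scale * (c - u) for c, u in zip(eps_cond, eps_uncond)]

# Toy per-pixel noise estimates (a real model would produce these
# from a diffusion U-Net run with and without the text condition).
eps_cond = [1.0, 2.0]
eps_uncond = [0.5, 1.0]
guided = classifier_free_guidance(eps_cond, eps_uncond, scale=3.0)
```

With `scale=1.0` the blend reduces to the plain conditional prediction; larger scales trade sample diversity for closer adherence to the prompt.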

Source: GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models
