Conditional Text-to-Image Synthesis

6 papers with code • 2 benchmarks • 1 dataset

Introducing extra conditions into the text-to-image generation process, following the paradigm of ControlNet.
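The ControlNet paradigm can be pictured as a trainable copy of the encoder that processes the condition and feeds its output back through zero-initialized projections, so that at initialization the conditioned model behaves exactly like the frozen base model. A minimal numpy sketch of that idea (all names are illustrative, not ControlNet's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def base_block(x, w):
    """Stand-in for a frozen U-Net block (illustrative only)."""
    return np.tanh(x @ w)

def control_branch(x, cond, w_trainable, w_zero):
    """ControlNet-style branch: process features plus condition, then
    inject the result through a zero-initialized projection. Because
    w_zero starts at zero, the branch contributes nothing at init."""
    h = np.tanh((x + cond) @ w_trainable)
    return h @ w_zero

d = 8
x = rng.normal(size=(4, d))        # latent features
cond = rng.normal(size=(4, d))     # encoded spatial condition (e.g. edges)
w_base = rng.normal(size=(d, d))
w_train = w_base.copy()            # branch weights initialized from the base
w_zero = np.zeros((d, d))          # "zero convolution" analogue

out = base_block(x, w_base) + control_branch(x, cond, w_train, w_zero)
# At initialization the conditioned output equals the base output exactly.
assert np.allclose(out, base_block(x, w_base))
```

Training then updates `w_train` and `w_zero` only, which is why this style of conditioning can be added without disturbing the pretrained text-to-image model.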

Most implemented papers

BoxDiff: Text-to-Image Synthesis with Training-Free Box-Constrained Diffusion

showlab/boxdiff ICCV 2023

Such paired condition data is time-consuming and labor-intensive to acquire and is restricted to a closed set, which potentially becomes a bottleneck for applications in an open world.
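BoxDiff's training-free approach avoids that paired data: at each denoising step, the cross-attention map of an object token is steered to concentrate inside a user-given box and stay low outside it. A toy numpy sketch of such an inner/outer box constraint (names and the exact loss are illustrative, not the paper's code):

```python
import numpy as np

def box_constraint_loss(attn, box):
    """Toy box constraint on a cross-attention map of shape (H, W).
    Rewards high attention inside the box and low attention outside,
    in the spirit of BoxDiff's inner/outer constraints (simplified)."""
    y0, x0, y1, x1 = box
    mask = np.zeros_like(attn, dtype=bool)
    mask[y0:y1, x0:x1] = True
    inner = attn[mask].mean()    # want large
    outer = attn[~mask].mean()   # want small
    return outer - inner         # its gradient would steer the latent

uniform = np.ones((16, 16)) / 256.0          # uniform attention map
focused = uniform.copy()
focused[4:12, 4:12] += 0.01                  # attention leaning into the box

loss_uniform = box_constraint_loss(uniform, (4, 4, 12, 12))
loss_focused = box_constraint_loss(focused, (4, 4, 12, 12))
assert loss_focused < loss_uniform   # concentrating in the box lowers the loss
```

In the actual method this signal is applied during sampling only, so no extra training or annotation is needed.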

GLIGEN: Open-Set Grounded Text-to-Image Generation

gligen/GLIGEN CVPR 2023

Large-scale text-to-image diffusion models have made amazing advances.

Late-Constraint Diffusion Guidance for Controllable Image Synthesis

AlonzoLeeeooo/LCDG 19 May 2023

Specifically, we train a lightweight condition adapter to establish the correlation between external conditions and internal representations of diffusion models.
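The sentence describes learning a small map between the diffusion model's internal features and an external condition, whose mismatch can then serve as late-stage guidance. A minimal numpy sketch under those assumptions (the adapter class and energy function are hypothetical, not the LCDG code):

```python
import numpy as np

rng = np.random.default_rng(1)

class LinearAdapter:
    """Toy 'lightweight condition adapter': a single linear map from
    internal diffusion features to the external condition's space."""
    def __init__(self, d_feat, d_cond):
        self.W = rng.normal(scale=0.1, size=(d_feat, d_cond))

    def __call__(self, feats):
        return feats @ self.W

def guidance_energy(adapter, feats, condition):
    """Mismatch between the adapter's prediction and the target condition;
    the gradient of this energy w.r.t. the latent would act as guidance."""
    return np.mean((adapter(feats) - condition) ** 2)

adapter = LinearAdapter(d_feat=32, d_cond=8)
feats = rng.normal(size=(64, 32))    # internal representations
target = adapter(feats)              # a perfectly matching condition
# Zero energy when the internal features already agree with the condition.
assert np.isclose(guidance_energy(adapter, feats, target), 0.0)
```

Because only this small adapter is trained, the base diffusion model stays frozen, which is the "late-constraint" appeal of the approach.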

Prompt-Free Diffusion: Taking "Text" out of Text-to-Image Diffusion Models

shi-labs/prompt-free-diffusion 25 May 2023

Text-to-image (T2I) research has grown explosively in the past year, owing to the large-scale pre-trained diffusion models and many emerging personalization and editing approaches.

InstanceDiffusion: Instance-level Control for Image Generation

frank-xwang/InstanceDiffusion 5 Feb 2024

Text-to-image diffusion models produce high quality images but do not offer control over individual instances in the image.

MIGC: Multi-Instance Generation Controller for Text-to-Image Synthesis

limuloo/migc 8 Feb 2024

Lastly, we aggregate all the shaded instances to provide the necessary information for accurately generating multiple instances in stable diffusion (SD).
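The aggregation step above can be pictured as compositing per-instance feature maps under their spatial masks into one map for the SD denoiser, averaging where instances overlap. A toy numpy sketch of such an aggregation (illustrative only, not MIGC's implementation):

```python
import numpy as np

def aggregate_instances(instance_feats, masks, background):
    """Toy aggregation: each instance's features are 'shaded' by its
    spatial mask and composited over a background feature map; regions
    covered by several masks are averaged."""
    out = background.copy()
    weight = np.ones_like(background)
    for feats, mask in zip(instance_feats, masks):
        out += feats * mask
        weight += mask
    return out / weight

h = w = 4
bg = np.zeros((h, w))                       # background features
inst = [np.full((h, w), 2.0), np.full((h, w), 4.0)]
m1 = np.zeros((h, w)); m1[:2, :] = 1.0      # instance 1 in the top half
m2 = np.zeros((h, w)); m2[2:, :] = 1.0      # instance 2 in the bottom half

agg = aggregate_instances(inst, [m1, m2], bg)
assert np.allclose(agg[:2, :], 1.0)  # (0 + 2) / (1 + 1)
assert np.allclose(agg[2:, :], 2.0)  # (0 + 4) / (1 + 1)
```

The real controller does this in attention space with learned shading, but the compositing intuition is the same.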