Consistent Character Generation
5 papers with code • 0 benchmarks • 0 datasets
Given a text prompt describing a character, the goal is to distill a representation that enables consistent depiction of the same character in novel contexts.
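A minimal sketch of what "reusing a distilled character representation" can look like in practice, assuming the Hugging Face diffusers library and a hypothetical pre-learned identity embedding file `my_character.bin` (e.g., produced by a textual-inversion-style method); the file name and placeholder token are illustrative, not from any of the papers below.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a standard text-to-image diffusion pipeline.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Register a distilled character representation as a new placeholder token.
# "my_character.bin" is an assumed, pre-trained identity embedding.
pipe.load_textual_inversion("my_character.bin", token="<my-character>")

# Depict the same character in novel contexts by reusing the token across prompts.
contexts = ["reading a book in a library", "surfing at sunset", "hiking in a snowy forest"]
for ctx in contexts:
    image = pipe(f"a photo of <my-character> {ctx}").images[0]
    image.save(f"character_{ctx.split()[0]}.png")
```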
Most implemented papers
The Chosen One: Consistent Characters in Text-to-Image Diffusion Models
Recent advances in text-to-image generation models have unlocked vast potential for visual creativity.
CAT: Contrastive Adapter Training for Personalized Image Generation
Finally, we discuss the potential of CAT for multi-concept adapters and further optimization.
CharacterFactory: Sampling Consistent Characters with GANs for Diffusion Models
In this work, we propose CharacterFactory, a framework that allows sampling new characters with consistent identities in the latent space of GANs for diffusion models.
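A rough sketch of the idea as described in the snippet: a small generator maps random noise to pseudo-identity word embeddings that can be dropped into the placeholder-token slots of a diffusion model's text encoder, so every sampled noise vector yields a new, reusable identity. The MLP layout, dimensions, and two-token assumption below are illustrative, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class IdentityEmbeddingGenerator(nn.Module):
    """Maps a noise vector to embeddings for a pair of placeholder identity tokens."""
    def __init__(self, noise_dim=64, embed_dim=768, num_tokens=2):
        super().__init__()
        self.num_tokens = num_tokens
        self.net = nn.Sequential(
            nn.Linear(noise_dim, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, embed_dim * num_tokens),
        )

    def forward(self, z):
        out = self.net(z)                                  # (batch, embed_dim * num_tokens)
        return out.view(z.shape[0], self.num_tokens, -1)   # (batch, num_tokens, embed_dim)

# Sampling a new, consistent identity is a single forward pass; the resulting
# embeddings would be injected into the text encoder's placeholder-token slots.
gen = IdentityEmbeddingGenerator()
identity_embeds = gen(torch.randn(1, 64))
```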
Ada-VE: Training-Free Consistent Video Editing Using Adaptive Motion Prior
This allows cross-frame attention to span more frames within the same computational budget, enhancing both video quality and temporal coherence.
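A simplified sketch of budgeted cross-frame attention, assuming each frame's tokens have already been projected to queries, keys, and values. Frame selection here is plain uniform sampling; the paper's adaptive motion prior would choose reference frames differently, so this only illustrates the cost structure.

```python
import torch
import torch.nn.functional as F

def sparse_cross_frame_attention(q, k, v, budget=4):
    """
    q, k, v: (frames, tokens, dim) per-frame queries, keys, and values.
    budget:  number of reference frames every frame is allowed to attend to.
    """
    num_frames = q.shape[0]
    # Pick a fixed set of reference frames within the budget (uniformly spaced here).
    ref_idx = torch.linspace(0, num_frames - 1, steps=min(budget, num_frames)).long()
    ref_k = k[ref_idx].flatten(0, 1)  # (budget * tokens, dim)
    ref_v = v[ref_idx].flatten(0, 1)

    outputs = []
    for f in range(num_frames):
        # Every frame attends to the same shared reference tokens, keeping the cost
        # linear in the number of frames instead of quadratic.
        attn = F.scaled_dot_product_attention(
            q[f].unsqueeze(0), ref_k.unsqueeze(0), ref_v.unsqueeze(0)
        )
        outputs.append(attn.squeeze(0))
    return torch.stack(outputs)  # (frames, tokens, dim)

out = sparse_cross_frame_attention(
    torch.randn(8, 16, 64), torch.randn(8, 16, 64), torch.randn(8, 16, 64)
)
```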
Character-Adapter: Prompt-Guided Region Control for High-Fidelity Character Customization
Customized image generation, which seeks to synthesize images with consistent characters, holds significant relevance for applications such as storytelling, portrait generation, and character design.