Sketch-to-Image Translation
9 papers with code • 3 benchmarks • 5 datasets
Latest papers with no code
Sketch-Guided Text-to-Image Diffusion Models
In this work, we introduce a universal approach to guide a pretrained text-to-image diffusion model with a spatial map from another domain (e.g., a sketch) at inference time.
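The idea of inference-time spatial guidance can be illustrated with a toy loop: at each denoising step, the sample is nudged toward agreement with a conditioning map. The sketch below is a schematic stand-in only — the "denoiser" is a placeholder, the function name and parameters are invented for illustration, and this is not the paper's actual method or any real diffusion library's API.

```python
import numpy as np

def guided_denoise(x, spatial_map, steps=50, guidance_scale=0.1, seed=0):
    """Toy illustration of inference-time guidance: at each refinement
    step, nudge the sample toward a conditioning spatial map (e.g., a
    sketch). A real diffusion sampler would call a trained denoiser;
    here a simple shrink-plus-noise update stands in for it."""
    rng = np.random.default_rng(seed)
    for t in range(steps, 0, -1):
        noise_level = t / steps
        # placeholder "denoiser" update (a real model predicts the noise)
        x = x * (1 - 0.5 * noise_level) + rng.normal(0, 0.01 * noise_level, x.shape)
        # guidance: gradient of 0.5 * ||x - spatial_map||^2 is (x - spatial_map);
        # stepping against it pulls the sample toward the conditioning map
        x = x - guidance_scale * (x - spatial_map)
    return x

# a simple square "sketch" as the conditioning map
sketch = np.zeros((8, 8))
sketch[2:6, 2:6] = 1.0
x0 = np.random.default_rng(1).normal(size=(8, 8))
out = guided_denoise(x0, sketch)
# the guided sample ends up much closer to the sketch than the initial noise
print(np.abs(out - sketch).mean(), np.abs(x0 - sketch).mean())
```

The key point the snippet conveys is that guidance requires no retraining: the conditioning signal only modifies the sampling trajectory at inference time.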
DeepPortraitDrawing: Generating Human Body Images from Freehand Sketches
Researchers have explored various ways to generate realistic images from freehand sketches, e.g., of objects and human faces.
Flexible Portrait Image Editing with Fine-Grained Control
We develop a new method for portrait image editing that supports fine-grained editing of geometry, color, lighting, and shadows using a single neural network model.
Multi-Density Sketch-to-Image Translation Network
Sketch-to-image (S2I) translation plays an important role in image synthesis and manipulation tasks, such as photo editing and colorization.
Examining Performance of Sketch-to-Image Translation Models with Multiclass Automatically Generated Paired Training Data
Models for sketch-to-image translation are therefore usually trained on sets of paired training data that are carefully and laboriously designed.