SketchyCOCO: Image Generation from Freehand Scene Sketches

We introduce the first method for automatic image generation from scene-level freehand sketches. Our model allows for controllable image generation by specifying the synthesis goal via a freehand sketch. The key contribution is an attribute-vector-bridged Generative Adversarial Network, called EdgeGAN, which supports high-visual-quality object-level image generation without using freehand sketches as training data. To support and evaluate the solution, we have built a large-scale composite dataset called SketchyCOCO. We validate our approach on both object-level and scene-level image generation on SketchyCOCO. Through quantitative and qualitative results, human evaluation, and ablation studies, we demonstrate the method's capacity to generate realistic, complex scene-level images from a variety of freehand sketches.

CVPR 2020

Datasets


Introduced in the Paper:

SketchyCOCO

Used in the Paper:

Scribble
| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|------|---------|-------|-------------|--------------|-------------|
| Sketch-to-Image Translation | Scribble | EdgeGAN | FID | 259.7 | # 2 |
| Sketch-to-Image Translation | Scribble | EdgeGAN | Accuracy | 100% | # 1 |
| Sketch-to-Image Translation | Scribble | EdgeGAN | Human (%) | 25.20 | # 2 |
| Sketch-to-Image Translation | SketchyCOCO | EdgeGAN | FID | 169.7 | # 2 |
| Sketch-to-Image Translation | SketchyCOCO | EdgeGAN | Accuracy | 75.8% | # 1 |
| Sketch-to-Image Translation | SketchyCOCO | EdgeGAN | Human (%) | 22.55 | # 2 |
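The FID scores above measure how closely the statistics of generated images match those of real images: each image set is embedded with an Inception network, a Gaussian is fitted to each set of features, and the Fréchet distance between the two Gaussians is reported (lower is better). A minimal sketch of that final distance computation, assuming the mean vectors and covariance matrices have already been extracted (the function name and inputs are illustrative, not code from the paper):

```python
import numpy as np
from scipy import linalg

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Fréchet distance between two Gaussians N(mu1, sigma1) and N(mu2, sigma2).

    FID = ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 * sqrt(sigma1 @ sigma2))
    """
    diff = mu1 - mu2
    # Matrix square root of the covariance product; small numerical
    # imaginary parts are discarded.
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))

# Identical distributions give a distance of zero.
mu, sigma = np.zeros(3), np.eye(3)
print(frechet_distance(mu, sigma, mu, sigma))  # ~0.0
```

In the full metric, `mu` and `sigma` come from Inception pool features of thousands of images; the snippet only shows the closed-form distance between the fitted Gaussians.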

Methods


No methods listed for this paper.