Artistic Glyph Image Synthesis via One-Stage Few-Shot Learning

11 Oct 2019 · Yue Gao, Yuan Guo, Zhouhui Lian, Yingmin Tang, Jianguo Xiao

Automatic generation of artistic glyph images is a challenging task that attracts significant research interest. Previous methods are either specifically designed for shape synthesis or focused on texture transfer. In this paper, we propose a novel model, AGIS-Net, to transfer both shape and texture styles in one stage with only a few stylized samples. To achieve this goal, we first disentangle the representations of content and style using two encoders, enabling multi-content and multi-style generation. We then employ two collaboratively working decoders to generate the glyph shape image and its texture image simultaneously. In addition, we introduce a local texture refinement loss to further improve the quality of the synthesized textures. In this manner, our one-stage model is much more efficient and effective than other multi-stage stacked methods. We also propose a large-scale dataset of Chinese glyph images in various shape and texture styles, rendered from 35 professionally designed artistic fonts with 7,326 characters and 2,460 synthetic artistic fonts with 639 characters, to validate the effectiveness and extensibility of our method. Extensive experiments on both English and Chinese artistic glyph image datasets demonstrate the superiority of our model over other state-of-the-art methods in generating high-quality stylized glyph images.
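The two-encoder / two-decoder data flow described in the abstract can be sketched as follows. This is an illustrative NumPy mock-up with made-up layer sizes and random linear maps standing in for the paper's convolutional networks; it shows only the forward data flow (content code + aggregated few-shot style code feeding a shape decoder and a texture decoder), not the authors' actual architecture or adversarial training.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "network": a random linear map standing in for a conv encoder/decoder.
def linear(in_dim, out_dim):
    W = rng.standard_normal((in_dim, out_dim)) * 0.01
    return lambda x: x @ W

IMG, CONTENT_DIM, STYLE_DIM = 64 * 64, 128, 64  # hypothetical sizes

content_encoder = linear(IMG, CONTENT_DIM)               # what character it is
style_encoder   = linear(IMG, STYLE_DIM)                 # shape + texture style
shape_decoder   = linear(CONTENT_DIM + STYLE_DIM, IMG)   # grayscale glyph shape
texture_decoder = linear(CONTENT_DIM + STYLE_DIM + IMG, 3 * IMG)  # colored texture

content_img = rng.standard_normal(IMG)       # source glyph in a plain reference font
style_imgs  = rng.standard_normal((5, IMG))  # few-shot stylized samples

c = content_encoder(content_img)
s = style_encoder(style_imgs).mean(axis=0)   # aggregate style code over the few shots

z = np.concatenate([c, s])                   # disentangled content + style codes
shape_out = shape_decoder(z)                 # decoder 1: glyph shape image
texture_out = texture_decoder(np.concatenate([z, shape_out]))  # decoder 2 conditions on shape

print(shape_out.shape, texture_out.shape)    # (4096,) (12288,)
```

The key point the sketch mirrors is that both decoders share the same content/style codes, and the texture decoder additionally sees the decoded shape, so shape and texture are produced collaboratively in a single stage.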


Results

| Task | Dataset | Model | Metric | Value | Global Rank |
| --- | --- | --- | --- | --- | --- |
| Glyph Image Generation | Chinese Glyph | AGIS-Net | FID | 70.875 | #1 |
| Glyph Image Generation | Chinese Glyph | AGIS-Net | Inception score | 2.1122 | #1 |
| Glyph Image Generation | Chinese Glyph | AGIS-Net | Pixel Accuracy | 0.7035 | #1 |
| Glyph Image Generation | Chinese Glyph | AGIS-Net | SSIM | 0.6116 | #1 |
| Glyph Image Generation | English Glyph | AGIS-Net | FID | 73.893 | #1 |
| Glyph Image Generation | English Glyph | AGIS-Net | Inception score | 3.8151 | #1 |
| Glyph Image Generation | English Glyph | AGIS-Net | Pixel Accuracy | 0.6249 | #1 |
| Glyph Image Generation | English Glyph | AGIS-Net | SSIM | 0.7217 | #1 |
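Two of the table's metrics are simple to compute directly. The sketch below shows pixel accuracy (fraction of matching pixels after binarization) and a simplified global SSIM evaluated over the whole image rather than the usual sliding-window variant; the images here are random stand-ins, not data from the paper.

```python
import numpy as np

def pixel_accuracy(pred, target, threshold=0.5):
    """Fraction of pixels that agree after binarizing both images at `threshold`."""
    return float(np.mean((pred > threshold) == (target > threshold)))

def ssim_global(x, y, data_range=1.0):
    """Simplified SSIM computed once over the whole image (no sliding window)."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

img = np.random.default_rng(0).random((64, 64))
print(pixel_accuracy(img, img))  # 1.0 for identical images
print(ssim_global(img, img))     # 1.0 for identical images
```

FID and Inception score, by contrast, require a pretrained Inception network over sets of images, so they are not reproduced here.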
