AnimeGAN: A Novel Lightweight GAN for Photo Animation

In this paper, we propose a novel approach for transforming photos of real-world scenes into anime-style images, a meaningful and challenging task in computer vision and artistic style transfer. Our approach combines neural style transfer and generative adversarial networks (GANs). Existing methods have not achieved satisfactory animation results for this task; their main problems include: 1) the generated images have no obvious anime-style textures; 2) the generated images lose the content of the original images; 3) the network parameters require large memory. To address these problems, we propose a novel lightweight generative adversarial network, called AnimeGAN, to achieve fast animation style transfer. In addition, we propose three novel loss functions to give the generated images better animation visual effects: the grayscale style loss, the grayscale adversarial loss, and the color reconstruction loss. AnimeGAN can easily be trained end-to-end with unpaired training data, and its parameters require less memory than those of existing methods. Experimental results show that our method can rapidly transform real-world photos into high-quality anime images and outperforms state-of-the-art methods.
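The three losses are defined in the full paper; as a rough illustration, below is a minimal PyTorch sketch of a color reconstruction loss that compares the generated image with the input photo in YUV space, with L1 on the luminance channel and Huber on the chrominance channels. The conversion coefficients and the unweighted sum of terms here are assumptions for illustration, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def rgb_to_yuv(img):
    """Convert an (N, 3, H, W) RGB tensor in [0, 1] to Y, U, V channels.
    BT.601 coefficients are assumed here; the paper's exact color-space
    constants may differ."""
    r, g, b = img[:, 0], img[:, 1], img[:, 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.147 * r - 0.289 * g + 0.436 * b
    v = 0.615 * r - 0.515 * g - 0.100 * b
    return y, u, v

def color_reconstruction_loss(generated, photo):
    """Sketch of a color reconstruction loss: L1 on luminance (Y),
    Huber (smooth L1) on chrominance (U, V), encouraging the generated
    anime image to keep the input photo's colors."""
    gy, gu, gv = rgb_to_yuv(generated)
    py, pu, pv = rgb_to_yuv(photo)
    return (F.l1_loss(gy, py)
            + F.smooth_l1_loss(gu, pu)
            + F.smooth_l1_loss(gv, pv))
```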
