Search Results for author: Yifang Men

Found 6 papers, 2 papers with code

DCT-Net: Domain-Calibrated Translation for Portrait Stylization

3 code implementations • 6 Jul 2022 • Yifang Men, Yuan YAO, Miaomiao Cui, Zhouhui Lian, Xuansong Xie

This paper introduces DCT-Net, a novel image translation architecture for few-shot portrait stylization.

Few-Shot Learning • Style Transfer • +1

Unpaired Cartoon Image Synthesis via Gated Cycle Mapping

no code implementations • CVPR 2022 • Yifang Men, Yuan YAO, Miaomiao Cui, Zhouhui Lian, Xuansong Xie, Xian-Sheng Hua

Experimental results demonstrate the superiority of the proposed method over the state of the art and validate its effectiveness on the brand-new task of general cartoon image synthesis.

Image Generation • Video Generation

Controllable Person Image Synthesis with Attribute-Decomposed GAN

2 code implementations • CVPR 2020 • Yifang Men, Yiming Mao, Yuning Jiang, Wei-Ying Ma, Zhouhui Lian

This paper introduces the Attribute-Decomposed GAN, a novel generative model for controllable person image synthesis, which can produce realistic person images with desired human attributes (e.g., pose, head, upper clothes, and pants) provided in various source inputs.

Attribute • Continuous Control • +1

DynTypo: Example-Based Dynamic Text Effects Transfer

no code implementations • CVPR 2019 • Yifang Men, Zhouhui Lian, Yingmin Tang, Jianguo Xiao

In this paper, we present a novel approach to dynamic text effects transfer using example-based texture synthesis.

Text Effects Transfer • Texture Synthesis

A Common Framework for Interactive Texture Transfer

no code implementations • CVPR 2018 • Yifang Men, Zhouhui Lian, Yingmin Tang, Jianguo Xiao

In this paper, we present a general-purpose solution to interactive texture transfer problems that better preserves both local structure and visual richness.
