Search Results for author: Xudong Mao

Found 11 papers, 10 papers with code

Collaborative Learning of Bidirectional Decoders for Unsupervised Text Style Transfer

1 code implementation • EMNLP 2021 • Yun Ma, Yangbin Chen, Xudong Mao, Qing Li

In this paper, we propose a collaborative learning framework for unsupervised text style transfer using a pair of bidirectional decoders, one decoding from left to right and the other from right to left.

Attribute • Knowledge Distillation • +3
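The collaborative bidirectional decoding described above can be illustrated with a short PyTorch sketch. This is not the authors' implementation: the module structure, the GRU decoders, and the plain sum of the two cross-entropy losses are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn

class BidirectionalDecoders(nn.Module):
    """Illustrative pair of decoders sharing one encoder state: one reads the
    target left-to-right, the other reads the reversed (right-to-left) target."""
    def __init__(self, vocab_size, hidden):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.dec_l2r = nn.GRU(hidden, hidden, batch_first=True)
        self.dec_r2l = nn.GRU(hidden, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, vocab_size)

    def forward(self, inp_l2r, inp_r2l, h0):
        out_l2r, _ = self.dec_l2r(self.embed(inp_l2r), h0)
        out_r2l, _ = self.dec_r2l(self.embed(inp_r2l), h0)
        return self.proj(out_l2r), self.proj(out_r2l)

def joint_loss(model, tgt, h0):
    # Teacher forcing in both directions; the two losses are simply summed here.
    ce = nn.CrossEntropyLoss()
    rev = tgt.flip(1)  # the same sentence read right-to-left
    logits_l2r, logits_r2l = model(tgt[:, :-1], rev[:, :-1], h0)
    loss_l2r = ce(logits_l2r.reshape(-1, logits_l2r.size(-1)), tgt[:, 1:].reshape(-1))
    loss_r2l = ce(logits_r2l.reshape(-1, logits_r2l.size(-1)), rev[:, 1:].reshape(-1))
    return loss_l2r + loss_r2l
```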

Cross Initialization for Personalized Text-to-Image Generation

1 code implementation • 26 Dec 2023 • Lianyu Pang, Jian Yin, Haoran Xie, Qiping Wang, Qing Li, Xudong Mao

Additionally, a fast version of our method allows for capturing an input image in roughly 26 seconds, while surpassing the baseline methods in terms of both reconstruction and editability.

Text-to-Image Generation

Cycle Encoding of a StyleGAN Encoder for Improved Reconstruction and Editability

1 code implementation • 19 Jul 2022 • Xudong Mao, Liujuan Cao, Aurele T. Gnanha, Zhenguo Yang, Qing Li, Rongrong Ji

The recently proposed pivotal tuning model makes significant progress towards reconstruction and editability, by using a two-step approach that first inverts the input image into a latent code, called pivot code, and then alters the generator so that the input image can be accurately mapped into the pivot code.
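The two-step procedure summarized above (invert the image to a pivot code, then tune the generator around that code) can be sketched roughly as follows. This is a minimal illustration under assumed interfaces: `encoder` and `generator` are placeholder modules, and the plain L1 reconstruction loss stands in for the full objective.

```python
import torch
import torch.nn.functional as F

def pivotal_tuning(generator, encoder, image, steps=350, lr=3e-4):
    """Hedged two-step sketch: (1) invert the image into a pivot latent code,
    (2) fine-tune the generator so that code reproduces the image."""
    # Step 1: invert the input image into the pivot code (encoder kept frozen).
    with torch.no_grad():
        pivot_code = encoder(image)

    # Step 2: tune only the generator weights around the fixed pivot code.
    opt = torch.optim.Adam(generator.parameters(), lr=lr)
    for _ in range(steps):
        recon = generator(pivot_code)
        # A perceptual term (e.g. LPIPS) is typically added in practice.
        loss = F.l1_loss(recon, image)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return pivot_code, generator
```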

Revisiting Discriminator in GAN Compression: A Generator-discriminator Cooperative Compression Scheme

1 code implementation • NeurIPS 2021 • Shaojie Li, Jie Wu, Xuefeng Xiao, Fei Chao, Xudong Mao, Rongrong Ji

In this work, we revisit the role of the discriminator in GAN compression and design a novel generator-discriminator cooperative compression scheme, termed GCC.

Image-to-image Translation via Hierarchical Style Disentanglement

1 code implementation • CVPR 2021 • Xinyang Li, Shengchuan Zhang, Jie Hu, Liujuan Cao, Xiaopeng Hong, Xudong Mao, Feiyue Huang, Yongjian Wu, Rongrong Ji

Recently, image-to-image translation has made significant progress in achieving both multi-label (i.e., translation conditioned on different labels) and multi-style (i.e., generation with diverse styles) tasks.

Disentanglement • Multimodal Unsupervised Image-To-Image Translation • +1

Virtual Mixup Training for Unsupervised Domain Adaptation

4 code implementations • 10 May 2019 • Xudong Mao, Yun Ma, Zhenguo Yang, Yangbin Chen, Qing Li

Existing methods impose the locally-Lipschitz constraint only around the training points while missing other areas, such as the points in between training data.

Unsupervised Domain Adaptation
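A hedged sketch of how a mixup-style penalty can enforce smoothness at the points lying between training samples, the gap identified above. The KL-consistency formulation below is an illustrative approximation, not the authors' exact objective.

```python
import torch
import torch.nn.functional as F

def virtual_mixup_penalty(model, x, alpha=1.0):
    """Hedged sketch: penalize inconsistency at virtual points lying between
    training samples, where a purely local Lipschitz constraint says nothing."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1.0 - lam) * x[perm]          # point between two samples
    with torch.no_grad():
        p = F.softmax(model(x), dim=1)               # predictions at the endpoints
        target = lam * p + (1.0 - lam) * p[perm]     # linearly mixed prediction
    log_p_mix = F.log_softmax(model(x_mix), dim=1)
    # The model's prediction at the virtual point should match the mixed prediction.
    return F.kl_div(log_p_mix, target, reduction="batchmean")
```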

Unpaired Multi-Domain Image Generation via Regularized Conditional GANs

1 code implementation • 7 May 2018 • Xudong Mao, Qing Li

To tackle this problem, we propose Regularized Conditional GAN (RegCGAN) which is capable of learning to generate corresponding images in the absence of paired training data.

Image Generation • Unsupervised Domain Adaptation

On the Effectiveness of Least Squares Generative Adversarial Networks

2 code implementations • 18 Dec 2017 • Xudong Mao, Qing Li, Haoran Xie, Raymond Y. K. Lau, Zhen Wang, Stephen Paul Smolley

To overcome such a problem, we propose in this paper the Least Squares Generative Adversarial Networks (LSGANs) which adopt the least squares loss for both the discriminator and the generator.
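Written out, the least squares objectives replace the usual cross-entropy terms with squared errors against fixed labels. The sketch below uses the common 0-1 label coding (a = 0 for fake, b = 1 for real, c = 1 as the value the generator wants the discriminator to assign to fakes); treat it as one reasonable instantiation rather than the paper's reference code.

```python
import torch

def lsgan_losses(d_real, d_fake_for_d, d_fake_for_g, a=0.0, b=1.0, c=1.0):
    """Least squares GAN objectives: squared errors of raw discriminator
    outputs against the labels a (fake), b (real), and c (generator target)."""
    d_loss = 0.5 * torch.mean((d_real - b) ** 2) \
           + 0.5 * torch.mean((d_fake_for_d - a) ** 2)
    g_loss = 0.5 * torch.mean((d_fake_for_g - c) ** 2)
    return d_loss, g_loss
```

In a training loop, `d_fake_for_d` would typically be computed on detached generator outputs, while `d_fake_for_g` keeps the graph so gradients flow back to the generator.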

AlignGAN: Learning to Align Cross-Domain Images with Conditional Generative Adversarial Networks

no code implementations • 5 Jul 2017 • Xudong Mao, Qing Li, Haoran Xie

Recently, several methods based on generative adversarial networks (GANs) have been proposed for the task of aligning cross-domain images or learning a joint distribution of cross-domain images.

Generative Adversarial Network

Least Squares Generative Adversarial Networks

23 code implementations • ICCV 2017 • Xudong Mao, Qing Li, Haoran Xie, Raymond Y. K. Lau, Zhen Wang, Stephen Paul Smolley

To overcome such a problem, we propose in this paper the Least Squares Generative Adversarial Networks (LSGANs) which adopt the least squares loss function for the discriminator.
