Search Results for author: Bingchen Liu

Found 12 papers, 6 papers with code

Diffusion Guided Domain Adaptation of Image Generators

no code implementations • 8 Dec 2022 • Kunpeng Song, Ligong Han, Bingchen Liu, Dimitris Metaxas, Ahmed Elgammal

Can a text-to-image diffusion model be used as a training objective for adapting a GAN generator to another domain?

Domain Adaptation
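
For intuition, here is a minimal sketch of how a frozen text-to-image diffusion model can serve as a training objective for a GAN generator, in the spirit of score distillation. All names here (eps_model, alphas_cumprod, text_emb) are illustrative stand-ins, not the paper's API:

    # Hedged sketch (not the paper's code): a frozen diffusion noise predictor
    # supplies the training signal for generator outputs x.
    import torch

    def sds_style_loss(eps_model, x, text_emb, alphas_cumprod):
        """Score-distillation-style loss: nudge generator output x toward
        regions the diffusion model finds likely under the target prompt."""
        b = x.size(0)
        t = torch.randint(1, len(alphas_cumprod), (b,), device=x.device)
        a_bar = alphas_cumprod[t].view(b, 1, 1, 1)
        noise = torch.randn_like(x)
        x_t = a_bar.sqrt() * x + (1 - a_bar).sqrt() * noise   # forward-diffuse
        with torch.no_grad():
            eps_hat = eps_model(x_t, t, text_emb)             # frozen predictor
        # Gradient flows only through x: (eps_hat - noise) acts as the score signal.
        return ((eps_hat - noise).detach() * x).sum() / b

    # Training step: z -> G(z) -> diffusion-guided loss -> update G only, e.g.
    # g_opt.zero_grad(); sds_style_loss(eps, G(z), emb, a_bar).backward(); g_opt.step()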

Shifted Diffusion for Text-to-image Generation

no code implementations • 24 Nov 2022 • Yufan Zhou, Bingchen Liu, Yizhe Zhu, Xiao Yang, Changyou Chen, Jinhui Xu

Unlike the baseline diffusion model used in DALL-E 2, our method seamlessly encodes prior knowledge from the pre-trained CLIP model into its diffusion process by designing a new initialization distribution and a new transition step for the diffusion.

Zero-Shot Text-to-Image Generation
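
Reading the excerpt above, the key move in Shifted Diffusion is to change where the forward process ends up. A minimal sketch, assuming the shift targets a prior mean mu estimated from CLIP embeddings; the exact shift schedule in the paper may differ:

    # Hedged, illustrative sketch: the forward process drifts toward mu so that
    # x_T concentrates near the embedding manifold instead of N(0, I).
    import torch

    def shifted_forward(x0, t, alphas_cumprod, mu):
        """q(x_t | x_0): interpolate toward mu instead of toward noise mean 0."""
        a_bar = alphas_cumprod[t]
        mean = a_bar.sqrt() * x0 + (1 - a_bar.sqrt()) * mu   # shifted mean
        return mean + (1 - a_bar).sqrt() * torch.randn_like(x0)

    # At t = T (a_bar ~ 0) samples concentrate around N(mu, I), so generation
    # can be initialized from the shifted prior rather than a standard Gaussian.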

PIVQGAN: Posture and Identity Disentangled Image-to-Image Translation via Vector Quantization

no code implementations • 29 Sep 2021 • Bingchen Liu, Yizhe Zhu, Xiao Yang, Ahmed Elgammal

The VQSN module enables a finer-grained separation of posture and identity, while the training scheme ensures that it learns pose-related representations.

Disentanglement • Image-to-Image Translation • +2
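
The VQSN module builds on vector quantization; as background, here is a generic VQ-VAE-style quantizer with a straight-through estimator (a standard construction, not the authors' VQSN code):

    import torch

    def vector_quantize(z, codebook):
        """Snap each feature vector in z (N, D) to its nearest codebook entry
        (K, D), with a straight-through estimator so gradients reach the encoder."""
        d = torch.cdist(z, codebook)               # (N, K) pairwise distances
        idx = d.argmin(dim=1)                      # nearest code per vector
        z_q = codebook[idx]                        # quantized output
        commit = ((z - z_q.detach()) ** 2).mean()  # commitment loss term
        z_q = z + (z_q - z).detach()               # straight-through gradient
        return z_q, idx, commit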

Towards Faster and Stabilized GAN Training for High-fidelity Few-shot Image Synthesis

6 code implementations • ICLR 2021 • Bingchen Liu, Yizhe Zhu, Kunpeng Song, Ahmed Elgammal

Training Generative Adversarial Networks (GANs) on high-fidelity images usually requires large-scale GPU clusters and a vast number of training images.

Image Generation

Self-Supervised Sketch-to-Image Synthesis

1 code implementation • 16 Dec 2020 • Bingchen Liu, Yizhe Zhu, Kunpeng Song, Ahmed Elgammal

Moreover, with the proposed sketch generator, the model shows promising performance on style mixing and style transfer, which require synthesized images to be both style-consistent and semantically meaningful.

Image Generation • Self-Supervised Learning • +1

TIME: Text and Image Mutual-Translation Adversarial Networks

no code implementations • 27 May 2020 • Bingchen Liu, Kunpeng Song, Yizhe Zhu, Gerard de Melo, Ahmed Elgammal

Focusing on text-to-image (T2I) generation, we propose Text and Image Mutual-Translation Adversarial Networks (TIME), a lightweight but effective model that jointly learns a T2I generator G and an image captioning discriminator D under the Generative Adversarial Network framework.

Image Captioning • Language Modelling • +2
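
A hedged sketch of the mutual-translation training loop described above: D both scores realism and captions images, and G is trained against both signals. The module interfaces and loss weighting below are assumptions, not the paper's implementation:

    import torch
    import torch.nn.functional as F

    def time_style_step(G, D, text_tokens, real_images, g_opt, d_opt):
        fake = G(text_tokens)
        # D returns a real/fake score and caption logits over the vocabulary.
        d_real, cap_real = D(real_images)
        d_fake, cap_fake = D(fake.detach())
        d_loss = (F.softplus(-d_real) + F.softplus(d_fake)).mean() \
               + F.cross_entropy(cap_real.flatten(0, 1), text_tokens.flatten())
        d_opt.zero_grad(); d_loss.backward(); d_opt.step()

        d_fake2, cap_fake2 = D(fake)
        # G is pushed both to fool D and to make its images captionable back
        # into the conditioning text (the "mutual translation" signal).
        g_loss = F.softplus(-d_fake2).mean() \
               + F.cross_entropy(cap_fake2.flatten(0, 1), text_tokens.flatten())
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()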

Sketch-to-Art: Synthesizing Stylized Art Images From Sketches

1 code implementation • 26 Feb 2020 • Bingchen Liu, Kunpeng Song, Ahmed Elgammal

We propose a new approach for synthesizing fully detailed art-stylized images from sketches.

OOGAN: Disentangling GAN with One-Hot Sampling and Orthogonal Regularization

1 code implementation • 26 May 2019 • Bingchen Liu, Yizhe Zhu, Zuohui Fu, Gerard de Melo, Ahmed Elgammal

Exploring the potential of GANs for unsupervised disentanglement learning, this paper proposes a novel GAN-based disentanglement framework with One-Hot Sampling and Orthogonal Regularization (OOGAN).

Disentanglement
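
For concreteness, here are generic forms of the two ingredients named in the OOGAN title; their exact placement in the model is the paper's, and the code below is only an illustrative sketch:

    import torch

    def sample_onehot_code(batch, k):
        """Sample the disentanglement code c as one-hot vectors, so each draw
        activates a single latent factor at a time."""
        idx = torch.randint(k, (batch,))
        return torch.nn.functional.one_hot(idx, k).float()

    def orthogonal_penalty(W):
        """|| W W^T - I ||_F^2: encourage rows of W (one per latent factor) to
        stay orthogonal, discouraging factors from encoding the same thing."""
        gram = W @ W.t()
        eye = torch.eye(W.size(0), device=W.device)
        return ((gram - eye) ** 2).sum()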

Learning Feature-to-Feature Translator by Alternating Back-Propagation for Generative Zero-Shot Learning

1 code implementation • ICCV 2019 • Yizhe Zhu, Jianwen Xie, Bingchen Liu, Ahmed Elgammal

We investigate learning feature-to-feature translator networks by alternating back-propagation as a general-purpose solution to zero-shot learning (ZSL) problems.

Zero-Shot Learning
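
As background on the training scheme, a minimal sketch of alternating back-propagation for a conditional translator network: alternate Langevin-style inference of the latent z with ordinary gradient updates of the weights. The step sizes, the Gaussian prior, and the net(attr, z) interface are illustrative assumptions:

    import torch

    def abp_step(net, z, attr, feat, opt, sigma=0.3, lr_z=0.1, n_langevin=20):
        # (1) Inference: posterior sampling of z by noisy gradient descent.
        z = z.detach().requires_grad_(True)
        for _ in range(n_langevin):
            recon = ((feat - net(attr, z)) ** 2).sum() / (2 * sigma ** 2)
            prior = (z ** 2).sum() / 2                 # standard normal prior
            grad, = torch.autograd.grad(recon + prior, z)
            z = (z - 0.5 * lr_z ** 2 * grad
                   + lr_z * torch.randn_like(z)).detach().requires_grad_(True)
        # (2) Learning: ordinary back-prop on the weights with z held fixed.
        loss = ((feat - net(attr, z.detach())) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
        return z.detach(), loss.item()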

CAN: Creative Adversarial Networks, Generating "Art" by Learning About Styles and Deviating from Style Norms

9 code implementations • 21 Jun 2017 • Ahmed Elgammal, Bingchen Liu, Mohamed Elhoseiny, Marian Mazzone

We argue that such networks, in their original design, are limited in their ability to generate creative products.
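
The deviation-from-style-norms idea is often implemented as a style-ambiguity term; here is a sketch of that generic loss, not necessarily the authors' exact formulation: besides real/fake, D classifies an image's art style, and G is rewarded for images whose predicted style distribution is close to uniform.

    import torch
    import torch.nn.functional as F

    def style_ambiguity_loss(style_logits):
        """Cross-entropy between D's predicted style distribution and the
        uniform distribution over K styles; minimizing it pushes G away from
        committing to any single learned style."""
        k = style_logits.size(1)
        log_p = F.log_softmax(style_logits, dim=1)
        return -(log_p / k).sum(dim=1).mean()   # H(uniform, p)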
