1 code implementation • arXiv 2021 • BoWen Zhang, Shuyang Gu, Bo Zhang, Jianmin Bao, Dong Chen, Fang Wen, Yong Wang, Baining Guo
To this end, we believe that local attention is crucial to striking the balance between computational efficiency and modeling capacity (see the sketch after this entry).
Ranked #1 on Image Generation on CelebA-HQ 1024x1024
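As a rough illustration of the local attention the abstract refers to, here is a minimal windowed self-attention in PyTorch. This is a simplified sketch, not the paper's block: it omits shifted windows, relative position bias, learned projections, and style modulation.

```python
import torch
import torch.nn.functional as F

def window_attention(x, window=8, heads=4):
    """x: [B, H, W, C] feature map. Attention is computed only inside each
    non-overlapping window x window patch, so cost grows linearly in H*W
    instead of quadratically as in global self-attention."""
    B, H, W, C = x.shape
    d = C // heads
    # Partition into windows: [B * num_windows, window*window, C].
    x = x.view(B, H // window, window, W // window, window, C)
    x = x.permute(0, 1, 3, 2, 4, 5).reshape(-1, window * window, C)
    # For brevity, reuse x as queries, keys, and values (no projections).
    q = k = v = x.view(-1, window * window, heads, d).transpose(1, 2)
    attn = F.softmax(q @ k.transpose(-2, -1) / d**0.5, dim=-1)
    out = (attn @ v).transpose(1, 2).reshape(-1, window * window, C)
    # Reverse the window partition back to [B, H, W, C].
    out = out.view(B, H // window, W // window, window, window, C)
    return out.permute(0, 1, 3, 2, 4, 5).reshape(B, H, W, C)

feat = torch.randn(2, 32, 32, 64)       # toy 32x32 feature map
print(window_attention(feat).shape)     # torch.Size([2, 32, 32, 64])
```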
2 code implementations • 29 Nov 2021 • Shuyang Gu, Dong Chen, Jianmin Bao, Fang Wen, Bo Zhang, Dongdong Chen, Lu Yuan, Baining Guo
Our experiments indicate that the VQ-Diffusion model with the reparameterization is fifteen times faster than traditional AR methods while achieving better image quality (the sketch after this entry illustrates the step-skipping idea).
Ranked #1 on Text-to-Image Generation on Oxford 102 Flowers (using extra training data)
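The speedup comes from the reparameterization letting the network predict the clean-token distribution p(x0 | xt), so the sampler can denoise all tokens in parallel and skip timesteps, whereas an AR decoder needs one call per token. Below is a minimal, hypothetical sketch of such step-skipping inference; the toy module, sizes, and simplified update are assumptions, not the paper's code.

```python
import torch
import torch.nn as nn

class ToyDenoiser(nn.Module):
    # Stand-in for the real transformer: maps noisy token ids to logits
    # over the clean-token codebook. Sizes are illustrative.
    def __init__(self, vocab=512, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.head = nn.Linear(dim, vocab)

    def forward(self, tokens, t):           # tokens: [B, N] token ids
        return self.head(self.emb(tokens))  # [B, N, vocab] logits for x0

@torch.no_grad()
def fast_sample(model, B=1, N=256, vocab=512, T=100, stride=4):
    x = torch.randint(vocab, (B, N))        # start from fully noised tokens
    for t in range(T - 1, -1, -stride):     # T/stride network calls total
        logits = model(x, t)                # predict p(x0 | xt) directly
        x = torch.distributions.Categorical(logits=logits).sample()
        # The real sampler re-noises via q(x_{t-stride} | x_t, x0);
        # keeping the x0 sample here is a simplification.
    return x

model = ToyDenoiser()
tokens = fast_sample(model)
# 25 parallel denoising calls here vs. 256 sequential calls for AR decoding.
print(tokens.shape)                         # torch.Size([1, 256])
```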
no code implementations • CVPR 2021 • Yue Gao, Fangyun Wei, Jianmin Bao, Shuyang Gu, Dong Chen, Fang Wen, Zhouhui Lian
However, we observe that the generator tends to find a tricky way to hide information from the original image in order to satisfy the cycle-consistency constraint, making it impossible to preserve the rich details (e.g., wrinkles and moles) of non-edited regions (see the sketch below).
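For reference, here is a minimal sketch of the cycle-consistency constraint in question; G, attr_a, and attr_b are hypothetical placeholders. A pixel-level loss like this can be driven to zero by steganographically hiding the source image inside the edited output, which is the failure mode the paper observes.

```python
import torch.nn.functional as F

def cycle_loss(G, img, attr_a, attr_b):
    """G: generator, img: source image with attribute attr_a."""
    edited = G(img, attr_b)           # edit towards the target attribute
    recovered = G(edited, attr_a)     # edit back to the source attribute
    return F.l1_loss(recovered, img)  # satisfiable via hidden signals
```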
no code implementations • 22 Nov 2020 • Shuyang Gu, Jianmin Bao, Dong Chen
A key challenge in video enhancement and action recognition is to fuse useful information from neighboring frames.
1 code implementation • 30 Jun 2020 • Shuyang Gu, Jianmin Bao, Dong Chen, Fang Wen
To address these two issues, we propose a novel prior that captures the whole real data distribution for GANs; we call the resulting models PriorGANs.
1 code implementation • ECCV 2020 • Shuyang Gu, Jianmin Bao, Dong Chen, Fang Wen
Generative adversarial networks (GANs) have achieved impressive results, but not all generated images are perfect.
no code implementations • CVPR 2019 • Shuyang Gu, Jianmin Bao, Hao Yang, Dong Chen, Fang Wen, Lu Yuan
Portrait editing is a popular subject in photo manipulation.
1 code implementation • CVPR 2018 • Shuyang Gu, Congliang Chen, Jing Liao, Lu Yuan
We theoretically prove that our new reshuffle-based style loss connects the global and local style losses used, respectively, by most parametric and non-parametric neural style transfer methods (a sketch of the reshuffle idea follows).
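As a rough illustration, here is a plain nearest-neighbor variant of feature reshuffle in PyTorch. The paper's formulation additionally constrains every style feature to be used (avoiding trivial many-to-one matches), which this sketch omits, so treat it as the idea rather than the method.

```python
import torch
import torch.nn.functional as F

def reshuffle_loss(fc, fs):
    """fc, fs: [C, H*W] content / style feature maps from one VGG layer."""
    fc_n = F.normalize(fc, dim=0)      # cosine similarity between locations
    fs_n = F.normalize(fs, dim=0)
    sim = fc_n.t() @ fs_n              # [Hc*Wc, Hs*Ws] similarity matrix
    nn_idx = sim.argmax(dim=1)         # best-matching style feature per location
    shuffled = fs[:, nn_idx]           # style features rearranged to content layout
    return F.mse_loss(fc, shuffled)    # penalize distance to the reshuffled map

fc, fs = torch.randn(256, 1024), torch.randn(256, 1024)
print(reshuffle_loss(fc, fs))
```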