no code implementations • 29 May 2023 • Dingdong Yang, Yizhi Wang, Ali Mahdavi-Amiri, Hao Zhang
Our key codes and feature grids are trained jointly and continuously, with well-defined gradient flows, leading to high usage rates of the feature grids and improved generative modeling compared to discrete Vector Quantization (VQ).
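As an illustration of the contrast the abstract draws, the sketch below (hypothetical module and parameter names, not the authors' code) shows a continuous, softly weighted lookup into a learnable feature grid, in which every grid entry receives gradients, next to a discrete VQ-style nearest-neighbour lookup, in which only the selected entry does.

```python
# Minimal sketch, assuming a softmax-weighted grid lookup as the "continuous" variant.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftGridLookup(nn.Module):
    """Retrieve features as a soft, fully differentiable mixture of grid entries."""
    def __init__(self, num_entries: int = 512, feat_dim: int = 64):
        super().__init__()
        self.grid = nn.Parameter(torch.randn(num_entries, feat_dim))

    def forward(self, key_logits: torch.Tensor) -> torch.Tensor:
        # key_logits: (batch, num_entries) continuous "key codes" from an encoder
        weights = F.softmax(key_logits, dim=-1)   # soft assignment, smooth gradients to all entries
        return weights @ self.grid                # (batch, feat_dim)

def hard_vq_lookup(query: torch.Tensor, grid: torch.Tensor) -> torch.Tensor:
    """Discrete VQ baseline: nearest-neighbour pick; gradients reach only the chosen entries."""
    idx = torch.cdist(query, grid).argmin(dim=-1)  # (batch,)
    return grid[idx]                               # (batch, feat_dim)
```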
1 code implementation • 26 Nov 2022 • Jianfeng Zhang, Zihang Jiang, Dingdong Yang, Hongyi Xu, Yichun Shi, Guoxian Song, Zhongcong Xu, Xinchao Wang, Jiashi Feng
Specifically, we decompose generative 3D human synthesis into a pose-guided mapping and a canonical representation with a predefined human pose and shape, such that the canonical representation can be explicitly driven to different poses and shapes under the guidance of a 3D parametric human model, SMPL.
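The following is a rough sketch of that decomposition under assumed names and dimensions (not the paper's actual architecture): one network models appearance and geometry in a fixed canonical pose, and a separate pose-guided mapping relates observed-space points to canonical space conditioned on SMPL pose (theta) and shape (beta) parameters.

```python
# Hypothetical sketch of the canonical-representation / pose-guided-mapping split.
import torch
import torch.nn as nn

class CanonicalGenerator(nn.Module):
    """Predict per-point features for query points in the predefined canonical pose."""
    def __init__(self, latent_dim: int = 256, feat_dim: int = 32):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + latent_dim, 256), nn.ReLU(),
            nn.Linear(256, feat_dim),
        )

    def forward(self, canon_pts: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        # canon_pts: (N, 3) points in canonical space, z: (latent_dim,) appearance code
        z_rep = z.expand(canon_pts.shape[0], -1)
        return self.mlp(torch.cat([canon_pts, z_rep], dim=-1))

class PoseGuidedMapping(nn.Module):
    """Map observed-space points to canonical space, conditioned on SMPL theta/beta."""
    def __init__(self, pose_dim: int = 72, shape_dim: int = 10):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + pose_dim + shape_dim, 256), nn.ReLU(),
            nn.Linear(256, 3),
        )

    def forward(self, obs_pts: torch.Tensor, theta: torch.Tensor, beta: torch.Tensor) -> torch.Tensor:
        cond = torch.cat([theta, beta]).expand(obs_pts.shape[0], -1)
        # residual offset toward the corresponding canonical coordinates
        return obs_pts + self.mlp(torch.cat([obs_pts, cond], dim=-1))
```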
1 code implementation • 1 Aug 2022 • Jianfeng Zhang, Zihang Jiang, Dingdong Yang, Hongyi Xu, Yichun Shi, Guoxian Song, Zhongcong Xu, Xinchao Wang, Jiashi Feng
Unsupervised generation of clothed virtual humans with varied appearances and animatable poses is important for creating 3D human avatars and other AR/VR applications.
1 code implementation • 7 Jul 2022 • Yuzhi Zhao, Yongzhe Xu, Qiong Yan, Dingdong Yang, Xuehui Wang, Lai-Man Po
Night imaging with modern smartphone cameras is troublesome due to low photon count and unavoidable noise in the imaging system.
no code implementations • ICLR 2019 • Dingdong Yang, Seunghoon Hong, Yunseok Jang, Tianchen Zhao, Honglak Lee
We propose a simple yet highly effective method that addresses the mode-collapse problem in conditional generative adversarial networks (cGANs).
no code implementations • CVPR 2018 • Seunghoon Hong, Dingdong Yang, Jongwook Choi, Honglak Lee
We propose a novel hierarchical approach for text-to-image synthesis by inferring semantic layout.
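To make the hierarchy concrete, here is a toy two-stage sketch under assumed shapes and module names (not the paper's model): a first stage predicts a coarse semantic layout from a text embedding, and a second stage renders an image conditioned on that layout.

```python
# Illustrative sketch of a text -> semantic layout -> image pipeline.
import torch
import torch.nn as nn

class LayoutGenerator(nn.Module):
    """Stage 1: predict a coarse per-pixel semantic layout from a text embedding."""
    def __init__(self, text_dim: int = 128, num_classes: int = 80, size: int = 16):
        super().__init__()
        self.size, self.num_classes = size, num_classes
        self.fc = nn.Linear(text_dim, num_classes * size * size)

    def forward(self, text_emb: torch.Tensor) -> torch.Tensor:
        logits = self.fc(text_emb).view(-1, self.num_classes, self.size, self.size)
        return logits.softmax(dim=1)  # soft semantic layout map

class ImageGenerator(nn.Module):
    """Stage 2: render an RGB image conditioned on the predicted layout."""
    def __init__(self, num_classes: int = 80):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, layout: torch.Tensor) -> torch.Tensor:
        return self.net(layout)

# Usage: text embedding -> 16x16 layout -> 64x64 image
layout = LayoutGenerator()(torch.randn(1, 128))
image = ImageGenerator()(layout)
```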