1 code implementation • 15 Dec 2023 • Jiajun Zhang, Yuxiang Zhang, Hongwen Zhang, Xiao Zhou, Boyao Zhou, Ruizhi Shao, Zonghai Hu, Yebin Liu
To address this, we further propose a complementary training strategy that leverages synthetic data to introduce instance-level shape priors, enabling the disentanglement of occupancy fields for different instances.
no code implementations • 10 Dec 2023 • Yi Wang, Jian Ma, Ruizhi Shao, Qiao Feng, Yu-Kun Lai, Yebin Liu, Kun Li
To keep the generated clothing consistent with the target text, we propose a semantic-confidence strategy that eliminates non-clothing content generated by the model.
1 code implementation • 4 Dec 2023 • Shunyuan Zheng, Boyao Zhou, Ruizhi Shao, Boning Liu, Shengping Zhang, Liqiang Nie, Yebin Liu
We present a new approach, termed GPS-Gaussian, for synthesizing novel views of a character in real time.
1 code implementation • 25 Oct 2023 • Jingxiang Sun, Bo Zhang, Ruizhi Shao, Lizhen Wang, Wen Liu, Zhenda Xie, Yebin Liu
The score distillation from this 3D-aware diffusion prior provides view-consistent guidance for the scene.
no code implementations • 2 Oct 2023 • Xin Huang, Ruizhi Shao, Qi Zhang, Hongwen Zhang, Ying Feng, Yebin Liu, Qing Wang
The main idea is to enhance the model's 2D perception of 3D geometry by learning a normal-adapted diffusion model and a normal-aligned diffusion model.
no code implementations • 31 May 2023 • Ruizhi Shao, Jingxiang Sun, Cheng Peng, Zerong Zheng, Boyao Zhou, Hongwen Zhang, Yebin Liu
We introduce Control4D, an innovative framework for editing dynamic 4D portraits using text instructions.
no code implementations • CVPR 2023 • Hongwen Zhang, Siyou Lin, Ruizhi Shao, Yuxiang Zhang, Zerong Zheng, Han Huang, Yandong Guo, Yebin Liu
In this way, the clothing deformations are disentangled such that the pose-dependent wrinkles can be better learned and applied to unseen poses.
no code implementations • CVPR 2023 • Ruizhi Shao, Zerong Zheng, Hanzhang Tu, Boning Liu, Hongwen Zhang, Yebin Liu
The key to our solution is an efficient 4D tensor decomposition that allows the dynamic scene to be represented directly as a 4D spatio-temporal tensor.
1 code implementation • 21 Nov 2022 • Ruizhi Shao, Zerong Zheng, Hanzhang Tu, Boning Liu, Hongwen Zhang, Yebin Liu
The key to our solution is an efficient 4D tensor decomposition that allows the dynamic scene to be represented directly as a 4D spatio-temporal tensor.
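The idea of decomposing a 4D spatio-temporal field can be illustrated with a generic CP-style rank decomposition — a deliberate simplification for illustration, not necessarily the factorization scheme used in the paper. Each rank-1 component stores one small vector per axis (x, y, z, t), so a dense H×W×D×T grid is replaced by R·(H+W+D+T) values, and the field is evaluated by summing products of the per-axis factors:

```python
# Minimal sketch: a 4D spatio-temporal field stored as a sum of
# rank-1 factors (CP-style decomposition). A hypothetical
# simplification, not the paper's exact factorization.

def eval_field(factors, x, y, z, t):
    """Evaluate the decomposed field at integer coordinates (x, y, z, t).

    factors: list of (fx, fy, fz, ft) tuples, one small vector per axis.
    Value = sum over ranks of fx[x] * fy[y] * fz[z] * ft[t].
    """
    return sum(fx[x] * fy[y] * fz[z] * ft[t] for fx, fy, fz, ft in factors)

# A rank-2 field on an 8^4 grid stores 2 * 4 * 8 = 64 values
# instead of 8**4 = 4096 for a dense grid.
n = 8
factors = [
    ([i + 1.0 for i in range(n)], [1.0] * n, [0.5] * n, [2.0] * n),
    ([1.0] * n, [i * 0.1 for i in range(n)], [1.0] * n, [1.0] * n),
]

v = eval_field(factors, 3, 2, 1, 0)
# rank 1: 4.0 * 1.0 * 0.5 * 2.0 = 4.0
# rank 2: 1.0 * 0.2 * 1.0 * 1.0 = 0.2
print(v)  # → 4.2
```

The storage savings grow with resolution: the per-axis factors scale linearly with grid size while a dense grid scales with its fourth power, which is what makes a directly represented 4D tensor tractable.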
no code implementations • 16 Jul 2022 • Ruizhi Shao, Zerong Zheng, Hongwen Zhang, Jingxiang Sun, Yebin Liu
At its core is a novel diffusion-based stereo module, which introduces diffusion models, a class of powerful generative models, into the iterative stereo matching network.
1 code implementation • 14 Jul 2022 • Siyou Lin, Hongwen Zhang, Zerong Zheng, Ruizhi Shao, Yebin Liu
We present FITE, a First-Implicit-Then-Explicit framework for modeling human avatars in clothing.
no code implementations • 20 Jan 2022 • Tiansong Zhou, Jing Huang, Tao Yu, Ruizhi Shao, Kun Li
To this end, we propose HDhuman, which uses a human reconstruction network with a pixel-aligned spatial transformer and a rendering network with geometry-guided pixel-wise feature integration to achieve high-quality human reconstruction and rendering.
no code implementations • ICCV 2021 • Ruizhi Shao, Gaochang Wu, Yuemei Zhou, Ying Fu, Yebin Liu
By combining the local transformer with the multiscale structure, the network can capture both long- and short-range correspondences efficiently and accurately.
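The combination of local attention with a multiscale structure can be sketched as stacked windowed attention over a 1D sequence of scalar features — a toy illustration with assumed window sizes, not the paper's architecture. Each layer attends only within a local window, and progressively larger windows extend the effective receptive field so short-range and long-range correspondences are both covered without a full global attention map:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def local_attention(seq, window):
    """Windowed self-attention on scalar features: position i attends
    only to positions within `window` steps of i."""
    out = []
    for i, q in enumerate(seq):
        lo, hi = max(0, i - window), min(len(seq), i + window + 1)
        keys = seq[lo:hi]
        weights = softmax([q * k for k in keys])
        out.append(sum(w * k for w, k in zip(weights, keys)))
    return out

def multiscale_local_attention(seq, windows=(1, 4, 16)):
    """Stack local attention with growing windows (assumed sizes):
    early layers capture short-range structure, later coarse layers
    propagate information over long ranges."""
    for w in windows:
        seq = local_attention(seq, w)
    return seq

feats = [0.1 * i for i in range(32)]
out = multiscale_local_attention(feats)
print(len(out))  # → 32
```

The design point this illustrates: each layer costs O(n·window) rather than the O(n²) of global attention, while the stacked windows still let every position influence every other after a few layers.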
no code implementations • CVPR 2022 • Ruizhi Shao, Hongwen Zhang, He Zhang, Mingjia Chen, YanPei Cao, Tao Yu, Yebin Liu
We introduce DoubleField, a novel framework combining the merits of both surface field and radiance field for high-fidelity human reconstruction and rendering.
no code implementations • ICCV 2021 • Yang Zheng, Ruizhi Shao, Yuxiang Zhang, Tao Yu, Zerong Zheng, Qionghai Dai, Yebin Liu
We propose DeepMultiCap, a novel method for multi-person performance capture using sparse multi-view cameras.