Search Results for author: Zhongqian Sun

Found 9 papers, 4 papers with code

GUESS: GradUally Enriching SyntheSis for Text-Driven Human Motion Generation

1 code implementation • 4 Jan 2024 • Xuehao Gao, Yang Yang, Zhenyu Xie, Shaoyi Du, Zhongqian Sun, Yang Wu

The whole text-driven human motion synthesis problem is then divided into multiple abstraction levels and solved with a multi-stage generation framework with a cascaded latent diffusion model: an initial generator first generates the coarsest human motion guess from a given text description; then, a series of successive generators gradually enrich the motion details based on the textual description and the previous synthesized results.

Motion Generation • Motion Synthesis
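
To make the cascaded scheme described above concrete, here is a minimal structural sketch (not the authors' released code): an initial generator produces the coarsest motion from the text, and successive refiners enrich details conditioned on the text and the previous stage's output. The class names, the array-based motion representation, and the random placeholder generators are assumptions for illustration only.

```python
# Minimal sketch of a cascaded coarse-to-fine generation loop.
# All classes and shapes here are illustrative stand-ins, not the GUESS code.
import numpy as np

class InitialGenerator:
    """Stands in for the first latent-diffusion stage (coarsest guess)."""
    def generate(self, text: str, num_frames: int = 60, num_joints: int = 22) -> np.ndarray:
        rng = np.random.default_rng(abs(hash(text)) % (2**32))
        return rng.standard_normal((num_frames, num_joints, 3)) * 0.1  # coarse pose sequence

class RefinementGenerator:
    """Stands in for one of the successive enrichment stages."""
    def __init__(self, detail_scale: float):
        self.detail_scale = detail_scale

    def refine(self, text: str, prev_motion: np.ndarray) -> np.ndarray:
        rng = np.random.default_rng(abs(hash((text, self.detail_scale))) % (2**32))
        detail = rng.standard_normal(prev_motion.shape) * self.detail_scale
        return prev_motion + detail  # add finer detail on top of the previous guess

def cascaded_synthesis(text: str) -> np.ndarray:
    motion = InitialGenerator().generate(text)                  # coarsest human motion guess
    for stage in (RefinementGenerator(0.05), RefinementGenerator(0.01)):
        motion = stage.refine(text, motion)                     # gradually enrich motion details
    return motion

motion = cascaded_synthesis("a person walks forward and waves")
print(motion.shape)  # (60, 22, 3)
```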

Towards Detailed Text-to-Motion Synthesis via Basic-to-Advanced Hierarchical Diffusion Model

no code implementations • 18 Dec 2023 • Zhenyu Xie, Yang Wu, Xuehao Gao, Zhongqian Sun, Wei Yang, Xiaodan Liang

In addition, we introduce a multi-denoiser framework for the advanced diffusion model to ease the learning of the high-dimensional model and to fully explore the generative potential of the diffusion model.

Denoising • Motion Synthesis
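
One common way a multi-denoiser diffusion framework can be organized is to let different denoiser networks handle different timestep ranges of the reverse process. The sketch below illustrates only that routing idea; the timestep-partition reading, the module names, and the tensor shapes are assumptions for illustration, not the paper's implementation.

```python
# Hedged sketch: route each diffusion timestep to the denoiser responsible
# for its interval. Illustrative only; not the paper's architecture.
import torch
import torch.nn as nn

class Denoiser(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 256), nn.SiLU(), nn.Linear(256, dim))

    def forward(self, x: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # Predict the noise given the noisy sample and the (normalized) timestep.
        return self.net(torch.cat([x, t[:, None].float() / 1000.0], dim=-1))

class MultiDenoiser(nn.Module):
    """Routes each timestep to the denoiser assigned to its timestep interval."""
    def __init__(self, dim: int, num_steps: int = 1000, num_denoisers: int = 4):
        super().__init__()
        self.denoisers = nn.ModuleList(Denoiser(dim) for _ in range(num_denoisers))
        self.steps_per_denoiser = num_steps // num_denoisers

    def forward(self, x: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        idx = int(t[0].item()) // self.steps_per_denoiser   # assumes one timestep per batch
        idx = min(idx, len(self.denoisers) - 1)
        return self.denoisers[idx](x, t)

model = MultiDenoiser(dim=128)
x = torch.randn(8, 128)
t = torch.full((8,), 750, dtype=torch.long)
print(model(x, t).shape)  # torch.Size([8, 128])
```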

Speech2Lip: High-fidelity Speech to Lip Generation by Learning from a Short Video

1 code implementation • ICCV 2023 • Xiuzhe Wu, Pengfei Hu, Yang Wu, Xiaoyang Lyu, Yan-Pei Cao, Ying Shan, Wenming Yang, Zhongqian Sun, Xiaojuan Qi

Therefore, directly learning a mapping function from speech to the entire head image is prone to ambiguity, particularly when using a short video for training.

Image Generation

NOFA: NeRF-based One-shot Facial Avatar Reconstruction

no code implementations • 7 Jul 2023 • Wangbo Yu, Yanbo Fan, Yong Zhang, Xuan Wang, Fei Yin, Yunpeng Bai, Yan-Pei Cao, Ying Shan, Yang Wu, Zhongqian Sun, Baoyuan Wu

In this work, we propose a one-shot 3D facial avatar reconstruction framework that only requires a single source image to reconstruct a high-fidelity 3D facial avatar.

Decoder

EE-TTS: Emphatic Expressive TTS with Linguistic Information

no code implementations • 20 May 2023 • Yi Zhong, Chen Zhang, Xule Liu, Chenxi Sun, Weishan Deng, Haifeng Hu, Zhongqian Sun

EE-TTS contains an emphasis predictor that can identify appropriate emphasis positions from text and a conditioned acoustic model to synthesize expressive speech with emphasis and linguistic information.
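
The abstract describes a two-component design: an emphasis predictor marks emphasis positions in the input text, and a conditioned acoustic model consumes the text plus those emphasis flags to produce speech features. The pipeline sketch below only illustrates that interface; the function names, the keyword heuristic, and the mel-spectrogram shape are assumptions, not the EE-TTS implementation.

```python
# Hedged pipeline sketch: predict per-word emphasis flags, then condition a
# (placeholder) acoustic model on them. Illustrative only.
from typing import List
import numpy as np

def predict_emphasis(words: List[str]) -> List[int]:
    # Placeholder predictor: flag a few content words as emphasized.
    # EE-TTS learns these positions from text; this heuristic only
    # illustrates the interface (one 0/1 flag per word).
    return [1 if w.lower() in {"really", "never", "must"} else 0 for w in words]

def acoustic_model(words: List[str], emphasis: List[int]) -> np.ndarray:
    # Placeholder acoustic model: returns a dummy mel-spectrogram whose
    # frame count grows for emphasized words (longer/stronger realization).
    frames_per_word = [10 + 5 * e for e in emphasis]
    return np.zeros((sum(frames_per_word), 80))  # (num_frames, num_mel_bins)

def synthesize(text: str) -> np.ndarray:
    words = text.split()
    emphasis = predict_emphasis(words)           # step 1: emphasis positions from text
    return acoustic_model(words, emphasis)       # step 2: emphasis-conditioned acoustics

mel = synthesize("You must never touch that switch")
print(mel.shape)  # (75, 80)
```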

Maximum Entropy Population-Based Training for Zero-Shot Human-AI Coordination

2 code implementations • 22 Dec 2021 • Rui Zhao, Jinming Song, Yufeng Yuan, Hu Haifeng, Yang Gao, Yi Wu, Zhongqian Sun, Yang Wei

We study the problem of training a Reinforcement Learning (RL) agent that is collaborative with humans without using any human data.

Diversity • Reinforcement Learning (RL)
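
The title points to a population-entropy objective for producing diverse training partners without human data. As a rough illustration of that idea (not the paper's exact algorithm), the sketch below computes the entropy of the population's average action distribution at a state, which could serve as a diversity bonus when training partner agents; the shaping and the example numbers are assumptions.

```python
# Hedged sketch of a population-entropy diversity bonus. Illustrative only.
import numpy as np

def population_entropy_bonus(action_probs: np.ndarray) -> float:
    """action_probs: (population_size, num_actions) action distributions of
    every partner policy evaluated at the same state."""
    mean_policy = action_probs.mean(axis=0)                    # mixture over the population
    mean_policy = np.clip(mean_policy, 1e-12, 1.0)
    return float(-(mean_policy * np.log(mean_policy)).sum())   # entropy of the mixture

# Example: three partner policies over four actions; the bonus is higher
# when the partners behave differently from one another.
probs = np.array([
    [0.7, 0.1, 0.1, 0.1],
    [0.1, 0.7, 0.1, 0.1],
    [0.1, 0.1, 0.7, 0.1],
])
print(round(population_entropy_bonus(probs), 3))
```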
