no code implementations • 5 Dec 2024 • Zhouyingcheng Liao, Mingyuan Zhang, Wenjia Wang, Lei Yang, Taku Komura
While motion generation has made substantial progress, its practical application remains constrained by limited dataset diversity and scale, restricting its ability to handle out-of-distribution scenarios.
no code implementations • 3 Dec 2024 • Mingyi Shi, Dafei Qin, Leo Ho, Zhouyingcheng Liao, Yinghao Huang, Junichi Yamagishi, Taku Komura
To the best of our knowledge, this is the first system capable of generating interactive full-body motions for two characters from speech in an online manner.
no code implementations • 29 Nov 2024 • Wenjia Wang, Liang Pan, Zhiyang Dou, Zhouyingcheng Liao, Yuke Lou, Lei Yang, Jingbo Wang, Taku Komura
On the one hand, films and shows featuring stylized human locomotion or interactions with scenes are abundantly available on the internet, providing a rich source of data for script planning.
no code implementations • 17 Jul 2024 • Zhouyingcheng Liao, Sinan Wang, Taku Komura
We present SENC, a novel self-supervised neural cloth simulator that addresses the challenge of cloth self-collision.
1 code implementation • 4 Dec 2023 • Wenyang Zhou, Zhiyang Dou, Zeyu Cao, Zhouyingcheng Liao, Jingbo Wang, Wenjia Wang, Yuan Liu, Taku Komura, Wenping Wang, Lingjie Liu
We introduce Efficient Motion Diffusion Model (EMDM) for fast and high-quality human motion generation.
Ranked #8 on Motion Synthesis on KIT Motion-Language
no code implementations • CVPR 2024 • Zhouyingcheng Liao, Vladislav Golyanik, Marc Habermann, Christian Theobalt
However, the former methods typically predict only static skinning weights, which perform poorly for highly articulated poses, while the latter either require dense 3D character scans in different poses or cannot generate an explicit mesh with vertex correspondence over time.
1 code implementation • 28 Jul 2022 • Zhouyingcheng Liao, Jimei Yang, Jun Saito, Gerard Pons-Moll, Yang Zhou
We present the first method that automatically transfers poses between stylized 3D characters without skeletal rigging.
2 code implementations • CVPR 2020 • Chaitanya Patel, Zhouyingcheng Liao, Gerard Pons-Moll
While the low-frequency component is predicted from pose, shape and style parameters with an MLP, the high-frequency component is predicted with a mixture of shape-style specific pose models.
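The two-branch decomposition described above can be outlined in a short sketch: one MLP maps pose, shape, and style to a smooth low-frequency displacement, while a mixture of pose models, each tied to a shape-style prototype and blended by proximity in shape-style space, adds high-frequency detail. The sketch below is a minimal, hypothetical PyTorch outline under assumed dimensions and module names; it is not the paper's released implementation, and the softmax-over-distance weighting is an illustrative choice.

```python
# Hypothetical sketch of the low-/high-frequency split (assumed dimensions,
# not the authors' released code).
import torch
import torch.nn as nn

class LowFreqMLP(nn.Module):
    """Predicts smooth, low-frequency garment displacements from pose, shape, and style."""
    def __init__(self, pose_dim=72, shape_dim=10, style_dim=4, num_verts=7702):
        super().__init__()
        self.num_verts = num_verts
        self.net = nn.Sequential(
            nn.Linear(pose_dim + shape_dim + style_dim, 1024), nn.ReLU(),
            nn.Linear(1024, 1024), nn.ReLU(),
            nn.Linear(1024, num_verts * 3),
        )

    def forward(self, pose, shape, style):
        x = torch.cat([pose, shape, style], dim=-1)
        return self.net(x).view(-1, self.num_verts, 3)

class HighFreqMixture(nn.Module):
    """Blends K pose models, each specialized to one (shape, style) prototype."""
    def __init__(self, num_models=20, pose_dim=72, shape_dim=10, style_dim=4, num_verts=7702):
        super().__init__()
        self.num_verts = num_verts
        self.pose_models = nn.ModuleList(
            nn.Sequential(nn.Linear(pose_dim, 512), nn.ReLU(),
                          nn.Linear(512, num_verts * 3))
            for _ in range(num_models))
        # Prototype (shape, style) anchors; mixture weights fall off with distance.
        self.anchors = nn.Parameter(torch.randn(num_models, shape_dim + style_dim))

    def forward(self, pose, shape, style):
        query = torch.cat([shape, style], dim=-1)                    # (B, S)
        dists = torch.cdist(query, self.anchors)                     # (B, K)
        weights = torch.softmax(-dists, dim=-1)                      # nearer anchor -> larger weight
        preds = torch.stack([m(pose) for m in self.pose_models], 1)  # (B, K, V*3)
        mixed = (weights.unsqueeze(-1) * preds).sum(dim=1)           # (B, V*3)
        return mixed.view(pose.shape[0], self.num_verts, 3)

# Final garment displacement = low-frequency term + high-frequency term.
```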