1 code implementation • 20 Mar 2024 • Peishan Cong, Ziyi Wang, Zhiyang Dou, Yiming Ren, Wei Yin, Kai Cheng, Yujing Sun, Xiaoxiao Long, Xinge Zhu, Yuexin Ma
Language-guided scene-aware human motion generation has great significance for entertainment and robotics.
no code implementations • 23 Jan 2024 • Zimeng Wang, Zhiyang Dou, Rui Xu, Cheng Lin, Yuan Liu, Xiaoxiao Long, Shiqing Xin, Taku Komura, Xiaoming Yuan, Wenping Wang
We introduce Coverage Axis++, a novel and efficient approach to 3D shape skeletonization.
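Coverage-axis-style skeletonization is commonly framed as a set-cover problem: pick a small set of inner spheres whose dilated union covers all surface samples. The following is a minimal sketch of that greedy selection under assumed inputs (candidate inner points with precomputed inscribed-sphere radii); it is an illustration of the general idea, not the authors' Coverage Axis++ algorithm.

```python
import numpy as np

def greedy_coverage(candidates, radii, surface_pts, delta=0.02):
    """Greedily select inner spheres until every surface sample is covered.

    candidates : (M, 3) inner candidate centers (hypothetical input)
    radii      : (M,)   radius of the maximal inscribed sphere at each center
    surface_pts: (N, 3) samples on the shape surface
    delta      : dilation offset letting a sphere cover nearby samples
    """
    # cover[i, j] is True if the dilated sphere i contains surface sample j.
    d = np.linalg.norm(candidates[:, None, :] - surface_pts[None, :, :], axis=-1)
    cover = d <= (radii[:, None] + delta)

    selected = []
    uncovered = np.ones(len(surface_pts), dtype=bool)
    while uncovered.any():
        # Pick the sphere covering the most still-uncovered samples.
        gains = (cover & uncovered).sum(axis=1)
        best = int(np.argmax(gains))
        if gains[best] == 0:  # remaining samples cannot be covered
            break
        selected.append(best)
        uncovered &= ~cover[best]
    return selected
```

The selected sphere centers form a sparse approximation of the medial skeleton; real pipelines add connectivity estimation on top of this selection step.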
no code implementations • 8 Dec 2023 • Jionghao Wang, Yuan Liu, Zhiyang Dou, Zhengming Yu, Yongqing Liang, Xin Li, Wenping Wang, Rong Xie, Li Song
In this paper, we introduce a novel text-to-avatar generation method that generates the human body and the clothes separately, enabling high-quality animation of the generated avatar.
1 code implementation • 6 Dec 2023 • Xumeng Han, Longhui Wei, Xuehui Yu, Zhiyang Dou, Xin He, Kuiran Wang, Zhenjun Han, Qi Tian
The recent Segment Anything Model (SAM) has emerged as a new paradigmatic vision foundation model, showcasing potent zero-shot generalization and flexible prompting.
no code implementations • 4 Dec 2023 • Wenyang Zhou, Zhiyang Dou, Zeyu Cao, Zhouyingcheng Liao, Jingbo Wang, Wenjia Wang, Yuan Liu, Taku Komura, Wenping Wang, Lingjie Liu
We introduce Efficient Motion Diffusion Model (EMDM) for fast and high-quality human motion generation.
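Diffusion-based motion generators like the one above sample a motion sequence by iteratively denoising Gaussian noise. Below is a generic DDPM reverse-sampling sketch, not EMDM itself: the noise schedule is an assumed linear one, and the trained denoising network is replaced by a zero-output stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear noise schedule over T diffusion steps.
T = 50
betas = np.linspace(1e-4, 2e-2, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def denoise_step(x_t, t, eps_hat):
    """One DDPM reverse step: remove the predicted noise eps_hat from x_t."""
    coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
    mean = (x_t - coef * eps_hat) / np.sqrt(alphas[t])
    if t > 0:  # add stochasticity at every step except the last
        mean = mean + np.sqrt(betas[t]) * rng.standard_normal(x_t.shape)
    return mean

def dummy_eps(x_t, t):
    # Stand-in denoiser: a real model is a trained network predicting
    # the noise from (x_t, t, text condition).
    return np.zeros_like(x_t)

# Sample a 60-frame, 24-joint pose sequence starting from pure noise.
x = rng.standard_normal((60, 24, 3))
for t in reversed(range(T)):
    x = denoise_step(x, t, dummy_eps(x, t))
```

The cost of this loop scales with the number of steps T, which is why fast-sampling methods aim to cut T down while preserving motion quality.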
no code implementations • 28 Nov 2023 • Zhengming Yu, Zhiyang Dou, Xiaoxiao Long, Cheng Lin, Zekun Li, Yuan Liu, Norman Müller, Taku Komura, Marc Habermann, Christian Theobalt, Xin Li, Wenping Wang
The experiments demonstrate the superior performance of Surf-D in shape generation across multiple modalities as conditions.
no code implementations • 28 Nov 2023 • Weilin Wan, Zhiyang Dou, Taku Komura, Wenping Wang, Dinesh Jayaraman, Lingjie Liu
Controllable human motion synthesis is essential for applications in AR/VR, gaming, movies, and embodied AI.
no code implementations • 23 Oct 2023 • Xiaoxiao Long, Yuan-Chen Guo, Cheng Lin, Yuan Liu, Zhiyang Dou, Lingjie Liu, Yuexin Ma, Song-Hai Zhang, Marc Habermann, Christian Theobalt, Wenping Wang
Recent methods based on Score Distillation Sampling (SDS) have shown the potential to recover 3D geometry from 2D diffusion priors, but they typically suffer from time-consuming per-shape optimization and inconsistent geometry. In this work, we introduce Wonder3D, a novel method for efficiently generating high-fidelity textured meshes from single-view images.
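For reference, the SDS objective mentioned above (introduced in DreamFusion) optimizes 3D scene parameters $\theta$ by pushing rendered images $x = g(\theta)$ toward the 2D diffusion prior. Its gradient takes the form

```latex
\nabla_\theta \mathcal{L}_{\mathrm{SDS}}(\theta)
  = \mathbb{E}_{t,\epsilon}\!\left[
      w(t)\,\bigl(\hat{\epsilon}_\phi(x_t;\, y,\, t) - \epsilon\bigr)\,
      \frac{\partial x}{\partial \theta}
    \right],
```

where $x_t$ is the rendering noised to timestep $t$, $\hat{\epsilon}_\phi$ is the frozen diffusion model's noise prediction conditioned on the prompt $y$, and $w(t)$ is a timestep weighting. Because each gradient step requires rendering and a diffusion-model evaluation, SDS-based pipelines incur the per-shape optimization cost that Wonder3D seeks to avoid.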
no code implementations • 20 Sep 2023 • Zhiyang Dou, Xuelin Chen, Qingnan Fan, Taku Komura, Wenping Wang
We present C$\cdot$ASE, an efficient and effective framework that learns Conditional Adversarial Skill Embeddings for physics-based characters.
no code implementations • ICCV 2023 • Zhiyang Dou, Qingxuan Wu, Cheng Lin, Zeyu Cao, Qiangqiang Wu, Weilin Wan, Taku Komura, Wenping Wang
We further demonstrate the generalizability of our method on hand mesh recovery.