Search Results for author: Zhiyang Dou

Found 10 papers, 2 papers with code

LaserHuman: Language-guided Scene-aware Human Motion Generation in Free Environment

1 code implementation • 20 Mar 2024 • Peishan Cong, Ziyi Wang, Zhiyang Dou, Yiming Ren, Wei Yin, Kai Cheng, Yujing Sun, Xiaoxiao Long, Xinge Zhu, Yuexin Ma

Language-guided scene-aware human motion generation has great significance for entertainment and robotics.

Disentangled Clothed Avatar Generation from Text Descriptions

no code implementations • 8 Dec 2023 • Jionghao Wang, Yuan Liu, Zhiyang Dou, Zhengming Yu, Yongqing Liang, Xin Li, Wenping Wang, Rong Xie, Li Song

In this paper, we introduce a novel text-to-avatar generation method that generates the human body and the clothes separately and allows high-quality animation of the generated avatar.

Virtual Try-on

Boosting Segment Anything Model Towards Open-Vocabulary Learning

1 code implementation • 6 Dec 2023 • Xumeng Han, Longhui Wei, Xuehui Yu, Zhiyang Dou, Xin He, Kuiran Wang, Zhenjun Han, Qi Tian

The recent Segment Anything Model (SAM) has emerged as a new paradigmatic vision foundation model, showcasing potent zero-shot generalization and flexible prompting.

Object Localization +2
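
The snippet above refers to SAM's flexible prompting. As a point of reference only, a box-prompted prediction with the vanilla segment-anything interface looks roughly like the sketch below; this is not the open-vocabulary extension proposed in this paper, and the checkpoint path and dummy image are assumptions for illustration.

```python
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

# Load a pretrained SAM backbone (checkpoint path assumed to exist locally).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

# `image` is an RGB uint8 array of shape (H, W, 3); a dummy placeholder here.
image = np.zeros((480, 640, 3), dtype=np.uint8)
predictor.set_image(image)

# Prompt SAM with a bounding box (x0, y0, x1, y1) and keep the single best mask.
masks, scores, _ = predictor.predict(
    box=np.array([100, 100, 400, 380]),
    multimask_output=False,
)
print(masks.shape, scores)  # (1, H, W) boolean mask and its confidence score
```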

TLControl: Trajectory and Language Control for Human Motion Synthesis

no code implementations • 28 Nov 2023 • Weilin Wan, Zhiyang Dou, Taku Komura, Wenping Wang, Dinesh Jayaraman, Lingjie Liu

Controllable human motion synthesis is essential for applications in AR/VR, gaming, movies, and embodied AI.

Motion Synthesis

Wonder3D: Single Image to 3D using Cross-Domain Diffusion

no code implementations • 23 Oct 2023 • Xiaoxiao Long, Yuan-Chen Guo, Cheng Lin, Yuan Liu, Zhiyang Dou, Lingjie Liu, Yuexin Ma, Song-Hai Zhang, Marc Habermann, Christian Theobalt, Wenping Wang

In this work, we introduce Wonder3D, a novel method for efficiently generating high-fidelity textured meshes from single-view images. Recent methods based on Score Distillation Sampling (SDS) have shown the potential to recover 3D geometry from 2D diffusion priors, but they typically suffer from time-consuming per-shape optimization and inconsistent geometry.

Image to 3D
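
The entry above contrasts Wonder3D with Score Distillation Sampling (SDS) baselines that optimize each shape against a 2D diffusion prior. Purely as an illustrative sketch of one step of that per-shape SDS loop (the `renderer` and `diffusion` objects and their methods are hypothetical placeholders, not Wonder3D's or any library's actual API):

```python
import torch

def sds_step(renderer, diffusion, shape_params, text_emb):
    """One hypothetical Score Distillation Sampling step: render the current
    shape, noise the rendering, and nudge the shape toward what a frozen
    2D diffusion prior predicts. All callables here are stand-ins."""
    image = renderer(shape_params)                    # differentiable render, (1, 3, H, W)
    t = torch.randint(20, 980, (1,), device=image.device)
    noise = torch.randn_like(image)
    noisy = diffusion.add_noise(image, noise, t)      # forward process q(x_t | x_0)
    with torch.no_grad():                             # the 2D prior stays frozen
        pred_noise = diffusion.predict_noise(noisy, t, text_emb)
    grad = pred_noise - noise                         # SDS gradient direction
    # Surrogate loss whose gradient w.r.t. shape_params is grad * d(image)/d(params)
    return (grad.detach() * image).sum()
```

Repeating such a step many times for every input is the time-consuming per-shape optimization the snippet refers to.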

C$\cdot$ASE: Learning Conditional Adversarial Skill Embeddings for Physics-based Characters

no code implementations • 20 Sep 2023 • Zhiyang Dou, Xuelin Chen, Qingnan Fan, Taku Komura, Wenping Wang

We present C$\cdot$ASE, an efficient and effective framework that learns conditional Adversarial Skill Embeddings for physics-based characters.

Imitation Learning
