Search Results for author: Xinying Guo

Found 5 papers, 2 papers with code

CrowdMoGen: Zero-Shot Text-Driven Collective Motion Generation

No code implementations • 8 Jul 2024 • Xinying Guo, Mingyuan Zhang, Haozhe Xie, Chenyang Gu, Ziwei Liu

Crowd Motion Generation is essential in entertainment industries such as animation and games, as well as in strategic fields like urban simulation and planning.

Tasks: Language Modelling • Large Language Model • +1

Large Motion Model for Unified Multi-Modal Motion Generation

No code implementations • 1 Apr 2024 • Mingyuan Zhang, Daisheng Jin, Chenyang Gu, Fangzhou Hong, Zhongang Cai, Jingfang Huang, Chongzhi Zhang, Xinying Guo, Lei Yang, Ying He, Ziwei Liu

In this work, we present Large Motion Model (LMM), a motion-centric, multi-modal framework that unifies mainstream motion generation tasks into a generalist model.

Tasks: Motion Generation

Digital Life Project: Autonomous 3D Characters with Social Intelligence

No code implementations • CVPR 2024 • Zhongang Cai, Jianping Jiang, Zhongfei Qing, Xinying Guo, Mingyuan Zhang, Zhengyu Lin, Haiyi Mei, Chen Wei, Ruisi Wang, Wanqi Yin, Xiangyu Fan, Han Du, Liang Pan, Peng Gao, Zhitao Yang, Yang Gao, Jiaqi Li, Tianxiang Ren, Yukun Wei, Xiaogang Wang, Chen Change Loy, Lei Yang, Ziwei Liu

In this work, we present Digital Life Project, a framework that uses language as the universal medium to build autonomous 3D characters capable of engaging in social interactions and expressing themselves through articulated body motions, thereby simulating life in a digital environment.

Tasks: Diversity • Motion Captioning • +2

MotionDiffuse: Text-Driven Human Motion Generation with Diffusion Model

2 code implementations • 31 Aug 2022 • Mingyuan Zhang, Zhongang Cai, Liang Pan, Fangzhou Hong, Xinying Guo, Lei Yang, Ziwei Liu

Instead of a deterministic language-motion mapping, MotionDiffuse generates motions through a series of denoising steps in which variations are injected.

Tasks: Denoising • Motion Generation • +1
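The abstract's "series of denoising steps in which variations are injected" describes diffusion-style sampling. A minimal sketch of the idea, assuming a standard DDPM-like reverse process (the `predict_noise` function below is a hypothetical stand-in for MotionDiffuse's text-conditioned denoiser, not the paper's actual network):

```python
import numpy as np

def predict_noise(x, t, text_embedding):
    # Hypothetical placeholder for the learned, text-conditioned denoiser.
    # In MotionDiffuse this would be a trained neural network.
    return np.zeros_like(x)

def sample_motion(shape, text_embedding, steps=50, seed=0):
    """Generate a motion sequence by iterative denoising from pure noise."""
    rng = np.random.default_rng(seed)
    betas = np.linspace(1e-4, 0.02, steps)   # linear noise schedule
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)

    x = rng.standard_normal(shape)           # start from Gaussian noise
    for t in reversed(range(steps)):
        eps = predict_noise(x, t, text_embedding)
        # DDPM posterior mean: remove the predicted noise component.
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:
            # Fresh noise injected at every step except the last -- this is
            # why the text-to-motion mapping is stochastic, not deterministic.
            x = x + np.sqrt(betas[t]) * rng.standard_normal(shape)
    return x

# e.g. 60 frames, 22 joints, 3D coordinates (illustrative dimensions)
motion = sample_motion((60, 22, 3), text_embedding=None)
```

Because noise is re-injected at each step, repeated sampling with the same text prompt yields varied motions, in contrast to a deterministic language-to-motion mapping.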
