Search Results for author: Mengyi Shan

Found 5 papers, 2 papers with code

AMG: Avatar Motion Guided Video Generation

1 code implementation • 2 Sep 2024 • Zhangsihao Yang, Mengyi Shan, Mohammad Farazi, Wenhui Zhu, Yanxi Chen, Xuanzhao Dong, Yalin Wang

The human video generation task has gained significant attention with the advancement of deep generative models.

Video Generation

Towards Open Domain Text-Driven Synthesis of Multi-Person Motions

no code implementations • 28 May 2024 • Mengyi Shan, Lu Dong, Yutao Han, Yuan YAO, Tao Liu, Ifeoma Nwogu, Guo-Jun Qi, Mitch Hill

To our knowledge, our method is the first to generate multi-subject motion sequences with high diversity and fidelity from a large variety of textual prompts.

Diversity • Motion Generation

OmniMotionGPT: Animal Motion Generation with Limited Data

no code implementations • CVPR 2024 • Zhangsihao Yang, Mingyuan Zhou, Mengyi Shan, Bingbing Wen, Ziwei Xuan, Mitch Hill, Junjie Bai, Guo-Jun Qi, Yalin Wang

Our paper aims to generate diverse and realistic animal motion sequences from textual descriptions, without a large-scale animal text-motion dataset.

Diversity • Motion Generation • +1

Animating Street View

no code implementations • 12 Oct 2023 • Mengyi Shan, Brian Curless, Ira Kemelmacher-Shlizerman, Steve Seitz

We present a system that automatically brings street view imagery to life by populating it with naturally behaving, animated pedestrians and vehicles.
