1 code implementation • 2 Sep 2024 • Zhangsihao Yang, Mengyi Shan, Mohammad Farazi, Wenhui Zhu, Yanxi Chen, Xuanzhao Dong, Yalin Wang
The human video generation task has gained significant attention with the advancement of deep generative models.
no code implementations • 28 May 2024 • Mengyi Shan, Lu Dong, Yutao Han, Yuan Yao, Tao Liu, Ifeoma Nwogu, Guo-Jun Qi, Mitch Hill
To our knowledge, our method is the first to generate multi-subject motion sequences with high diversity and fidelity from a large variety of textual prompts.
no code implementations • CVPR 2024 • Zhangsihao Yang, Mingyuan Zhou, Mengyi Shan, Bingbing Wen, Ziwei Xuan, Mitch Hill, Junjie Bai, Guo-Jun Qi, Yalin Wang
Our paper aims to generate diverse and realistic animal motion sequences from textual descriptions, without a large-scale animal text-motion dataset.
no code implementations • 12 Oct 2023 • Mengyi Shan, Brian Curless, Ira Kemelmacher-Shlizerman, Steve Seitz
We present a system that automatically brings street view imagery to life by populating it with naturally behaving, animated pedestrians and vehicles.
1 code implementation • CVPR 2022 • Roy Or-El, Xuan Luo, Mengyi Shan, Eli Shechtman, Jeong Joon Park, Ira Kemelmacher-Shlizerman
We introduce a high-resolution, 3D-consistent image and shape generation technique which we call StyleSDF.