Search Results for author: Akbar Shah

Found 3 papers, 0 papers with code

Emu Video: Factorizing Text-to-Video Generation by Explicit Image Conditioning

no code implementations · 17 Nov 2023 · Rohit Girdhar, Mannat Singh, Andrew Brown, Quentin Duval, Samaneh Azadi, Sai Saketh Rambhatla, Akbar Shah, Xi Yin, Devi Parikh, Ishan Misra

We present Emu Video, a text-to-video generation model that factorizes the generation into two steps: first generating an image conditioned on the text, and then generating a video conditioned on the text and the generated image.

Text-to-Video Generation · Video Generation
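
The Emu Video abstract above describes a two-step factorization: first text-to-image, then text-plus-image-to-video. The sketch below illustrates only that control flow; the function and parameter names are illustrative placeholders and do not reflect the authors' released code or model interfaces.

```python
# Minimal sketch of the factorized generation described in the Emu Video
# abstract: stage 1 maps text -> image, stage 2 maps (text, image) -> video.
# All names here are hypothetical stand-ins, not the authors' API.

from typing import Any, Callable

def factorized_text_to_video(
    prompt: str,
    text_to_image: Callable[[str], Any],        # stage-1 model: prompt -> image
    image_to_video: Callable[[str, Any], Any],  # stage-2 model: (prompt, image) -> video
) -> Any:
    # Step 1: generate an image conditioned on the text prompt.
    image = text_to_image(prompt)
    # Step 2: generate a video conditioned on the text and the generated image,
    # i.e. the image serves as explicit conditioning for the video model.
    return image_to_video(prompt, image)

# Usage with stand-in stubs (a real system would plug in trained models here):
if __name__ == "__main__":
    dummy_t2i = lambda p: f"<image for '{p}'>"
    dummy_i2v = lambda p, img: f"<video from {img} and '{p}'>"
    print(factorized_text_to_video("a corgi surfing a wave", dummy_t2i, dummy_i2v))
```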

Make-An-Animation: Large-Scale Text-conditional 3D Human Motion Generation

no code implementations · ICCV 2023 · Samaneh Azadi, Akbar Shah, Thomas Hayes, Devi Parikh, Sonal Gupta

However, existing approaches are limited by their reliance on relatively small-scale motion capture data, leading to poor performance on more diverse, in-the-wild prompts.

Motion Synthesis · Text-to-Video Generation · +1

Text-Conditional Contextualized Avatars For Zero-Shot Personalization

no code implementations · 14 Apr 2023 · Samaneh Azadi, Thomas Hayes, Akbar Shah, Guan Pang, Devi Parikh, Sonal Gupta

Recent large-scale text-to-image generation models have made significant improvements in the quality, realism, and diversity of the synthesized images and enable users to control the created content through language.

Text to 3D · Text-to-Image Generation
