no code implementations • 23 Oct 2023 • Roy Kapon, Guy Tevet, Daniel Cohen-Or, Amit H. Bermano
We introduce Multi-view Ancestral Sampling (MAS), a method for 3D motion generation that uses 2D diffusion models trained on motions obtained from in-the-wild videos.
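The core constraint MAS exploits is multi-view consistency: a 3D pose must agree with its 2D projections in every view. As a minimal toy illustration (not the MAS algorithm itself, and with hypothetical camera matrices), a 3D joint observed by two orthographic cameras can be recovered from its 2D projections by least squares:

```python
import numpy as np

# A single 3D joint position.
joint = np.array([1.0, 2.0, 3.0])

# Two hypothetical orthographic cameras:
# the front view keeps (x, y); the side view keeps (z, y).
P_front = np.array([[1., 0., 0.], [0., 1., 0.]])
P_side  = np.array([[0., 0., 1.], [0., 1., 0.]])
views = [(P_front, P_front @ joint), (P_side, P_side @ joint)]

# Stack the projection equations P x = observation and solve for x in 3D.
A = np.vstack([P for P, _ in views])
b = np.concatenate([obs for _, obs in views])
recovered, *_ = np.linalg.lstsq(A, b, rcond=None)
```

With enough views, the 2D samples over-determine the 3D motion, which is what lets 2D diffusion models drive 3D generation.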
2 code implementations • 2 Mar 2023 • Yonatan Shafir, Guy Tevet, Roy Kapon, Amit H. Bermano
We evaluate the composition methods using an off-the-shelf motion diffusion model, and further compare the results to dedicated models trained for these specific tasks.
Ranked #4 on Motion Synthesis on InterHuman
1 code implementation • 12 Feb 2023 • Sigal Raab, Inbal Leibovitch, Guy Tevet, Moab Arar, Amit H. Bermano, Daniel Cohen-Or
We harness the power of diffusion models and present a denoising network explicitly designed for the task of learning from a single input motion.
1 code implementation • 29 Sep 2022 • Guy Tevet, Sigal Raab, Brian Gordon, Yonatan Shafir, Daniel Cohen-Or, Amit H. Bermano
In this paper, we introduce Motion Diffusion Model (MDM), a carefully adapted classifier-free diffusion-based generative model for the human motion domain.
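Classifier-free guidance, as used generically in diffusion models, blends a conditional and an unconditional noise prediction at sampling time. A minimal sketch of that standard combination (MDM's exact formulation and tensor shapes may differ):

```python
import numpy as np

def cfg_step(eps_cond, eps_uncond, guidance_scale):
    """Classifier-free guidance: interpolate/extrapolate between the
    unconditional and conditional noise predictions.
    guidance_scale = 1.0 recovers the purely conditional prediction;
    larger values strengthen the conditioning signal."""
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

eps_c = np.array([1.0, 2.0])   # prediction given the text condition
eps_u = np.array([0.0, 0.0])   # prediction with the condition dropped
blended = cfg_step(eps_c, eps_u, 2.5)
```

At training time the model randomly drops the condition, so one network can produce both predictions.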
Ranked #1 on Motion Synthesis on HumanAct12
1 code implementation • 15 Mar 2022 • Guy Tevet, Brian Gordon, Amir Hertz, Amit H. Bermano, Daniel Cohen-Or
MotionCLIP gains its unique power by aligning its latent space with that of the Contrastive Language-Image Pre-training (CLIP) model.
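Aligning a latent space with CLIP's typically means penalizing the angular distance between a motion embedding and the CLIP embedding of its text label. A hedged sketch of such an alignment loss (the function name and setup are illustrative, not MotionCLIP's actual code):

```python
import numpy as np

def cosine_alignment_loss(motion_emb, clip_text_emb):
    """1 - cosine similarity between a motion encoder's output and the
    (frozen) CLIP text embedding of the matching caption. Minimizing this
    pulls the two latent spaces into alignment."""
    m = motion_emb / np.linalg.norm(motion_emb)
    c = clip_text_emb / np.linalg.norm(clip_text_emb)
    return 1.0 - float(m @ c)

# Parallel embeddings incur ~zero loss; orthogonal ones incur loss 1.
loss_aligned = cosine_alignment_loss(np.array([2.0, 0.0]), np.array([1.0, 0.0]))
loss_orthog  = cosine_alignment_loss(np.array([0.0, 1.0]), np.array([1.0, 0.0]))
```

Because CLIP's text encoder stays frozen, the motion latent space inherits CLIP's semantic structure.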
1 code implementation • EACL 2021 • Guy Tevet, Jonathan Berant
Despite growing interest in natural language generation (NLG) models that produce diverse outputs, there is currently no principled method for evaluating the diversity of an NLG system.
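One widely used (if crude) proxy for output diversity is distinct-n, the ratio of unique n-grams to total n-grams across a system's outputs; a sketch for context (this is a common baseline metric, not necessarily the method the paper proposes):

```python
def distinct_n(sentences, n):
    """Distinct-n: unique n-grams / total n-grams over a set of outputs.
    1.0 means no n-gram repeats; values near 0 indicate heavy repetition."""
    ngrams = []
    for s in sentences:
        toks = s.split()
        ngrams += [tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)]
    return len(set(ngrams)) / len(ngrams) if ngrams else 0.0

score = distinct_n(["the cat sat", "the cat ran"], 2)
```

Such surface-level metrics conflate lexical variety with semantic diversity, which is part of why a principled evaluation framework is needed.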
1 code implementation • NAACL 2019 • Guy Tevet, Gavriel Habib, Vered Shwartz, Jonathan Berant
Generative Adversarial Networks (GANs) are a promising approach for text generation that, unlike traditional language models (LMs), does not suffer from the problem of "exposure bias".