In this paper, we introduce Motion Diffusion Model (MDM), a carefully adapted classifier-free diffusion-based generative model for the human motion domain.
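The classifier-free setup mentioned above can be illustrated with a minimal sketch: the same denoiser is queried with and without the condition, and the two predictions are extrapolated. All names here (`denoise`, `guidance_scale`) are illustrative assumptions, not MDM's actual interface; note that MDM predicts the clean sample rather than the noise, but the guidance combination takes the same form.

```python
def classifier_free_guidance(denoise, x_t, t, cond, guidance_scale=2.5):
    """Combine conditional and unconditional predictions of one denoiser.

    `denoise(x_t, t, cond)` is a hypothetical model call returning a
    prediction as a list of floats; `cond=None` requests the
    unconditional prediction (the model is trained with the condition
    randomly dropped so it can serve both roles).
    """
    pred_cond = denoise(x_t, t, cond)    # prediction given e.g. a text prompt
    pred_uncond = denoise(x_t, t, None)  # unconditional prediction
    # Extrapolate away from the unconditional prediction toward the
    # conditional one; guidance_scale > 1 strengthens condition adherence.
    return [u + guidance_scale * (c - u)
            for c, u in zip(pred_cond, pred_uncond)]
```

With `guidance_scale=1.0` this reduces to the plain conditional prediction; larger scales trade diversity for fidelity to the condition.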
MotionCLIP gains its unique power by aligning its latent space with that of the Contrastive Language-Image Pre-training (CLIP) model.
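Aligning a latent space with CLIP's is commonly trained by pushing each motion embedding toward the CLIP embedding of its caption. The following is a minimal sketch of such an alignment term, assuming a cosine-similarity objective; the function name and plain-list representation are illustrative, not MotionCLIP's actual code.

```python
import math

def cosine_alignment_loss(motion_latent, clip_embedding):
    """1 - cosine similarity between a motion latent and a CLIP embedding.

    Both inputs are plain lists of floats of equal length; driving this
    loss to zero aligns the motion latent's direction with the CLIP
    embedding, which is what ties the two latent spaces together.
    """
    dot = sum(m * c for m, c in zip(motion_latent, clip_embedding))
    norm_m = math.sqrt(sum(m * m for m in motion_latent))
    norm_c = math.sqrt(sum(c * c for c in clip_embedding))
    return 1.0 - dot / (norm_m * norm_c)
```

Because CLIP's space already co-locates related texts and images, minimizing this term lets the motion encoder inherit that semantic structure.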
We compare our model to state-of-the-art methods that rely on known camera parameters and show that, in the absence of camera parameters, we outperform them by a large margin, while obtaining comparable results when camera parameters are available.
Instead, we label the narrative and stance of tweets and YouTube comments about the White Helmets.