Motion Synthesis

83 papers with code • 8 benchmarks • 12 datasets

Most implemented papers

Auto-Conditioned Recurrent Networks for Extended Complex Human Motion Synthesis

papagina/auto_conditioned_rnn_motion ICLR 2018

We present a real-time method for synthesizing highly complex human motions using a novel training regime we call the auto-conditioned Recurrent Neural Network (acRNN).
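
The key mechanism, auto-conditioning, interleaves ground-truth frames with the network's own predictions as input during training, so the model learns to recover from its accumulated errors when synthesizing long sequences. A minimal PyTorch sketch of that input schedule, assuming placeholder dimensions and segment lengths (`pose_dim`, `gt_len`, `cond_len` are illustrative, not values from the paper):

```python
import torch
import torch.nn as nn

class AutoConditionedRNN(nn.Module):
    def __init__(self, pose_dim=63, hidden_dim=512):
        super().__init__()
        self.lstm = nn.LSTM(pose_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, pose_dim)

    def forward(self, poses, gt_len=5, cond_len=5):
        # poses: (batch, time, pose_dim) ground-truth motion frames
        state, prev, preds = None, poses[:, 0], []
        for t in range(poses.size(1)):
            # Alternate gt_len frames of ground-truth input with cond_len
            # frames where the model consumes its own previous prediction.
            use_own = (t % (gt_len + cond_len)) >= gt_len
            inp = prev if use_own else poses[:, t]
            h, state = self.lstm(inp.unsqueeze(1), state)
            prev = self.out(h.squeeze(1))
            preds.append(prev)
        return torch.stack(preds, dim=1)  # one predicted next frame per step
```

At synthesis time the same network is simply run entirely on its own outputs, which the auto-conditioned training regime has prepared it for.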

A Neural Temporal Model for Human Motion Prediction

cr7anand/neural_temporal_models CVPR 2019

We propose novel neural temporal models for predicting and synthesizing human motion, achieving state-of-the-art performance in modeling long-term motion trajectories while remaining competitive with prior work on short-term prediction and requiring significantly less computation.

CARL: Controllable Agent with Reinforcement Learning for Quadruped Locomotion

inventec-ai-center/carl-siggraph2020 7 May 2020

Motion synthesis in a dynamic environment has been a long-standing problem for character animation.

Unpaired Motion Style Transfer from Video to Animation

DeepMotionEditing/deep-motion-editing 12 May 2020

In this paper, we present a novel data-driven framework for motion style transfer, which learns from an unpaired collection of motions with style labels, and enables transferring motion styles not observed during training.
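
A common way to realize such unpaired content/style decomposition is to encode content and style separately and inject style statistics via adaptive instance normalization (AdaIN). The sketch below is a minimal PyTorch illustration in that spirit; the layer shapes and names are assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

def adain(content, style_mean, style_std, eps=1e-5):
    # Normalize content features per channel, then rescale with style stats.
    mean = content.mean(dim=-1, keepdim=True)
    std = content.std(dim=-1, keepdim=True) + eps
    return style_std * (content - mean) / std + style_mean

class MotionStyleTransfer(nn.Module):
    def __init__(self, pose_dim=63, feat=256):
        super().__init__()
        self.content_enc = nn.Conv1d(pose_dim, feat, 3, padding=1)
        self.style_enc = nn.Sequential(
            nn.Conv1d(pose_dim, feat, 3, padding=1),
            nn.AdaptiveAvgPool1d(1))                # pool style over time
        self.to_stats = nn.Linear(feat, 2 * feat)   # style -> (mean, std)
        self.dec = nn.Conv1d(feat, pose_dim, 3, padding=1)

    def forward(self, content_motion, style_motion):
        # motions: (batch, pose_dim, time)
        c = self.content_enc(content_motion)
        s = self.style_enc(style_motion).squeeze(-1)
        mean, std = self.to_stats(s).unsqueeze(-1).chunk(2, dim=1)
        return self.dec(adain(c, mean, std))
```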

Skeleton-Aware Networks for Deep Motion Retargeting

DeepMotionEditing/deep-motion-editing 12 May 2020

Our operators form the building blocks of a new deep motion processing framework that embeds the motion into a common latent space, shared by a collection of homeomorphic skeletons.
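
A minimal sketch of the shared-latent idea, assuming one encoder/decoder pair per skeleton (the paper builds these from skeleton-aware convolution and pooling operators; plain linear layers are used here only for brevity):

```python
import torch
import torch.nn as nn

class SharedLatentRetargeting(nn.Module):
    def __init__(self, joint_dims, latent_dim=128):
        super().__init__()
        # One encoder/decoder per skeleton, all meeting in one latent space.
        self.enc = nn.ModuleDict({name: nn.Linear(d, latent_dim)
                                  for name, d in joint_dims.items()})
        self.dec = nn.ModuleDict({name: nn.Linear(latent_dim, d)
                                  for name, d in joint_dims.items()})

    def retarget(self, motion, src, dst):
        # motion: (batch, time, joint_dims[src]) -> (batch, time, joint_dims[dst])
        z = torch.relu(self.enc[src](motion))   # embed into the shared space
        return self.dec[dst](z)                 # decode for the target skeleton

# Hypothetical usage: transfer motion between two homeomorphic skeletons.
model = SharedLatentRetargeting({"skeleton_a": 69, "skeleton_b": 93})
out = model.retarget(torch.randn(2, 60, 69), src="skeleton_a", dst="skeleton_b")
```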

Style-Controllable Speech-Driven Gesture Synthesis Using Normalising Flows

simonalexanderson/StyleGestures Computer Graphics Forum 2020

In interactive scenarios, systems for generating natural animations on the fly are key to achieving believable and relatable characters.

Bayesian Adversarial Human Motion Synthesis

rort1989/BH-HSMM CVPR 2020

By explicitly capturing the distribution of the data and parameters, our model has a more compact parameterization compared to GAN-based generative models.

Let's Face It: Probabilistic Multi-modal Interlocutor-aware Generation of Facial Gestures in Dyadic Settings

jonepatr/lets_face_it 11 Jun 2020

Our contributions are:
a) a method for feature extraction from multi-party video and speech recordings, resulting in a representation that allows for independent control and manipulation of expression and speech articulation in a 3D avatar;
b) an extension to MoGlow, a recent motion-synthesis method based on normalizing flows, to also take multi-modal signals from the interlocutor as input and subsequently output interlocutor-aware facial gestures (see the sketch below);
c) a subjective evaluation assessing the use and relative importance of the input modalities.
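
Flow-based models like MoGlow generate each pose by passing Gaussian noise through a stack of invertible layers conditioned on a control context; contribution b) above enlarges that context with the interlocutor's signals. A minimal sketch of one conditional affine coupling layer, the standard building block of such flows (all dimensions and names are assumptions):

```python
import torch
import torch.nn as nn

class ConditionalCoupling(nn.Module):
    def __init__(self, pose_dim=64, ctx_dim=128, hidden=256):
        super().__init__()
        self.half = pose_dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half + ctx_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (pose_dim - self.half)))

    def forward(self, x, ctx):
        # x: (batch, pose_dim) pose; ctx: (batch, ctx_dim) control features,
        # e.g. own speech plus (per this paper) interlocutor speech/motion.
        x1, x2 = x[:, :self.half], x[:, self.half:]
        log_s, t = self.net(torch.cat([x1, ctx], dim=-1)).chunk(2, dim=-1)
        y2 = x2 * torch.exp(log_s) + t       # invertible affine transform
        log_det = log_s.sum(dim=-1)          # contribution to the log-likelihood
        return torch.cat([x1, y2], dim=-1), log_det

    def inverse(self, y, ctx):
        # Used at generation time to map sampled noise back to poses.
        y1, y2 = y[:, :self.half], y[:, self.half:]
        log_s, t = self.net(torch.cat([y1, ctx], dim=-1)).chunk(2, dim=-1)
        return torch.cat([y1, (y2 - t) * torch.exp(-log_s)], dim=-1)
```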

Residual Force Control for Agile Human Behavior Imitation and Extended Motion Synthesis

Khrylx/RFC NeurIPS 2020

Our approach is the first humanoid control method that successfully learns from a large-scale human motion dataset (Human3.6M) and generates diverse long-term motions.
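
The residual-force idea is to let the policy output, besides joint torques, a small external wrench on the humanoid's root that absorbs the dynamics mismatch between the simulated character and the mocap performer. A schematic sketch of one control step; `sim` and `policy` are hypothetical stand-ins, not a real simulator or the released API:

```python
def step_with_rfc(sim, policy, obs, residual_scale=0.1):
    # `policy` is assumed to map an observation to joint torques followed by
    # a 6-D residual wrench (3-D force + 3-D torque) on the root body.
    action = policy(obs)
    torques = action[:-6]                    # torques for the actuated joints
    wrench = residual_scale * action[-6:]    # scaled residual root wrench
    sim.apply_joint_torques(torques)         # assumed simulator calls
    sim.apply_root_wrench(force=wrench[:3], torque=wrench[3:])
    sim.advance()
    return sim.observe()
```

Because the residual wrench is regularized during training, the policy learns to rely on it only where ordinary joint torques cannot reproduce the reference motion.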

Synthesizing Long-Term 3D Human Motion and Interaction in 3D Scenes

jiashunwang/Long-term-Motion-in-3D-Scenes CVPR 2021

Synthesizing 3D human motion plays an important role in many graphics applications as well as in understanding human activity.