Motion Synthesis
83 papers with code • 8 benchmarks • 12 datasets
Most implemented papers
Auto-Conditioned Recurrent Networks for Extended Complex Human Motion Synthesis
We present a real-time method for synthesizing highly complex human motions using a novel training regime we call the auto-conditioned Recurrent Neural Network (acRNN).
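The core idea of auto-conditioning is to alternate, during training and rollout, between feeding the network ground-truth frames and its own predictions, so it learns to recover from accumulated error over long sequences. A minimal sketch of that rollout schedule, with a toy stand-in for the recurrent step and illustrative (not the paper's) segment lengths:

```python
import numpy as np

def ac_rollout(step_fn, ground_truth, gt_len=5, cond_len=5):
    """Auto-conditioned rollout: feed gt_len ground-truth frames, then
    cond_len of the model's own predictions, repeating. step_fn is a
    placeholder for a learned recurrent step (one frame in, one frame out)."""
    outputs = []
    prev = ground_truth[0]
    for t in range(1, len(ground_truth)):
        pred = step_fn(prev)
        outputs.append(pred)
        # position within one (gt_len + cond_len) cycle decides the next input
        phase = (t - 1) % (gt_len + cond_len)
        prev = ground_truth[t] if phase < gt_len else pred
    return np.stack(outputs)

# toy "model": damped identity step on 3-D pose vectors
step = lambda x: 0.9 * x
gt = np.ones((20, 3))
out = ac_rollout(step, gt)
print(out.shape)  # (19, 3)
```

Feeding predictions back in during training exposes the network to its own error distribution, which is what lets acRNN-style models extend synthesis far beyond the horizon at which a teacher-forced RNN would drift.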
A Neural Temporal Model for Human Motion Prediction
We propose novel neural temporal models for predicting and synthesizing human motion, achieving state-of-the-art results in modeling long-term motion trajectories while remaining competitive with prior work in short-term prediction and requiring significantly less computation.
CARL: Controllable Agent with Reinforcement Learning for Quadruped Locomotion
Motion synthesis in a dynamic environment has been a long-standing problem for character animation.
Unpaired Motion Style Transfer from Video to Animation
In this paper, we present a novel data-driven framework for motion style transfer, which learns from an unpaired collection of motions with style labels, and enables transferring motion styles not observed during training.
Skeleton-Aware Networks for Deep Motion Retargeting
Our operators form the building blocks of a new deep motion processing framework that embeds motion into a common latent space shared by a collection of homeomorphic skeletons.
Style-Controllable Speech-Driven Gesture Synthesis Using Normalising Flows
In interactive scenarios, systems for generating natural animations on the fly are key to achieving believable and relatable characters.
Bayesian Adversarial Human Motion Synthesis
By explicitly capturing the distribution of the data and parameters, our model has a more compact parameterization compared to GAN-based generative models.
Let's Face It: Probabilistic Multi-modal Interlocutor-aware Generation of Facial Gestures in Dyadic Settings
Our contributions are: a) a method for feature extraction from multi-party video and speech recordings, resulting in a representation that allows for independent control and manipulation of expression and speech articulation in a 3D avatar; b) an extension to MoGlow, a recent motion-synthesis method based on normalizing flows, to also take multi-modal signals from the interlocutor as input and subsequently output interlocutor-aware facial gestures; and c) a subjective evaluation assessing the use and relative importance of the input modalities.
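Normalizing-flow models such as MoGlow are built from invertible layers, most commonly affine coupling, in which half the dimensions pass through unchanged and parameterize an exactly invertible affine map of the other half. A generic illustration of one such layer (not MoGlow's actual architecture; `scale` and `shift` stand in for learned networks):

```python
import numpy as np

def coupling_forward(x, scale_net, shift_net):
    """One affine coupling layer: split x, transform the second half
    conditioned on the first. Returns the output and the log-determinant
    term needed for exact likelihood training."""
    d = x.shape[-1] // 2
    x1, x2 = x[..., :d], x[..., d:]
    s, t = scale_net(x1), shift_net(x1)
    y2 = x2 * np.exp(s) + t
    return np.concatenate([x1, y2], axis=-1), s.sum(axis=-1)

def coupling_inverse(y, scale_net, shift_net):
    """Exact inverse of coupling_forward, used at sampling time."""
    d = y.shape[-1] // 2
    y1, y2 = y[..., :d], y[..., d:]
    s, t = scale_net(y1), shift_net(y1)
    x2 = (y2 - t) * np.exp(-s)
    return np.concatenate([y1, x2], axis=-1)

# toy networks: fixed linear maps standing in for learned MLPs
scale = lambda h: 0.1 * h
shift = lambda h: h + 1.0

x = np.array([[0.5, -0.5, 1.0, 2.0]])
y, logdet = coupling_forward(x, scale, shift)
x_rec = coupling_inverse(y, scale, shift)
print(np.allclose(x, x_rec))  # True
```

Because each layer inverts exactly and its log-determinant is cheap to compute, flow-based synthesizers can both sample motion and evaluate its likelihood, which is what makes conditioning on extra signals (style, or here the interlocutor's audio and motion) tractable.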
Residual Force Control for Agile Human Behavior Imitation and Extended Motion Synthesis
Our approach is the first humanoid control method that successfully learns from a large-scale human motion dataset (Human3.6M) and generates diverse long-term motions.
Synthesizing Long-Term 3D Human Motion and Interaction in 3D Scenes
Synthesizing 3D human motion plays an important role in many graphics applications as well as understanding human activity.