Motion Retargeting
15 papers with code • 0 benchmarks • 0 datasets
Latest papers with no code
Unsupervised Motion Retargeting for Human-Robot Imitation
This early-stage research work aims to improve online human-robot imitation by translating sequences of joint positions from the domain of human motions to the domain of motions achievable by a given robot, i.e., constrained by its embodiment.
Semantics-aware Motion Retargeting with Vision-Language Models
Capturing and preserving motion semantics is essential to motion retargeting between animation characters.
Pose-to-Motion: Cross-Domain Motion Retargeting with Pose Prior
Our experiments show that our method effectively combines the motion features of the source character with the pose features of the target character, and performs robustly with small or noisy pose data sets, ranging from a few artist-created poses to noisy poses estimated directly from images.
ImitationNet: Unsupervised Human-to-Robot Motion Retargeting via Shared Latent Space
Additionally, we propose a consistency term to build a common latent space that captures the similarity of the poses with precision while allowing direct robot motion control from the latent space.
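A shared-latent-space setup with a consistency term can be sketched as follows. This is a toy illustration, not the paper's model: the linear encoders, dimensions, and the pseudo-inverse decoding are all assumptions made for a minimal runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): 51-D human pose, 17-D robot
# pose, mapped into a shared 8-D latent space by two linear encoders.
D_HUMAN, D_ROBOT, D_LATENT = 51, 17, 8
W_h = rng.normal(scale=0.1, size=(D_LATENT, D_HUMAN))  # human encoder
W_r = rng.normal(scale=0.1, size=(D_LATENT, D_ROBOT))  # robot encoder

def encode_human(x):
    return W_h @ x

def encode_robot(q):
    return W_r @ q

def consistency_loss(x, q):
    """Penalize the distance between latent codes of poses that should
    correspond, pushing similar human and robot poses together in the
    shared latent space."""
    z_h, z_r = encode_human(x), encode_robot(q)
    return float(np.sum((z_h - z_r) ** 2))

# Toy example: one human pose and one candidate robot pose.
x = rng.normal(size=D_HUMAN)
q = rng.normal(size=D_ROBOT)
loss = consistency_loss(x, q)

# "Direct robot motion control from the latent space": decode a latent code
# back to robot joints, here via the pseudo-inverse of the robot encoder.
z = encode_human(x)
q_retargeted = np.linalg.pinv(W_r) @ z
```

In a real system both encoders would be neural networks trained jointly, and the consistency term would be one loss among several (e.g., reconstruction and adversarial terms).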
Physics-based Motion Retargeting from Sparse Inputs
We introduce a method to retarget motions in real-time from sparse human sensor data to characters of various morphologies.
HMC: Hierarchical Mesh Coarsening for Skeleton-free Motion Retargeting
We present a simple yet effective method for skeleton-free motion retargeting.
Correspondence-free online human motion retargeting
We present a data-driven framework for unsupervised human motion retargeting that animates a target subject with the motion of a source subject.
An Identity-Preserved Framework for Human Motion Transfer
Although previous methods have achieved good results in synthesizing high-quality videos, they overlook individualized motion information from the source and target motions, which is significant for the realism of the motion in the generated video.

H4D: Human 4D Modeling by Learning Neural Compositional Representation
A simple yet effective linear motion model is proposed to provide a rough and regularized motion estimation, followed by per-frame compensation for pose and geometry details with the residual encoded in the auxiliary code.
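The idea of a rough linear motion estimate plus per-frame residual compensation can be illustrated with a least-squares fit. This is a minimal sketch under assumed toy dimensions, not H4D itself (where the residual is encoded in a learned auxiliary code):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sequence: T frames of a D-dimensional pose (dimensions are made up).
T, D = 30, 6
t = np.linspace(0.0, 1.0, T)
motion = np.outer(t, rng.normal(size=D)) + 0.05 * rng.normal(size=(T, D))

# Linear motion model: least-squares fit pose(t) ~ a + b * t per dimension,
# giving a rough, regularized estimate of the trajectory.
A = np.stack([np.ones(T), t], axis=1)           # (T, 2) design matrix
coef, *_ = np.linalg.lstsq(A, motion, rcond=None)
rough = A @ coef                                 # linear reconstruction

# Per-frame compensation: the residual carries the pose/geometry detail the
# linear model misses; adding it back recovers the full motion.
residual = motion - rough
reconstructed = rough + residual
```

The split is useful because the low-dimensional linear part regularizes the estimate, while the residual stays small and easy to encode.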
Neural Marionette: Unsupervised Learning of Motion Skeleton and Latent Dynamics from Volumetric Video
We present Neural Marionette, an unsupervised approach that discovers the skeletal structure from a dynamic sequence and learns to generate diverse motions that are consistent with the observed motion dynamics.