Motion Retargeting
15 papers with code • 0 benchmarks • 0 datasets
Latest papers
Versatile Face Animator: Driving Arbitrary 3D Facial Avatar in RGBD Space
Creating realistic 3D facial animation is crucial for many applications in the film and gaming industries, especially with the burgeoning demand in the metaverse.
Semantics2Hands: Transferring Hand Motion Semantics between Avatars
Human hands, the primary means of non-verbal communication, convey intricate semantics in various scenarios.
Pose-aware Attention Network for Flexible Motion Retargeting by Body Part
The framework can generate reasonable results even in more challenging scenarios, such as retargeting between bipedal and quadrupedal skeletons, thanks to its body-part retargeting strategy and PAN.
Skinned Motion Retargeting with Residual Perception of Motion Semantics & Geometry
Driven by distance-based losses that explicitly model motion semantics and geometry, these two modules learn residual modifications to the source motion, producing plausible retargeted motion in a single inference pass without post-processing.
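The idea of a residual correction scored by a distance-based loss can be illustrated with a minimal NumPy sketch. Everything here is hypothetical, not the paper's implementation: the loss compares pairwise joint distances normalized by character height, and the residual (which a network would predict) is simply zero.

```python
import numpy as np

def pairwise_distances(joints):
    # joints: (J, 3) array of joint positions -> (J, J) distance matrix
    diff = joints[:, None, :] - joints[None, :, :]
    return np.linalg.norm(diff, axis=-1)

def distance_loss(source, target, src_height, tgt_height):
    # Compare pairwise joint distances normalized by skeleton height:
    # a simple stand-in for a semantics-preserving distance loss.
    d_src = pairwise_distances(source) / src_height
    d_tgt = pairwise_distances(target) / tgt_height
    return np.mean((d_src - d_tgt) ** 2)

# Toy 3-joint source pose on a skeleton of height 1.0.
src = np.array([[0.0, 0.0, 0.0],
                [0.0, 1.0, 0.0],
                [0.5, 1.0, 0.0]])

# Naive copy: uniformly scale the pose to a 1.5x-taller target skeleton,
# then add a learned residual correction (zero in this sketch).
scaled = src * 1.5
residual = np.zeros_like(scaled)   # a network would predict this
retargeted = scaled + residual

loss = distance_loss(src, retargeted, src_height=1.0, tgt_height=1.5)
```

Because uniform scaling preserves normalized joint distances, the loss here is zero; a non-trivial residual (e.g. fixing foot contacts on a different leg length) would trade off against this term.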
Transfer4D: A Framework for Frugal Motion Capture and Deformation Transfer
Animating a virtual character based on a real performance of an actor is a challenging task that currently requires expensive motion capture setups and additional effort by expert animators, rendering it accessible only to large production houses.
Cross-identity Video Motion Retargeting with Joint Transformation and Synthesis
The dual-branch design combines the strengths of deformation-grid-based transformation and warp-free generation, yielding better identity preservation and robustness to occlusion in the synthesized videos.
ViA: View-invariant Skeleton Action Representation Learning via Motion Retargeting
Current self-supervised approaches for skeleton action representation learning often focus on constrained scenarios, where videos and skeleton data are recorded in laboratory settings.
Learning Continuous Grasping Function with a Dexterous Hand from Human Demonstrations
We first convert large-scale human-object interaction trajectories into robot demonstrations via motion retargeting, then use these demonstrations to train CGF.
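Converting a human demonstration into a robot one can be sketched, under strong simplifying assumptions, as a per-frame joint mapping plus rescaling. The joint names, mapping, and scale factor below are all illustrative, not from the paper.

```python
# Hypothetical mapping from human hand joints to robot hand joints.
HUMAN_TO_ROBOT = {
    "wrist": "base",
    "index_tip": "finger1_tip",
    "thumb_tip": "finger2_tip",
}
SCALE = 0.8  # assumed: robot hand is 0.8x the human hand size

def retarget_frame(human_frame):
    # human_frame: dict mapping joint name -> (x, y, z) position.
    # Keep only mapped joints; rename and rescale their positions.
    return {
        HUMAN_TO_ROBOT[name]: tuple(SCALE * c for c in pos)
        for name, pos in human_frame.items()
        if name in HUMAN_TO_ROBOT
    }

# A one-frame "demonstration" retargeted to the robot.
demo = [{"wrist": (0.0, 0.0, 0.0), "index_tip": (0.1, 0.2, 0.0)}]
robot_demo = [retarget_frame(f) for f in demo]
```

Real systems additionally solve inverse kinematics so the rescaled targets respect the robot's joint limits, but the name-mapping-plus-rescaling step above is the core of the conversion.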
DexMV: Imitation Learning for Dexterous Manipulation from Human Videos
While significant progress has been made on understanding hand-object interactions in computer vision, it is still very challenging for robots to perform complex dexterous manipulation.
AgileGAN: stylizing portraits by inversion-consistent transfer learning
While substantial progress has been made in automated stylization, generating high quality stylistic portraits is still a challenge, and even the recent popular Toonify suffers from several artifacts when used on real input images.