Motion Retargeting
12 papers with code • 0 benchmarks • 0 datasets
Most implemented papers
C-3PO: Cyclic-Three-Phase Optimization for Human-Robot Motion Retargeting based on Reinforcement Learning
Motion retargeting is learned from data refined in a latent space by the cyclic and filtering paths of our method.
Learning Character-Agnostic Motion for Motion Retargeting in 2D
In order to achieve our goal, we learn to extract, directly from a video, a high-level latent motion representation, which is invariant to the skeleton geometry and the camera view.
Task-Oriented Hand Motion Retargeting for Dexterous Manipulation Imitation
In this work, we capture hand information using a state-of-the-art hand pose estimator.
Skeleton-Aware Networks for Deep Motion Retargeting
In other words, our operators form the building blocks of a new deep motion processing framework that embeds the motion into a common latent space, shared by a collection of homeomorphic skeletons.
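The idea of embedding motions from differently structured skeletons into one shared latent space can be illustrated with a minimal sketch. This is not the paper's architecture: the dimensions, the random stand-in weights, and the `retarget` function are all hypothetical, standing in for learned skeleton-specific encoders and decoders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a source skeleton with 22 joints and a target
# skeleton with 18 joints, both mapped into a 16-dim shared latent space.
SRC_DOF, TGT_DOF, LATENT = 22 * 3, 18 * 3, 16

# Random stand-ins for learned encoder/decoder weights.
enc_src = rng.normal(size=(LATENT, SRC_DOF)) / np.sqrt(SRC_DOF)
dec_tgt = rng.normal(size=(TGT_DOF, LATENT)) / np.sqrt(LATENT)

def retarget(src_pose: np.ndarray) -> np.ndarray:
    """Encode a source-skeleton pose into the shared latent space,
    then decode it onto the target skeleton."""
    z = np.tanh(enc_src @ src_pose)  # skeleton-agnostic motion code
    return dec_tgt @ z               # pose expressed on the target skeleton

pose = rng.normal(size=SRC_DOF)
out = retarget(pose)
print(out.shape)  # (54,) — the target skeleton's degrees of freedom
```

Because source and target decoders share the latent space, any skeleton with an encoder into that space can drive any skeleton with a decoder out of it.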
JOKR: Joint Keypoint Representation for Unsupervised Cross-Domain Motion Retargeting
To alleviate this problem, we introduce JOKR - a JOint Keypoint Representation that captures the motion common to both the source and target videos, without requiring any object prior or data collection.
AgileGAN: stylizing portraits by inversion-consistent transfer learning
While substantial progress has been made in automated stylization, generating high quality stylistic portraits is still a challenge, and even the recent popular Toonify suffers from several artifacts when used on real input images.
ViA: View-invariant Skeleton Action Representation Learning via Motion Retargeting
Current self-supervised approaches for skeleton action representation learning often focus on constrained scenarios, where videos and skeleton data are recorded in laboratory settings.
Cross-identity Video Motion Retargeting with Joint Transformation and Synthesis
The novel design of dual branches combines the strengths of deformation-grid-based transformation and warp-free generation for better identity preservation and robustness to occlusion in the synthesized videos.
Transfer4D: A Framework for Frugal Motion Capture and Deformation Transfer
Animating a virtual character based on a real performance of an actor is a challenging task that currently requires expensive motion capture setups and additional effort by expert animators, rendering it accessible only to large production houses.
Skinned Motion Retargeting with Residual Perception of Motion Semantics & Geometry
Driven by our explored distance-based losses that explicitly model the motion semantics and geometry, these two modules can learn residual motion modifications on the source motion to generate plausible retargeted motion in a single inference without post-processing.
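The notion of a residual modification constrained by a distance-based geometry loss can be sketched as follows. The residual here is random noise standing in for a network's output, and the loss is a generic pairwise joint-distance penalty; it is an illustration of the loss family, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: a pose as (J, 3) joint positions; the retargeted
# pose is the source pose plus a learned residual modification.
J = 10
source = rng.normal(size=(J, 3))
residual = 0.05 * rng.normal(size=(J, 3))  # stand-in for network output
retargeted = source + residual

def pairwise_distances(pose: np.ndarray) -> np.ndarray:
    """All joint-to-joint Euclidean distances for one pose."""
    diff = pose[:, None, :] - pose[None, :, :]
    return np.linalg.norm(diff, axis=-1)

# A distance-based geometry loss: penalize changes to the pairwise
# joint-distance matrix, so the residual preserves the pose's geometry.
geo_loss = np.mean(
    (pairwise_distances(retargeted) - pairwise_distances(source)) ** 2
)
print(float(geo_loss))
```

Minimizing such a loss pushes the residual toward modifications that adapt the motion without distorting the relative geometry of the joints.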