motion retargeting

15 papers with code • 0 benchmarks • 0 datasets

Motion retargeting is the task of transferring motion from a source character or performer (e.g., motion capture or video) to a target character with a different skeleton, body shape, or embodiment, while preserving the content and plausibility of the original motion.

Latest papers with no code

Video-driven Neural Physically-based Facial Asset for Production

no code yet • 11 Feb 2022

In this paper, we present a new learning-based, video-driven approach for generating dynamic facial geometries with high-quality physically-based assets.

MoCaNet: Motion Retargeting in-the-wild via Canonicalization Networks

no code yet • 19 Dec 2021

Trained with the canonicalization operations and the derived regularizations, our method learns to factorize a skeleton sequence into three independent semantic subspaces, i.e., motion, structure, and view angle.
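To make the factorization idea concrete, below is a minimal, hypothetical sketch in PyTorch: a temporal encoder produces a per-frame motion code, sequence-level encoders produce structure and view-angle codes, and a decoder recombines them. The class name, layer choices, and dimensions are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class FactorizedRetargeter(nn.Module):
    def __init__(self, n_joints=17, motion_dim=128, struct_dim=32, view_dim=8):
        super().__init__()
        in_dim = n_joints * 3
        # Temporal encoder for the motion code (varies frame by frame).
        self.motion_enc = nn.GRU(in_dim, motion_dim, batch_first=True)
        # Sequence-level encoders for the (roughly static) structure and view factors.
        self.struct_enc = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, struct_dim))
        self.view_enc = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, view_dim))
        self.decoder = nn.Sequential(
            nn.Linear(motion_dim + struct_dim + view_dim, 256), nn.ReLU(), nn.Linear(256, in_dim)
        )

    def forward(self, seq):                           # seq: (B, T, n_joints * 3)
        motion, _ = self.motion_enc(seq)              # per-frame motion code (B, T, motion_dim)
        pooled = seq.mean(dim=1)                      # sequence-level summary (B, n_joints * 3)
        struct = self.struct_enc(pooled)              # body-structure code (B, struct_dim)
        view = self.view_enc(pooled)                  # view-angle code (B, view_dim)
        cond = torch.cat([struct, view], -1).unsqueeze(1).expand(-1, seq.size(1), -1)
        return self.decoder(torch.cat([motion, cond], -1))   # reconstruction (B, T, n_joints * 3)
```

Under this kind of factorization, retargeting amounts to decoding the motion code of one sequence together with the structure code of another.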

Hierarchical Neural Implicit Pose Network for Animation and Motion Retargeting

no code yet • 2 Dec 2021

We present HIPNet, a neural implicit pose network trained on multiple subjects across many poses.

LatentHuman: Shape-and-Pose Disentangled Latent Representation for Human Bodies

no code yet • 30 Nov 2021

In this work, we propose a novel neural implicit representation for the human body, which is fully differentiable and optimizable with disentangled shape and pose latent spaces.
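As a rough illustration of a disentangled implicit body representation, the sketch below conditions a signed-distance MLP on separate shape and pose codes. The network, dimensions, and names are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class DisentangledImplicitBody(nn.Module):
    def __init__(self, shape_dim=16, pose_dim=72, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + shape_dim + pose_dim, hidden), nn.Softplus(),
            nn.Linear(hidden, hidden), nn.Softplus(),
            nn.Linear(hidden, 1),                     # signed distance to the body surface
        )

    def forward(self, points, shape_code, pose_code):
        # points: (B, N, 3); shape_code: (B, shape_dim); pose_code: (B, pose_dim)
        n = points.shape[1]
        cond = torch.cat([shape_code, pose_code], dim=-1).unsqueeze(1).expand(-1, n, -1)
        return self.mlp(torch.cat([points, cond], dim=-1)).squeeze(-1)
```

Because everything is differentiable, the two latent codes can be optimized independently, e.g. fitting the pose code to an observation while keeping the shape code fixed.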

Dance In the Wild: Monocular Human Animation with Neural Dynamic Appearance Synthesis

no code yet • 10 Nov 2021

Synthesizing dynamic appearances of humans in motion plays a central role in applications such as AR/VR and video editing.

Contact-Aware Retargeting of Skinned Motion

no code yet • ICCV 2021

Self-contacts, such as when the hands touch each other, the torso, or the head, are important attributes of human body language and dynamics, yet existing methods do not model or preserve these contacts.
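One simple way to express contact preservation, sketched below as an assumption and not the paper's actual formulation, is to detect vertex pairs that touch on the source character and penalize their separation on the retargeted character.

```python
import torch

def self_contact_loss(src_verts, tgt_verts, threshold=0.01):
    """src_verts, tgt_verts: (V, 3) corresponding vertices; threshold in meters."""
    # Pairwise distances on the source mesh (fine for small V; O(V^2) memory).
    d_src = torch.cdist(src_verts, src_verts)
    # Vertex pairs in contact on the source, excluding each vertex paired with itself.
    contact = (d_src < threshold) & ~torch.eye(len(src_verts), dtype=torch.bool)
    i, j = contact.nonzero(as_tuple=True)
    if i.numel() == 0:
        return src_verts.new_zeros(())
    # Encourage the same pairs to remain in contact on the retargeted mesh.
    d_tgt = (tgt_verts[i] - tgt_verts[j]).norm(dim=-1)
    return torch.clamp(d_tgt - threshold, min=0.0).mean()
```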

Flow Guided Transformable Bottleneck Networks for Motion Retargeting

no code yet • CVPR 2021

Human motion retargeting aims to transfer the motion of one person in a "driving" video or set of images to another person.

Self-Supervised Motion Retargeting with Safety Guarantee

no code yet • 11 Mar 2021

In this paper, we present self-supervised shared latent embedding (S3LE), a data-driven motion retargeting method that enables the generation of natural motions in humanoid robots from motion capture data or RGB videos.
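A minimal sketch of the shared-latent-embedding idea, assuming simple MLP encoders and decoders for the human and robot domains; the sizes, names, and training losses are hypothetical, and the paper's safety guarantees are not reproduced here.

```python
import torch.nn as nn

def mlp(i, o, h=256):
    return nn.Sequential(nn.Linear(i, h), nn.ReLU(), nn.Linear(h, o))

class SharedLatentRetargeter(nn.Module):
    def __init__(self, human_dim=51, robot_dim=20, latent_dim=32):
        super().__init__()
        self.enc_human, self.dec_human = mlp(human_dim, latent_dim), mlp(latent_dim, human_dim)
        self.enc_robot, self.dec_robot = mlp(robot_dim, latent_dim), mlp(latent_dim, robot_dim)

    def retarget(self, human_pose):                  # (B, human_dim) -> (B, robot_dim)
        return self.dec_robot(self.enc_human(human_pose))
```

Self-supervised training could then combine within-domain reconstruction (encoder/decoder round trips) with a term that ties the two latent spaces together, e.g. via paired poses or cycle consistency.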

Recovering and Simulating Pedestrians in the Wild

no code yet • 16 Nov 2020

We then incorporate the reconstructed pedestrian asset bank into a realistic LiDAR simulation system by performing motion retargeting, and show that the simulated LiDAR data can be used to significantly reduce the amount of annotated real-world data required for visual perception tasks.

Personalized Face Modeling for Improved Face Reconstruction and Motion Retargeting

no code yet • ECCV 2020

Traditional methods for image-based 3D face reconstruction and facial motion retargeting fit a 3D morphable model (3DMM) to the face, which has limited modeling capacity and fails to generalize well to in-the-wild data.
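For context, a standard 3DMM represents a face mesh as a mean shape plus linear combinations of fixed identity and expression bases, which is what limits its capacity. The sketch below illustrates this linear model with made-up basis sizes and random placeholder data.

```python
import numpy as np

n_vertices = 5000
mean_shape = np.zeros(3 * n_vertices)               # mean face mesh, flattened (x, y, z per vertex)
id_basis = np.random.randn(3 * n_vertices, 80)      # identity (shape) basis, e.g. from PCA
exp_basis = np.random.randn(3 * n_vertices, 64)     # expression basis

def reconstruct(id_coeffs, exp_coeffs):
    """Return a flattened face mesh for given identity/expression coefficients."""
    return mean_shape + id_basis @ id_coeffs + exp_basis @ exp_coeffs

face = reconstruct(np.zeros(80), np.zeros(64))      # coefficients of zero give the mean face
```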