Search Results for author: Takaaki Shiratori

Found 7 papers, 4 papers with code

FrankMocap: A Monocular 3D Whole-Body Pose Estimation System via Regression and Integration

1 code implementation · 13 Aug 2021 · Yu Rong, Takaaki Shiratori, Hanbyul Joo

Most existing monocular 3D pose estimation approaches focus only on a single body part, neglecting the fact that the essential nuance of human motion is conveyed through a concert of subtle movements of the face, hands, and body.

3D Human Reconstruction · 3D Pose Estimation

Driving-Signal Aware Full-Body Avatars

no code implementations · 21 May 2021 · Timur Bagautdinov, Chenglei Wu, Tomas Simon, Fabian Prada, Takaaki Shiratori, Shih-En Wei, Weipeng Xu, Yaser Sheikh, Jason Saragih

The core intuition behind our method is that better drivability and generalization can be achieved by disentangling the driving signals and remaining generative factors, which are not available during animation.

Imputation

InterHand2.6M: A Dataset and Baseline for 3D Interacting Hand Pose Estimation from a Single RGB Image

2 code implementations · ECCV 2020 · Gyeongsik Moon, Shoou-I Yu, He Wen, Takaaki Shiratori, Kyoung Mu Lee

Therefore, we first propose (1) a large-scale dataset, InterHand2.6M, and (2) a baseline network, InterNet, for 3D interacting hand pose estimation from a single RGB image.

3D Hand Pose Estimation

FrankMocap: Fast Monocular 3D Hand and Body Motion Capture by Regression and Integration

1 code implementation · 19 Aug 2020 · Yu Rong, Takaaki Shiratori, Hanbyul Joo

To construct FrankMocap, we build a state-of-the-art monocular 3D "hand" motion capture method by taking the hand part of the whole-body parametric model (SMPL-X).

3D Hand Pose Estimation · 3D Human Reconstruction · +1

DeepHandMesh: A Weakly-supervised Deep Encoder-Decoder Framework for High-fidelity Hand Mesh Modeling

1 code implementation · ECCV 2020 · Gyeongsik Moon, Takaaki Shiratori, Kyoung Mu Lee

We design our system to be trained in an end-to-end, weakly-supervised manner; therefore, it does not require ground-truth meshes.

Self-Supervised Adaptation of High-Fidelity Face Models for Monocular Performance Tracking

no code implementations · CVPR 2019 · Jae Shin Yoon, Takaaki Shiratori, Shoou-I Yu, Hyun Soo Park

In this paper, we propose a self-supervised domain adaptation approach to enable the animation of high-fidelity face models from a commodity camera.

Domain Adaptation · Face Model · +1
