Search Results for author: Takaaki Shiratori

Found 15 papers, 6 papers with code

Authentic Hand Avatar from a Phone Scan via Universal Hand Model

no code implementations • CVPR 2024 • Gyeongsik Moon, Weipeng Xu, Rohan Joshi, Chenglei Wu, Takaaki Shiratori

In this paper, we present a universal hand model (UHM), which 1) can universally represent high-fidelity 3D hand meshes of arbitrary identities (IDs) and 2) can be adapted to each person with a short phone scan for the authentic hand avatar.

Diffusion Shape Prior for Wrinkle-Accurate Cloth Registration

no code implementations • 10 Nov 2023 • Jingfan Guo, Fabian Prada, Donglai Xiang, Javier Romero, Chenglei Wu, Hyun Soo Park, Takaaki Shiratori, Shunsuke Saito

Registering clothes from 4D scans with vertex-accurate correspondence is challenging, yet important for dynamic appearance modeling and physics parameter estimation from real-world data.

BodyFormer: Semantics-guided 3D Body Gesture Synthesis with Transformer

no code implementations • 7 Sep 2023 • Kunkun Pang, Dafei Qin, Yingruo Fan, Julian Habekost, Takaaki Shiratori, Junichi Yamagishi, Taku Komura

Learning the mapping between speech and 3D full-body gestures is difficult due to the stochastic nature of the problem and the lack of a rich cross-modal dataset that is needed for training.

RelightableHands: Efficient Neural Relighting of Articulated Hand Models

no code implementations • CVPR 2023 • Shun Iwase, Shunsuke Saito, Tomas Simon, Stephen Lombardi, Timur Bagautdinov, Rohan Joshi, Fabian Prada, Takaaki Shiratori, Yaser Sheikh, Jason Saragih

To achieve generalization, we condition the student model with physics-inspired illumination features such as visibility, diffuse shading, and specular reflections computed on a coarse proxy geometry, maintaining a small computational overhead.
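The illumination features mentioned above (diffuse shading and specular reflections on a coarse proxy geometry) follow standard shading models. A minimal illustrative sketch, assuming a single point light and Blinn-Phong specularity — the function name and signature are hypothetical, not the paper's actual code:

```python
import numpy as np

def illumination_features(normals, points, light_pos, view_pos, shininess=32.0):
    """Per-vertex diffuse and specular shading terms on a proxy mesh.

    Hypothetical sketch: Lambertian N·L diffuse plus Blinn-Phong specular,
    the kind of physics-inspired features the abstract describes.
    """
    l = light_pos - points                              # vertex-to-light
    l = l / np.linalg.norm(l, axis=-1, keepdims=True)
    v = view_pos - points                               # vertex-to-camera
    v = v / np.linalg.norm(v, axis=-1, keepdims=True)
    h = l + v                                           # Blinn-Phong half vector
    h = h / np.linalg.norm(h, axis=-1, keepdims=True)
    diffuse = np.clip((normals * l).sum(-1), 0.0, None)            # N·L
    specular = np.clip((normals * h).sum(-1), 0.0, None) ** shininess
    return np.stack([diffuse, specular], axis=-1)       # (V, 2) per vertex
```

In the paper's setting these features (plus visibility) condition a student network; the sketch above only shows how such per-vertex shading cues could be computed.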

3D Clothed Human Reconstruction in the Wild

1 code implementation • 20 Jul 2022 • Gyeongsik Moon, Hyeongjin Nam, Takaaki Shiratori, Kyoung Mu Lee

Although much progress has been made in 3D clothed human reconstruction, most of the existing methods fail to produce robust results from in-the-wild images, which contain diverse human poses and appearances.

Dressing Avatars: Deep Photorealistic Appearance for Physically Simulated Clothing

no code implementations • 30 Jun 2022 • Donglai Xiang, Timur Bagautdinov, Tuur Stuyck, Fabian Prada, Javier Romero, Weipeng Xu, Shunsuke Saito, Jingfan Guo, Breannan Smith, Takaaki Shiratori, Yaser Sheikh, Jessica Hodgins, Chenglei Wu

The key idea is to introduce a neural clothing appearance model that operates on top of explicit geometry: at training time we use high-fidelity tracking, whereas at animation time we rely on physically simulated geometry.

FrankMocap: A Monocular 3D Whole-Body Pose Estimation System via Regression and Integration

1 code implementation • 13 Aug 2021 • Yu Rong, Takaaki Shiratori, Hanbyul Joo

Most existing monocular 3D pose estimation approaches only focus on a single body part, neglecting the fact that the essential nuance of human motion is conveyed through a concert of subtle movements of face, hands, and body.
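The "regression and integration" idea in the title can be pictured as separate per-part regressors whose outputs are copied into one whole-body parameter vector. A minimal sketch, assuming an SMPL-X-style axis-angle layout (21 body joints, 15 joints per hand); the function and constants here are illustrative, not FrankMocap's actual API:

```python
import numpy as np

NUM_BODY_JOINTS = 21   # SMPL-X body joints (hands excluded), axis-angle each
NUM_HAND_JOINTS = 15   # articulated joints per hand in SMPL-X

def integrate(body_pose, left_hand_pose, right_hand_pose):
    """Concatenate per-part axis-angle poses into one whole-body vector.

    Hypothetical sketch of the integration step: each part comes from its
    own regressor and is slotted into a shared parameterization.
    """
    return np.concatenate([body_pose.ravel(),
                           left_hand_pose.ravel(),
                           right_hand_pose.ravel()])

whole = integrate(np.zeros((NUM_BODY_JOINTS, 3)),
                  np.zeros((NUM_HAND_JOINTS, 3)),
                  np.zeros((NUM_HAND_JOINTS, 3)))
# whole.shape == (153,): 63 body + 45 left-hand + 45 right-hand parameters
```

The actual system also handles the face and resolves conflicts at the wrists; this sketch only shows why a shared parametric model makes combining independent part estimates straightforward.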

3D Human Pose Estimation • 3D Human Reconstruction • +2

Driving-Signal Aware Full-Body Avatars

no code implementations • 21 May 2021 • Timur Bagautdinov, Chenglei Wu, Tomas Simon, Fabian Prada, Takaaki Shiratori, Shih-En Wei, Weipeng Xu, Yaser Sheikh, Jason Saragih

The core intuition behind our method is that better drivability and generalization can be achieved by disentangling the driving signals and remaining generative factors, which are not available during animation.


FrankMocap: Fast Monocular 3D Hand and Body Motion Capture by Regression and Integration

1 code implementation • 19 Aug 2020 • Yu Rong, Takaaki Shiratori, Hanbyul Joo

To construct FrankMocap, we build a state-of-the-art monocular 3D "hand" motion capture method by taking the hand part of the whole-body parametric model SMPL-X.

3D Hand Pose Estimation • 3D Human Reconstruction • +2

DeepHandMesh: A Weakly-supervised Deep Encoder-Decoder Framework for High-fidelity Hand Mesh Modeling

1 code implementation • ECCV 2020 • Gyeongsik Moon, Takaaki Shiratori, Kyoung Mu Lee

We design our system to be trained in an end-to-end and weakly-supervised manner; therefore, it does not require groundtruth meshes.


Self-Supervised Adaptation of High-Fidelity Face Models for Monocular Performance Tracking

no code implementations • CVPR 2019 • Jae Shin Yoon, Takaaki Shiratori, Shoou-I Yu, Hyun Soo Park

In this paper, we propose a self-supervised domain adaptation approach to enable the animation of high-fidelity face models from a commodity camera.

Domain Adaptation • Face Model
