Search Results for author: Shunsuke Saito

Found 20 papers, 6 papers with code

KeypointNeRF: Generalizing Image-based Volumetric Avatars using Relative Spatial Encoding of Keypoints

no code implementations • 10 May 2022 • Marko Mihajlovic, Aayush Bansal, Michael Zollhoefer, Siyu Tang, Shunsuke Saito

In this work, we investigate common issues with existing spatial encodings and propose a simple yet highly effective approach to modeling high-fidelity volumetric avatars from sparse views.
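The title's "relative spatial encoding" conditions the radiance field on where a query point sits relative to detected 3D keypoints rather than on absolute coordinates. A toy NumPy sketch of that idea, not the paper's exact multi-view formulation (the intrinsics, keypoints, and Gaussian weighting here are illustrative assumptions):

```python
import numpy as np

def relative_keypoint_encoding(X, keypoints, K, sigma=1.0):
    """Encode 3D query points relative to skeletal keypoints.

    X: (N, 3) query points; keypoints: (J, 3); K: 3x3 camera intrinsics.
    Returns an (N, J) code of per-keypoint relative depths, each weighted
    by how close the query projects to that keypoint in the image.
    """
    def project(P):
        p = P @ K.T
        return p[:, :2] / p[:, 2:3], P[:, 2]   # pixel coords, depth

    uv_x, z_x = project(X)
    uv_k, z_k = project(keypoints)
    dz = z_x[:, None] - z_k[None, :]                          # relative depth (N, J)
    d2 = ((uv_x[:, None, :] - uv_k[None, :, :]) ** 2).sum(-1)  # pixel distance^2
    w = np.exp(-d2 / (2 * sigma ** 2))                         # spatial weighting
    return w * dz

# Toy usage with three keypoints and two query points
K = np.array([[50.0, 0, 32], [0, 50.0, 32], [0, 0, 1]])
kps = np.array([[0.0, 0.0, 3.0], [0.5, 0.0, 3.0], [0.0, 0.5, 3.5]])
queries = np.array([[0.1, 0.1, 3.0], [0.0, 0.0, 3.5]])
code = relative_keypoint_encoding(queries, kps, K)   # (2, 3) relative code
```

Because the code depends only on offsets to keypoints, it transfers across subjects in a way an absolute positional encoding does not.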

COAP: Compositional Articulated Occupancy of People

no code implementations • 13 Apr 2022 • Marko Mihajlovic, Shunsuke Saito, Aayush Bansal, Michael Zollhoefer, Siyu Tang

We present a novel neural implicit representation for articulated human bodies.
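The "compositional" part of COAP models each body part with its own occupancy function in a part-local frame and composes them into a whole-body union. A minimal sketch of that composition, with toy spherical parts standing in for the learned per-part networks (all names and shapes here are illustrative, not the paper's API):

```python
import numpy as np

def composed_occupancy(X, part_transforms, part_occ_fns):
    """Compose per-part occupancies into a whole-body occupancy.

    X: (N, 3) query points; part_transforms: list of 4x4 world->part-local
    rigid transforms; part_occ_fns: per-part occupancy callables.
    The union of articulated parts is the max over their occupancies.
    """
    Xh = np.concatenate([X, np.ones((len(X), 1))], axis=1)  # homogeneous coords
    occs = []
    for T, f in zip(part_transforms, part_occ_fns):
        local = (Xh @ T.T)[:, :3]   # express the query in the part's local frame
        occs.append(f(local))
    return np.max(np.stack(occs, axis=0), axis=0)

# Toy usage: two unit spheres as "parts"
sphere = lambda c, r: (lambda P: (np.linalg.norm(P - c, axis=1) < r).astype(float))
I = np.eye(4)
pts = np.array([[0.0, 0, 0], [2.0, 0, 0], [5.0, 0, 0]])
occ = composed_occupancy(pts, [I, I],
                         [sphere(np.zeros(3), 1.0), sphere(np.array([2.0, 0, 0]), 1.0)])
# occ -> [1., 1., 0.]: first point inside part 1, second inside part 2, third outside
```

Evaluating each part in its own local frame is what lets the representation generalize to unseen poses: articulation moves the frames, not the learned shapes.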

Neural Fields in Visual Computing and Beyond

no code implementations • 22 Nov 2021 • Yiheng Xie, Towaki Takikawa, Shunsuke Saito, Or Litany, Shiqin Yan, Numair Khan, Federico Tombari, James Tompkin, Vincent Sitzmann, Srinath Sridhar

Recent advances in machine learning have created increasing interest in solving visual computing problems using a class of coordinate-based neural networks that parametrize physical properties of scenes or objects across space and time.

3D Reconstruction • Image Animation +1
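The coordinate-based networks this survey covers are typically MLPs fed a sinusoidal encoding of spatial coordinates. A minimal sketch of that standard input encoding (frequency count and layout are a common convention, not prescribed by the survey):

```python
import numpy as np

def positional_encoding(x, num_freqs=4):
    """Map coordinates to sinusoids of geometrically increasing frequency,
    the usual input featurization for coordinate-based neural fields.

    x: (..., D) coordinates -> (..., 2 * D * num_freqs) encoding.
    """
    freqs = 2.0 ** np.arange(num_freqs) * np.pi            # (F,)
    angles = x[..., None] * freqs                          # (..., D, F)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*x.shape[:-1], -1)

# A neural field is then just an MLP applied to this encoding:
#   value = mlp(positional_encoding(coords))
pts = np.array([[0.0, 0.0, 0.0], [0.25, 0.5, 1.0]])
enc = positional_encoding(pts)   # shape (2, 24)
```

The high-frequency terms are what let a small MLP represent fine spatial detail that raw coordinates alone cannot.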

Pixel-Aligned Volumetric Avatars

no code implementations • CVPR 2021 • Amit Raj, Michael Zollhoefer, Tomas Simon, Jason Saragih, Shunsuke Saito, James Hays, Stephen Lombardi

Volumetric models typically employ a global code to represent facial expressions, such that they can be driven by a small set of animation parameters.

SCANimate: Weakly Supervised Learning of Skinned Clothed Avatar Networks

no code implementations • CVPR 2021 • Shunsuke Saito, Jinlong Yang, Qianli Ma, Michael J. Black

We present SCANimate, an end-to-end trainable framework that takes raw 3D scans of a clothed human and turns them into an animatable avatar.
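Animating such an avatar comes down to linear blend skinning: each canonical point is posed by a weighted blend of bone transforms, where SCANimate learns the weight field implicitly from raw scans. A self-contained sketch of the posing step with given weights (the toy bones and weights are illustrative):

```python
import numpy as np

def linear_blend_skinning(points, weights, bone_transforms):
    """Pose canonical points with per-point skinning weights (LBS).

    points: (N, 3) canonical-space points; weights: (N, J), rows sum to 1;
    bone_transforms: (J, 4, 4) canonical -> posed transform per joint.
    """
    Ph = np.concatenate([points, np.ones((len(points), 1))], axis=1)  # (N, 4)
    # Blend the bone transforms per point, then apply the blended transform
    T = np.einsum('nj,jab->nab', weights, bone_transforms)            # (N, 4, 4)
    posed = np.einsum('nab,nb->na', T, Ph)
    return posed[:, :3]

# Toy usage: one bone is the identity, the other translates by (1, 0, 0)
T0 = np.eye(4)
T1 = np.eye(4); T1[0, 3] = 1.0
pts = np.zeros((2, 3))
w = np.array([[1.0, 0.0], [0.5, 0.5]])   # second point blends both bones
posed = linear_blend_skinning(pts, w, np.stack([T0, T1]))
# posed -> [[0, 0, 0], [0.5, 0, 0]]
```

Learning the weight field (rather than painting it by hand) is what makes the pipeline trainable end to end from scans alone.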

PVA: Pixel-aligned Volumetric Avatars

no code implementations • 7 Jan 2021 • Amit Raj, Michael Zollhoefer, Tomas Simon, Jason Saragih, Shunsuke Saito, James Hays, Stephen Lombardi

Volumetric models typically employ a global code to represent facial expressions, such that they can be driven by a small set of animation parameters.

Monocular Real-Time Volumetric Performance Capture

1 code implementation • ECCV 2020 • Ruilong Li, Yuliang Xiu, Shunsuke Saito, Zeng Huang, Kyle Olszewski, Hao Li

We present the first approach to volumetric performance capture and novel-view rendering at real-time speed from monocular video, eliminating the need for expensive multi-view systems or cumbersome pre-acquisition of a personalized template model.

3D Human Shape Estimation

PIFuHD: Multi-Level Pixel-Aligned Implicit Function for High-Resolution 3D Human Digitization

3 code implementations • CVPR 2020 • Shunsuke Saito, Tomas Simon, Jason Saragih, Hanbyul Joo

Although current approaches have demonstrated their potential in real-world settings, they still fail to produce reconstructions with the level of detail often present in the input images.

3D Human Pose Estimation • 3D Human Reconstruction +3

Learning to Infer Implicit Surfaces without 3D Supervision

no code implementations • NeurIPS 2019 • Shichen Liu, Shunsuke Saito, Weikai Chen, Hao Li

The representation of 3D surfaces itself is a key factor for the quality and resolution of the 3D output.

3D Shape Generation

PIFu: Pixel-Aligned Implicit Function for High-Resolution Clothed Human Digitization

1 code implementation • ICCV 2019 • Shunsuke Saito, Zeng Huang, Ryota Natsume, Shigeo Morishima, Angjoo Kanazawa, Hao Li

We introduce Pixel-aligned Implicit Function (PIFu), a highly effective implicit representation that locally aligns pixels of 2D images with the global context of their corresponding 3D object.

3D Human Pose Estimation • 3D Human Reconstruction +2
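The pixel alignment the abstract describes works by projecting each 3D query point into the image, sampling the feature map at that pixel, and feeding the feature plus the point's depth to an MLP that predicts occupancy. A minimal NumPy sketch of that querying step, with random features and a logistic regressor standing in for the paper's learned encoder and MLP:

```python
import numpy as np

def pixel_aligned_query(feat_map, K, X, mlp):
    """Evaluate an implicit function on pixel-aligned features (PIFu-style).

    feat_map: (H, W, C) image feature map; K: 3x3 camera intrinsics;
    X: (N, 3) query points in camera space; mlp: feature -> occupancy.
    """
    # Project each 3D point into the image: (u, v) = (K X)_{xy} / z
    p = X @ K.T
    uv = p[:, :2] / p[:, 2:3]
    # Nearest-neighbour feature sampling (the paper uses bilinear)
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, feat_map.shape[1] - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, feat_map.shape[0] - 1)
    feats = feat_map[v, u]                 # (N, C) pixel-aligned features
    z = X[:, 2:3]                          # depth supplies the 3D context
    return mlp(np.concatenate([feats, z], axis=1))

# Toy usage
rng = np.random.default_rng(0)
feat = rng.normal(size=(64, 64, 8))
K = np.array([[50.0, 0, 32], [0, 50.0, 32], [0, 0, 1]])
w = rng.normal(size=(9,))
mlp = lambda h: 1.0 / (1.0 + np.exp(-h @ w))
X = np.column_stack([rng.normal(size=5), rng.normal(size=5), np.full(5, 2.0)])
occ = pixel_aligned_query(feat, K, X, mlp)   # (5,) occupancies in (0, 1)
```

Sampling features per pixel, rather than encoding the whole image into one global vector, is what lets the reconstruction retain local detail aligned with the input view.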

SiCloPe: Silhouette-Based Clothed People

no code implementations • CVPR 2019 • Ryota Natsume, Shunsuke Saito, Zeng Huang, Weikai Chen, Chongyang Ma, Hao Li, Shigeo Morishima

The synthesized silhouettes which are the most consistent with the input segmentation are fed into a deep visual hull algorithm for robust 3D shape prediction.

Image-to-Image Translation
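The visual hull that SiCloPe's synthesized silhouettes feed into follows a classic rule: a 3D point is inside the hull iff its projection lands inside the silhouette in every view. A sketch of that classic test (the paper uses a learned, "deep" variant; the camera and silhouette here are toy placeholders):

```python
import numpy as np

def visual_hull_occupancy(X, silhouettes, projections):
    """Classic silhouette carving: inside iff every view's silhouette covers
    the point's projection.

    X: (N, 3) points; silhouettes: list of (H, W) boolean masks;
    projections: list of 3x4 camera projection matrices.
    """
    inside = np.ones(len(X), dtype=bool)
    Xh = np.concatenate([X, np.ones((len(X), 1))], axis=1)
    for sil, P in zip(silhouettes, projections):
        p = Xh @ P.T                                   # (N, 3) homogeneous pixels
        uv = (p[:, :2] / p[:, 2:3]).round().astype(int)
        H, W = sil.shape
        valid = (uv[:, 0] >= 0) & (uv[:, 0] < W) & (uv[:, 1] >= 0) & (uv[:, 1] < H)
        hit = np.zeros(len(X), dtype=bool)
        hit[valid] = sil[uv[valid, 1], uv[valid, 0]]   # inside this silhouette?
        inside &= hit
    return inside

# Toy usage: one view whose silhouette covers the whole 4x4 image
sil = np.ones((4, 4), dtype=bool)
K = np.array([[1.0, 0, 2], [0, 1.0, 2], [0, 0, 1]])
P = np.hstack([K, np.zeros((3, 1))])
inside = visual_hull_occupancy(np.array([[0.0, 0, 1], [10.0, 0, 1]]), [sil], [P])
# inside -> [True, False]: the second point projects outside the image
```

Synthesizing extra silhouettes from novel viewpoints, as the paper does, tightens this hull beyond what the single input view could carve.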

Realistic Dynamic Facial Textures From a Single Image Using GANs

no code implementations • ICCV 2017 • Kyle Olszewski, Zimo Li, Chao Yang, Yi Zhou, Ronald Yu, Zeng Huang, Sitao Xiang, Shunsuke Saito, Pushmeet Kohli, Hao Li

By retargeting the PCA expression geometry from the source, as well as using the newly inferred texture, we can both animate the face and perform video face replacement on the source video using the target appearance.

Frame
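The PCA expression retargeting the abstract mentions is a linear morphable-model operation: extract expression coefficients from the source performance and apply them to the target's mean geometry. A simplified stand-in for that step (the function name, shapes, and toy basis are illustrative, not the paper's model):

```python
import numpy as np

def retarget_expression(source_coeffs, target_mean, expr_basis):
    """Linear morphable-model retargeting: geometry = mean + basis @ coeffs,
    with coefficients taken from the source performance and applied to the
    target identity's mean face."""
    return target_mean + expr_basis @ source_coeffs

# Toy usage: 2 vertices flattened to 6 coordinates, 2 expression components
target_mean = np.zeros(6)
basis = np.eye(6)[:, :2]          # each component displaces one coordinate
coeffs = np.array([0.3, -0.1])    # extracted from the source performance
geom = retarget_expression(coeffs, target_mean, basis)
# geom -> [0.3, -0.1, 0, 0, 0, 0]
```

Because expression lives in the shared PCA coefficients, the same performance can drive any identity whose geometry is expressed in that basis.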

Learning Dense Facial Correspondences in Unconstrained Images

no code implementations • ICCV 2017 • Ronald Yu, Shunsuke Saito, Haoxiang Li, Duygu Ceylan, Hao Li

To train such a network, we generate a massive dataset of synthetic faces with dense labels using renderings of a morphable face model with variations in pose, expressions, lighting, and occlusions.

Face Alignment • Face Model +1

Photorealistic Facial Texture Inference Using Deep Neural Networks

1 code implementation • CVPR 2017 • Shunsuke Saito, Lingyu Wei, Liwen Hu, Koki Nagano, Hao Li

We present a data-driven inference method that can synthesize a photorealistic texture map of a complete 3D face model given a partial 2D view of a person in the wild.

Face Model

Production-Level Facial Performance Capture Using Deep Convolutional Neural Networks

1 code implementation • 21 Sep 2016 • Samuli Laine, Tero Karras, Timo Aila, Antti Herva, Shunsuke Saito, Ronald Yu, Hao Li, Jaakko Lehtinen

We present a real-time deep learning framework for video-based facial performance capture -- the dense 3D tracking of an actor's face given a monocular video.

Real-Time Facial Segmentation and Performance Capture from RGB Input

no code implementations • 10 Apr 2016 • Shunsuke Saito, Tianye Li, Hao Li

We adopt a state-of-the-art regression-based facial tracking framework trained on segmented face images, and demonstrate accurate and uninterrupted facial performance capture in the presence of extreme occlusion and even side views.

Data Augmentation • Semantic Segmentation
