Search Results for author: Shunsuke Saito

Found 35 papers, 14 papers with code

InterHandGen: Two-Hand Interaction Generation via Cascaded Reverse Diffusion

no code implementations • 26 Mar 2024 • Jihyun Lee, Shunsuke Saito, Giljoo Nam, Minhyuk Sung, Tae-Kyun Kim

Sampling from our model yields plausible and diverse two-hand shapes in close interaction with or without an object.
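
The cascade in the title admits a compact description. Below is a hedged Python sketch of cascaded reverse diffusion under that reading (illustrative only; `denoise_a` and `denoise_b` are stand-ins for trained denoising networks, and the noise schedule is folded into them): one hand is sampled first, then a second reverse chain is run conditioned on it, factorizing the pair as p(a)·p(b|a).

```python
# Illustrative cascaded reverse diffusion: sample hand A unconditionally,
# then sample hand B conditioned on A. `denoise_a`/`denoise_b` are assumed
# trained denoisers that each perform one reverse step.
import torch

def cascaded_sample(denoise_a, denoise_b, steps, shape):
    a = torch.randn(shape)
    for t in reversed(range(steps)):      # first reverse chain: hand A
        a = denoise_a(a, t)
    b = torch.randn(shape)
    for t in reversed(range(steps)):      # second chain, conditioned on A
        b = denoise_b(b, t, cond=a)
    return a, b
```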

GALA: Generating Animatable Layered Assets from a Single Scan

no code implementations • 23 Jan 2024 • Taeksoo Kim, Byungjun Kim, Shunsuke Saito, Hanbyul Joo

Through a series of decomposition steps, we obtain multiple layers of 3D assets in a shared canonical space, normalized in terms of pose and human shape, hence supporting effortless composition onto novel identities and reanimation with novel poses.

General Knowledge

URHand: Universal Relightable Hands

no code implementations • 10 Jan 2024 • Zhaoxi Chen, Gyeongsik Moon, Kaiwen Guo, Chen Cao, Stanislav Pidhorskyi, Tomas Simon, Rohan Joshi, Yuan Dong, Yichen Xu, Bernardo Pires, He Wen, Lucas Evans, Bo Peng, Julia Buffalini, Autumn Trimble, Kevyn McPhail, Melissa Schoeller, Shoou-I Yu, Javier Romero, Michael Zollhöfer, Yaser Sheikh, Ziwei Liu, Shunsuke Saito

To simplify the personalization process while retaining photorealism, we build a powerful universal relightable prior based on neural relighting from multi-view images of hands captured in a light stage with hundreds of identities.

Relightable Gaussian Codec Avatars

no code implementations • 6 Dec 2023 • Shunsuke Saito, Gabriel Schwartz, Tomas Simon, Junxuan Li, Giljoo Nam

The fidelity of relighting is bounded by both geometry and appearance representations.

Single-Image 3D Human Digitization with Shape-Guided Diffusion

no code implementations • 15 Nov 2023 • Badour AlBahar, Shunsuke Saito, Hung-Yu Tseng, Changil Kim, Johannes Kopf, Jia-Bin Huang

We present an approach to generate a 360-degree view of a person with a consistent, high-resolution appearance from a single input image.

Image Generation · Inverse Rendering

Drivable 3D Gaussian Avatars

no code implementations • 14 Nov 2023 • Wojciech Zielonka, Timur Bagautdinov, Shunsuke Saito, Michael Zollhöfer, Justus Thies, Javier Romero

We present Drivable 3D Gaussian Avatars (D3GA), the first 3D controllable model for human bodies rendered with Gaussian splats.

Diffusion Shape Prior for Wrinkle-Accurate Cloth Registration

no code implementations • 10 Nov 2023 • Jingfan Guo, Fabian Prada, Donglai Xiang, Javier Romero, Chenglei Wu, Hyun Soo Park, Takaaki Shiratori, Shunsuke Saito

Registering clothes from 4D scans with vertex-accurate correspondence is challenging, yet important for dynamic appearance modeling and physics parameter estimation from real-world data.

Diffusion Posterior Illumination for Ambiguity-aware Inverse Rendering

1 code implementation • 30 Sep 2023 • Linjie Lyu, Ayush Tewari, Marc Habermann, Shunsuke Saito, Michael Zollhöfer, Thomas Leimkühler, Christian Theobalt

We further conduct an extensive comparative study of different priors on illumination used in previous work on inverse rendering.

Denoising · Inverse Rendering

Neural Relighting with Subsurface Scattering by Learning the Radiance Transfer Gradient

no code implementations • 15 Jun 2023 • Shizhan Zhu, Shunsuke Saito, Aljaz Bozic, Carlos Aliaga, Trevor Darrell, Christoph Lassner

Reconstructing and relighting objects and scenes under varying lighting conditions is challenging: existing neural rendering methods often cannot handle the complex interactions between materials and light.

Neural Rendering

NCHO: Unsupervised Learning for Neural 3D Composition of Humans and Objects

1 code implementation • ICCV 2023 • Taeksoo Kim, Shunsuke Saito, Hanbyul Joo

Our compositional model is interaction-aware, meaning that both the spatial relationship between humans and objects and the mutual shape change caused by physical contact are fully incorporated.

RelightableHands: Efficient Neural Relighting of Articulated Hand Models

no code implementations • CVPR 2023 • Shun Iwase, Shunsuke Saito, Tomas Simon, Stephen Lombardi, Timur Bagautdinov, Rohan Joshi, Fabian Prada, Takaaki Shiratori, Yaser Sheikh, Jason Saragih

To achieve generalization, we condition the student model with physics-inspired illumination features such as visibility, diffuse shading, and specular reflections computed on a coarse proxy geometry, maintaining a small computational overhead.
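
A minimal sketch of such conditioning features, assuming a Lambertian diffuse term and a Blinn-Phong specular lobe evaluated on the proxy (the actual features and student architecture are not specified here):

```python
# Hedged sketch: cheap physics-inspired illumination features computed on a
# coarse proxy, stacked as conditioning channels for a student network.
import numpy as np

def illumination_features(normals, view_dirs, light_dirs, visibility):
    """normals, view_dirs, light_dirs: (N, 3) unit vectors per proxy point;
    visibility: (N,) light visibility in [0, 1] precomputed on the proxy."""
    diffuse = np.clip((normals * light_dirs).sum(-1), 0.0, None)    # n.l shading
    half = light_dirs + view_dirs                                   # half vector
    half /= np.linalg.norm(half, axis=-1, keepdims=True) + 1e-8
    specular = np.clip((normals * half).sum(-1), 0.0, None) ** 32   # assumed lobe
    return np.stack([visibility, diffuse, specular], axis=-1)       # (N, 3)
```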

MEGANE: Morphable Eyeglass and Avatar Network

no code implementations • CVPR 2023 • Junxuan Li, Shunsuke Saito, Tomas Simon, Stephen Lombardi, Hongdong Li, Jason Saragih

However, modeling the geometric and appearance interactions between eyeglasses and the faces of virtual humans is challenging.

Image Generation · Inverse Rendering

Neural Strands: Learning Hair Geometry and Appearance from Multi-View Images

no code implementations • 28 Jul 2022 • Radu Alexandru Rosu, Shunsuke Saito, Ziyan Wang, Chenglei Wu, Sven Behnke, Giljoo Nam

Furthermore, we introduce a novel neural rendering framework based on rasterization of the learned hair strands.

Neural Rendering

Drivable Volumetric Avatars using Texel-Aligned Features

no code implementations • 20 Jul 2022 • Edoardo Remelli, Timur Bagautdinov, Shunsuke Saito, Tomas Simon, Chenglei Wu, Shih-En Wei, Kaiwen Guo, Zhe Cao, Fabian Prada, Jason Saragih, Yaser Sheikh

To circumvent this, we propose a novel volumetric avatar representation by extending mixtures of volumetric primitives to articulated objects.

Dressing Avatars: Deep Photorealistic Appearance for Physically Simulated Clothing

no code implementations • 30 Jun 2022 • Donglai Xiang, Timur Bagautdinov, Tuur Stuyck, Fabian Prada, Javier Romero, Weipeng Xu, Shunsuke Saito, Jingfan Guo, Breannan Smith, Takaaki Shiratori, Yaser Sheikh, Jessica Hodgins, Chenglei Wu

The key idea is to introduce a neural clothing appearance model that operates on top of explicit geometry: at training time we use high-fidelity tracking, whereas at animation time we rely on physically simulated geometry.

KeypointNeRF: Generalizing Image-based Volumetric Avatars using Relative Spatial Encoding of Keypoints

1 code implementation • 10 May 2022 • Marko Mihajlovic, Aayush Bansal, Michael Zollhoefer, Siyu Tang, Shunsuke Saito

In this work, we investigate common issues with existing spatial encodings and propose a simple yet highly effective approach to modeling high-fidelity volumetric humans from sparse views.

3D Face Reconstruction · 3D Human Reconstruction +2
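
The relative encoding admits a small sketch. The version below (an assumed simplification, not the paper's exact formulation) describes each query point by its depth offset to every keypoint along each camera's optical axis, which is robust to shifting the whole subject:

```python
# Hedged sketch of a keypoint-relative spatial encoding.
import torch

def relative_depth_encoding(points, keypoints, cam_centers, cam_axes):
    """points: (N, 3) queries; keypoints: (K, 3) 3D keypoints;
    cam_centers: (V, 3); cam_axes: (V, 3) unit optical axes.
    Returns (N, V, K): per-view depth of each point minus each keypoint."""
    pd = torch.einsum('vj,nvj->nv', cam_axes, points[:, None] - cam_centers)
    kd = torch.einsum('vj,kvj->kv', cam_axes, keypoints[:, None] - cam_centers)
    return pd[:, :, None] - kd.t()[None]
```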

Neural Fields in Visual Computing and Beyond

1 code implementation • 22 Nov 2021 • Yiheng Xie, Towaki Takikawa, Shunsuke Saito, Or Litany, Shiqin Yan, Numair Khan, Federico Tombari, James Tompkin, Vincent Sitzmann, Srinath Sridhar

Recent advances in machine learning have created increasing interest in solving visual computing problems using a class of coordinate-based neural networks that parametrize physical properties of scenes or objects across space and time.

3D Reconstruction · Image Animation +1
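
In code, the class of networks being surveyed is small; a minimal coordinate-based field with the standard sinusoidal positional encoding looks roughly like this (illustrative layer sizes and encoding):

```python
# Minimal "neural field": an MLP mapping a coordinate to a physical quantity.
import torch
import torch.nn as nn

class NeuralField(nn.Module):
    def __init__(self, in_dim=3, out_dim=1, n_freqs=6, hidden=128):
        super().__init__()
        self.register_buffer('freqs', 2.0 ** torch.arange(n_freqs))
        enc_dim = in_dim * (1 + 2 * n_freqs)
        self.net = nn.Sequential(
            nn.Linear(enc_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):                  # x: (N, in_dim) coordinates
        xf = x[..., None] * self.freqs     # (N, in_dim, n_freqs)
        enc = torch.cat([x, torch.sin(xf).flatten(-2), torch.cos(xf).flatten(-2)], -1)
        return self.net(enc)               # e.g. density, SDF, occupancy, color
```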

Pixel-Aligned Volumetric Avatars

no code implementations • CVPR 2021 • Amit Raj, Michael Zollhöfer, Tomas Simon, Jason Saragih, Shunsuke Saito, James Hays, Stephen Lombardi

Volumetric models typically employ a global code to represent facial expressions, such that they can be driven by a small set of animation parameters.

Generalizable Novel View Synthesis

SCANimate: Weakly Supervised Learning of Skinned Clothed Avatar Networks

2 code implementations • CVPR 2021 • Shunsuke Saito, Jinlong Yang, Qianli Ma, Michael J. Black

We present SCANimate, an end-to-end trainable framework that takes raw 3D scans of a clothed human and turns them into an animatable avatar.

Weakly-supervised Learning
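
The core mechanical step of any such animatable avatar is skinning. Below is a hedged sketch of linear blend skinning with network-predicted weights (illustrative only; SCANimate's actual formulation, including its weak-supervision losses, is more involved):

```python
# Linear blend skinning with learned per-point weights (sketch).
import torch
import torch.nn.functional as F

def pose_points(points, weight_logits, joint_transforms):
    """points: (N, 3) canonical points; weight_logits: (N, J) from a
    skinning-weight network; joint_transforms: (J, 4, 4) posed bones."""
    w = F.softmax(weight_logits, dim=-1)                   # convex weights
    T = torch.einsum('nj,jab->nab', w, joint_transforms)   # blended (N, 4, 4)
    homo = F.pad(points, (0, 1), value=1.0)                # homogeneous coords
    return torch.einsum('nab,nb->na', T, homo)[:, :3]      # posed points
```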

Monocular Real-Time Volumetric Performance Capture

1 code implementation • ECCV 2020 • Ruilong Li, Yuliang Xiu, Shunsuke Saito, Zeng Huang, Kyle Olszewski, Hao Li

We present the first approach to volumetric performance capture and novel-view rendering at real-time speed from monocular video, eliminating the need for expensive multi-view systems or cumbersome pre-acquisition of a personalized template model.

3D Human Shape Estimation

PIFuHD: Multi-Level Pixel-Aligned Implicit Function for High-Resolution 3D Human Digitization

3 code implementations • CVPR 2020 • Shunsuke Saito, Tomas Simon, Jason Saragih, Hanbyul Joo

Although current approaches have demonstrated their potential in real-world settings, they still fail to produce reconstructions with the level of detail often present in the input images.

3D Human Pose Estimation · 3D Human Reconstruction +3

Learning to Infer Implicit Surfaces without 3D Supervision

no code implementations • NeurIPS 2019 • Shichen Liu, Shunsuke Saito, Weikai Chen, Hao Li

The representation of 3D surfaces itself is a key factor for the quality and resolution of the 3D output.

3D Shape Generation

PIFu: Pixel-Aligned Implicit Function for High-Resolution Clothed Human Digitization

1 code implementation • ICCV 2019 • Shunsuke Saito, Zeng Huang, Ryota Natsume, Shigeo Morishima, Angjoo Kanazawa, Hao Li

We introduce Pixel-aligned Implicit Function (PIFu), a highly effective implicit representation that locally aligns pixels of 2D images with the global context of their corresponding 3D object.

3D Human Pose Estimation · 3D Human Reconstruction +3
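
The pixel-aligned sampling reduces to a few lines. Below is a hedged PyTorch sketch (not the released implementation; the feature encoder, the calibration convention mapping projections into [-1, 1], and the layer widths are assumptions): image features are sampled at each 3D query point's 2D projection, and an MLP classifies the point as inside or outside the surface.

```python
# Sketch of a pixel-aligned implicit function: per-point occupancy from
# bilinearly sampled image features plus the point's depth.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelAlignedOccupancy(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 1, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1),  # inside/outside occupancy logit
        )

    def forward(self, feat_map, points, calib):
        # feat_map: (B, C, H, W) features from any 2D encoder
        # points:   (B, N, 3) 3D query points
        # calib:    (B, 3, 4) cameras assumed to project into [-1, 1] coords
        homo = F.pad(points, (0, 1), value=1.0)              # (B, N, 4)
        proj = torch.einsum('bij,bnj->bni', calib, homo)     # (B, N, 3)
        xy = proj[..., :2] / proj[..., 2:].clamp(min=1e-6)   # pixel-aligned uv
        z = proj[..., 2:]                                    # depth feature
        feat = F.grid_sample(feat_map, xy.unsqueeze(2), align_corners=True)
        feat = feat.squeeze(-1).permute(0, 2, 1)             # (B, N, C)
        return self.mlp(torch.cat([feat, z], dim=-1))        # (B, N, 1)
```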

SiCloPe: Silhouette-Based Clothed People

1 code implementation • CVPR 2019 • Ryota Natsume, Shunsuke Saito, Zeng Huang, Weikai Chen, Chongyang Ma, Hao Li, Shigeo Morishima

The synthesized silhouettes that are most consistent with the input segmentation are fed into a deep visual hull algorithm for robust 3D shape prediction (a classical version of which is sketched below).

Generative Adversarial Network · Image-to-Image Translation
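
For intuition, here is the classical, non-learned visual hull that the "deep visual hull" generalizes: a voxel stays occupied only if it projects inside every silhouette. Purely illustrative; the paper replaces this carving with a network.

```python
# Classical visual hull by silhouette carving (sketch).
import numpy as np

def visual_hull(silhouettes, projections, grid):
    """silhouettes: list of (H, W) boolean masks; projections: list of (3, 4)
    camera matrices; grid: (N, 3) voxel centers. Returns (N,) occupancy."""
    occupied = np.ones(len(grid), dtype=bool)
    homo = np.concatenate([grid, np.ones((len(grid), 1))], axis=1)  # (N, 4)
    for mask, P in zip(silhouettes, projections):
        uvw = homo @ P.T                                 # (N, 3) image coords
        z = uvw[:, 2].clip(min=1e-8)
        u = (uvw[:, 0] / z).round().astype(int)
        v = (uvw[:, 1] / z).round().astype(int)
        H, W = mask.shape
        inside = (u >= 0) & (u < W) & (v >= 0) & (v < H)
        occupied &= inside                               # outside image: carve
        occupied[inside] &= mask[v[inside], u[inside]]   # outside mask: carve
    return occupied
```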

Realistic Dynamic Facial Textures From a Single Image Using GANs

no code implementations • ICCV 2017 • Kyle Olszewski, Zimo Li, Chao Yang, Yi Zhou, Ronald Yu, Zeng Huang, Sitao Xiang, Shunsuke Saito, Pushmeet Kohli, Hao Li

By retargeting the PCA expression geometry from the source, as well as using the newly inferred texture, we can both animate the face and perform video face replacement on the source video using the target appearance.
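
The retargeting step quoted above is linear-algebraically simple; a hedged sketch follows (illustrative names; it assumes source and target share one orthonormal PCA expression basis):

```python
# PCA expression retargeting (sketch): recover expression coefficients from
# the source geometry, then apply them on the target identity's mean shape.
import numpy as np

def retarget_expression(source_verts, source_mean, target_mean, basis):
    """source_verts, source_mean, target_mean: (3V,) flattened vertices;
    basis: (3V, K) orthonormal PCA expression basis (assumed shared)."""
    coeffs = basis.T @ (source_verts - source_mean)  # (K,) expression weights
    return target_mean + basis @ coeffs              # same expression, new face
```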

Learning Dense Facial Correspondences in Unconstrained Images

no code implementations • ICCV 2017 • Ronald Yu, Shunsuke Saito, Haoxiang Li, Duygu Ceylan, Hao Li

To train such a network, we generate a massive dataset of synthetic faces with dense labels using renderings of a morphable face model with variations in pose, expressions, lighting, and occlusions.

Face Alignment · Face Model

Photorealistic Facial Texture Inference Using Deep Neural Networks

1 code implementation • CVPR 2017 • Shunsuke Saito, Lingyu Wei, Liwen Hu, Koki Nagano, Hao Li

We present a data-driven inference method that can synthesize a photorealistic texture map of a complete 3D face model given a partial 2D view of a person in the wild.

Face Model

Production-Level Facial Performance Capture Using Deep Convolutional Neural Networks

1 code implementation • 21 Sep 2016 • Samuli Laine, Tero Karras, Timo Aila, Antti Herva, Shunsuke Saito, Ronald Yu, Hao Li, Jaakko Lehtinen

We present a real-time deep learning framework for video-based facial performance capture -- the dense 3D tracking of an actor's face given a monocular video.

Real-Time Facial Segmentation and Performance Capture from RGB Input

no code implementations • 10 Apr 2016 • Shunsuke Saito, Tianye Li, Hao Li

We adopt a state-of-the-art regression-based facial tracking framework with segmented face images as training, and demonstrate accurate and uninterrupted facial performance capture in the presence of extreme occlusion and even side views.

Data Augmentation · Segmentation +1
