no code implementations • 15 Nov 2023 • Badour AlBahar, Shunsuke Saito, Hung-Yu Tseng, Changil Kim, Johannes Kopf, Jia-Bin Huang
We present an approach to generate a 360-degree view of a person with a consistent, high-resolution appearance from a single input image.
no code implementations • 14 Nov 2023 • Wojciech Zielonka, Timur Bagautdinov, Shunsuke Saito, Michael Zollhöfer, Justus Thies, Javier Romero
We present Drivable 3D Gaussian Avatars (D3GA), the first 3D controllable model for human bodies rendered with Gaussian splats.
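A minimal sketch of the representation (an illustration under assumed structure, not the authors' code; the paper's own driving mechanism is learned, whereas the sketch below substitutes plain linear blend skinning):

```python
# Hedged sketch: 3D Gaussians whose canonical centers are deformed by a
# skeletal pose via linear blend skinning (a simplification of the paper's
# driving scheme). All sizes and names are illustrative.
import numpy as np

N, J = 1024, 24                                    # Gaussians, joints
means = np.random.randn(N, 3)                      # canonical splat centers
scales = np.full((N, 3), 0.01)                     # per-axis extents
opacities = np.full(N, 0.9)
colors = np.random.rand(N, 3)
weights = np.random.dirichlet(np.ones(J), size=N)  # skinning weights, rows sum to 1

def drive(means, weights, joint_transforms):
    """Deform canonical centers with linear blend skinning.
    joint_transforms: (J, 4, 4) world transforms for the current pose."""
    homo = np.concatenate([means, np.ones((len(means), 1))], axis=1)  # (N, 4)
    blended = np.einsum('nj,jab->nab', weights, joint_transforms)     # (N, 4, 4)
    return np.einsum('nab,nb->na', blended, homo)[:, :3]

posed_means = drive(means, weights, np.tile(np.eye(4), (J, 1, 1)))    # rest pose
```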
no code implementations • 10 Nov 2023 • Jingfan Guo, Fabian Prada, Donglai Xiang, Javier Romero, Chenglei Wu, Hyun Soo Park, Takaaki Shiratori, Shunsuke Saito
Registering clothes from 4D scans with vertex-accurate correspondence is challenging, yet important for dynamic appearance modeling and physics parameter estimation from real-world data.
no code implementations • 30 Sep 2023 • Linjie Lyu, Ayush Tewari, Marc Habermann, Shunsuke Saito, Michael Zollhöfer, Thomas Leimkühler, Christian Theobalt
We further conduct an extensive comparative study of different priors on illumination used in previous work on inverse rendering.
no code implementations • 15 Jun 2023 • Shizhan Zhu, Shunsuke Saito, Aljaz Bozic, Carlos Aliaga, Trevor Darrell, Christoph Lassner
Reconstructing and relighting objects and scenes under varying lighting conditions is challenging: existing neural rendering methods often cannot handle the complex interactions between materials and light.
1 code implementation • ICCV 2023 • Taeksoo Kim, Shunsuke Saito, Hanbyul Joo
Our compositional model is interaction-aware, meaning that the spatial relationship between humans and objects and the mutual shape changes induced by physical contact are fully incorporated.
no code implementations • CVPR 2023 • Shun Iwase, Shunsuke Saito, Tomas Simon, Stephen Lombardi, Timur Bagautdinov, Rohan Joshi, Fabian Prada, Takaaki Shiratori, Yaser Sheikh, Jason Saragih
To achieve generalization, we condition the student model with physics-inspired illumination features such as visibility, diffuse shading, and specular reflections computed on a coarse proxy geometry, maintaining a small computational overhead.
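A minimal sketch of how two of the named features could be computed on a coarse proxy (Lambertian diffuse shading and a Blinn-Phong-style specular term; visibility additionally requires ray casting and is omitted; the shading model and names are assumptions, not the paper's code):

```python
# Hedged sketch: per-point illumination features from proxy-surface normals.
import numpy as np

def illumination_features(normals, points, light_pos, cam_pos, shininess=32.0):
    """normals, points: (N, 3); light_pos, cam_pos: (3,) world positions."""
    l = light_pos - points
    l /= np.linalg.norm(l, axis=-1, keepdims=True)   # unit light direction
    v = cam_pos - points
    v /= np.linalg.norm(v, axis=-1, keepdims=True)   # unit view direction
    h = l + v
    h /= np.linalg.norm(h, axis=-1, keepdims=True)   # half vector
    diffuse = np.clip((normals * l).sum(-1), 0.0, None)               # n . l
    specular = np.clip((normals * h).sum(-1), 0.0, None) ** shininess
    return np.stack([diffuse, specular], axis=-1)    # (N, 2) conditioning features
```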
no code implementations • CVPR 2023 • Junxuan Li, Shunsuke Saito, Tomas Simon, Stephen Lombardi, Hongdong Li, Jason Saragih
However, modeling the geometric and appearance interactions between glasses and the faces of virtual human representations is challenging.
no code implementations • 28 Jul 2022 • Radu Alexandru Rosu, Shunsuke Saito, Ziyan Wang, Chenglei Wu, Sven Behnke, Giljoo Nam
Furthermore, we introduce a novel neural rendering framework based on rasterization of the learned hair strands.
no code implementations • 20 Jul 2022 • Edoardo Remelli, Timur Bagautdinov, Shunsuke Saito, Tomas Simon, Chenglei Wu, Shih-En Wei, Kaiwen Guo, Zhe Cao, Fabian Prada, Jason Saragih, Yaser Sheikh
To circumvent this, we propose a novel volumetric avatar representation by extending mixtures of volumetric primitives to articulated objects.
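For intuition, a minimal sketch of the general idea (assumed structure, not the authors' implementation): each primitive is a small payload volume rigidly attached to a skeleton joint, so posing the joints re-poses the volumes:

```python
# Hedged sketch: volumetric primitives attached to an articulated skeleton.
import numpy as np

class Primitive:
    def __init__(self, joint_id, res=8):
        self.joint_id = joint_id
        self.rgba = np.zeros((res, res, res, 4))  # payload; decoded by a network in practice
        self.offset = np.eye(4)                   # local transform relative to its joint

def posed_transforms(primitives, joint_transforms):
    """joint_transforms: (J, 4, 4). Returns one world transform per primitive."""
    return [joint_transforms[p.joint_id] @ p.offset for p in primitives]
```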
no code implementations • 30 Jun 2022 • Donglai Xiang, Timur Bagautdinov, Tuur Stuyck, Fabian Prada, Javier Romero, Weipeng Xu, Shunsuke Saito, Jingfan Guo, Breannan Smith, Takaaki Shiratori, Yaser Sheikh, Jessica Hodgins, Chenglei Wu
The key idea is to introduce a neural clothing appearance model that operates on top of explicit geometry: at training time we use high-fidelity tracking, whereas at animation time we rely on physically simulated geometry.
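One way such an appearance model could sit on top of explicit geometry (a hedged sketch; the architecture and inputs below are assumptions, not the paper's design): a small network maps per-point geometry attributes and a view direction to color, so the same network can consume tracked geometry at training time and simulated geometry at animation time:

```python
# Hedged sketch: appearance network over explicit clothing geometry.
import torch
import torch.nn as nn

appearance = nn.Sequential(
    nn.Linear(3 + 3 + 3, 128), nn.ReLU(),  # position, normal, view direction
    nn.Linear(128, 3), nn.Sigmoid(),       # RGB
)

def shade(positions, normals, view_dirs):
    """All inputs (N, 3); geometry may come from tracking or simulation."""
    return appearance(torch.cat([positions, normals, view_dirs], dim=-1))
```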
1 code implementation • 10 May 2022 • Marko Mihajlovic, Aayush Bansal, Michael Zollhoefer, Siyu Tang, Shunsuke Saito
In this work, we investigate common issues with existing spatial encodings and propose a simple yet highly effective approach to modeling high-fidelity volumetric humans from sparse views.
Ranked #1 on Generalizable Novel View Synthesis on ZJU-MoCap
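For the entry above, a minimal sketch of one plausible spatial encoding in this spirit (an illustration and an assumption; consult the paper for its exact formulation): describe each 3D query point relative to sparse 3D keypoints rather than by global coordinates, which transfers better across subjects:

```python
# Hedged sketch: keypoint-relative encoding of a query point.
import numpy as np

def relative_encoding(query, keypoints):
    """query: (3,), keypoints: (K, 3) -> (K, 4) offsets plus distances."""
    offsets = keypoints - query                             # translation-invariant
    dists = np.linalg.norm(offsets, axis=-1, keepdims=True)
    return np.concatenate([offsets, dists], axis=-1)
```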
1 code implementation • CVPR 2022 • Marko Mihajlovic, Shunsuke Saito, Aayush Bansal, Michael Zollhoefer, Siyu Tang
We present a novel neural implicit representation for articulated human bodies.
1 code implementation • 25 Mar 2022 • Ziqian Bai, Timur Bagautdinov, Javier Romero, Michael Zollhöfer, Ping Tan, Shunsuke Saito
In this work, for the first time, we enable autoregressive modeling of implicit avatars.
1 code implementation • 22 Nov 2021 • Yiheng Xie, Towaki Takikawa, Shunsuke Saito, Or Litany, Shiqin Yan, Numair Khan, Federico Tombari, James Tompkin, Vincent Sitzmann, Srinath Sridhar
Recent advances in machine learning have created increasing interest in solving visual computing problems using a class of coordinate-based neural networks that parametrize physical properties of scenes or objects across space and time.
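A minimal example of such a coordinate-based network: an MLP that maps a spatial coordinate, lifted with Fourier positional encoding, to a physical quantity such as color and density (all sizes below are illustrative):

```python
# Hedged sketch: a coordinate-based network ("neural field").
import torch
import torch.nn as nn

def positional_encoding(x, num_freqs=6):
    """x: (..., d) -> (..., 2 * num_freqs * d) sin/cos Fourier features."""
    freqs = 2.0 ** torch.arange(num_freqs) * torch.pi
    angles = x[..., None] * freqs                  # (..., d, num_freqs)
    feats = torch.cat([angles.sin(), angles.cos()], dim=-1)
    return feats.flatten(start_dim=-2)

field = nn.Sequential(
    nn.Linear(2 * 6 * 3, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 4),                             # e.g. RGB + density
)

out = field(positional_encoding(torch.rand(1024, 3)))   # query 1024 3D points
```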
no code implementations • ICCV 2021 • Tong He, Yuanlu Xu, Shunsuke Saito, Stefano Soatto, Tony Tung
We present ARCH++, an image-based method to reconstruct 3D avatars with arbitrary clothing styles.
Ranked #1 on 3D Object Reconstruction From A Single Image on RenderPeople (using extra training data)
no code implementations • CVPR 2021 • Amit Raj, Michael Zollhöfer, Tomas Simon, Jason Saragih, Shunsuke Saito, James Hays, Stephen Lombardi
Volumetric models typically employ a global code to represent facial expressions, such that they can be driven by a small set of animation parameters.
Ranked #4 on Generalizable Novel View Synthesis on ZJU-MoCap
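For the entry above, a minimal sketch of the global-code idea (illustrative, not the paper's architecture): a volumetric decoder conditioned on one expression vector shared by all query points, so a few animation parameters drive the whole face:

```python
# Hedged sketch: a decoder driven by a single global expression code.
import torch
import torch.nn as nn

class GlobalCodeDecoder(nn.Module):
    def __init__(self, code_dim=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + code_dim, 128), nn.ReLU(),
            nn.Linear(128, 4),                 # density + RGB at the query point
        )

    def forward(self, xyz, code):
        code = code.expand(xyz.shape[0], -1)   # one code shared by all points
        return self.mlp(torch.cat([xyz, code], dim=-1))

decoder = GlobalCodeDecoder()
out = decoder(torch.rand(512, 3), torch.zeros(1, 16))
```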
1 code implementation • CVPR 2021 • Qianli Ma, Shunsuke Saito, Jinlong Yang, Siyu Tang, Michael J. Black
We demonstrate the efficacy of our surface representation by learning models of complex clothing from point clouds.
2 code implementations • CVPR 2021 • Shunsuke Saito, Jinlong Yang, Qianli Ma, Michael J. Black
We present SCANimate, an end-to-end trainable framework that takes raw 3D scans of a clothed human and turns them into an animatable avatar.
1 code implementation • ECCV 2020 • Ruilong Li, Yuliang Xiu, Shunsuke Saito, Zeng Huang, Kyle Olszewski, Hao Li
We present the first approach to volumetric performance capture and novel-view rendering at real-time speed from monocular video, eliminating the need for expensive multi-view systems or cumbersome pre-acquisition of a personalized template model.
3 code implementations • CVPR 2020 • Shunsuke Saito, Tomas Simon, Jason Saragih, Hanbyul Joo
Although current approaches have demonstrated their potential in real-world settings, they still fail to produce reconstructions with the level of detail often present in the input images.
Ranked #1 on 3D Object Reconstruction From A Single Image on BUFF
no code implementations • NeurIPS 2019 • Shichen Liu, Shunsuke Saito, Weikai Chen, Hao Li
The representation of 3D surfaces itself is a key factor for the quality and resolution of the 3D output.
1 code implementation • ICCV 2019 • Shunsuke Saito, Zeng Huang, Ryota Natsume, Shigeo Morishima, Angjoo Kanazawa, Hao Li
We introduce Pixel-aligned Implicit Function (PIFu), a highly effective implicit representation that locally aligns pixels of 2D images with the global context of their corresponding 3D object.
Ranked #1 on 3D Object Reconstruction on RenderPeople
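For the entry above, a minimal sketch of the pixel-aligned idea (shapes and module names are illustrative assumptions): project a 3D point into the image, bilinearly sample the 2D feature map at that pixel, and classify occupancy from the local feature plus the point's depth:

```python
# Hedged sketch: pixel-aligned implicit occupancy query.
import torch
import torch.nn as nn
import torch.nn.functional as F

feat = torch.rand(1, 64, 128, 128)   # feature map from a 2D image encoder
mlp = nn.Sequential(nn.Linear(64 + 1, 128), nn.ReLU(), nn.Linear(128, 1))

def occupancy(points_xyz):
    """points_xyz: (N, 3) with x, y in [-1, 1] image coords, z a depth value."""
    uv = points_xyz[:, :2].view(1, -1, 1, 2)               # sampling grid
    sampled = F.grid_sample(feat, uv, align_corners=True)  # (1, 64, N, 1)
    sampled = sampled[0, :, :, 0].t()                      # (N, 64) local features
    return torch.sigmoid(mlp(torch.cat([sampled, points_xyz[:, 2:3]], dim=-1)))
```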
1 code implementation • CVPR 2019 • Ryota Natsume, Shunsuke Saito, Zeng Huang, Weikai Chen, Chongyang Ma, Hao Li, Shigeo Morishima
The synthesized silhouettes which are the most consistent with the input segmentation are fed into a deep visual hull algorithm for robust 3D shape prediction.
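For context, a minimal sketch of the classical visual hull that the deep variant builds on (the paper replaces this carving step with a learned network; `project` is an assumed, user-supplied camera projection): keep only voxels that fall inside the silhouette in every view:

```python
# Hedged sketch: silhouette carving (classical visual hull).
import numpy as np

def visual_hull(voxels, silhouettes, project):
    """voxels: (N, 3) centers; silhouettes: list of (H, W) boolean masks;
    project(points, i) -> (N, 2) integer pixel coordinates for view i."""
    inside = np.ones(len(voxels), dtype=bool)
    for i, sil in enumerate(silhouettes):
        px = project(voxels, i)
        valid = (px[:, 0] >= 0) & (px[:, 0] < sil.shape[1]) & \
                (px[:, 1] >= 0) & (px[:, 1] < sil.shape[0])
        hit = np.zeros(len(voxels), dtype=bool)
        hit[valid] = sil[px[valid, 1], px[valid, 0]]
        inside &= hit                        # carve voxels outside any view
    return voxels[inside]
```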
no code implementations • CVPR 2018 • Loc Huynh, Weikai Chen, Shunsuke Saito, Jun Xing, Koki Nagano, Andrew Jones, Paul Debevec, Hao Li
We present a learning-based approach for synthesizing facial geometry at medium and fine scales from diffusely-lit facial texture maps.
no code implementations • ICCV 2017 • Kyle Olszewski, Zimo Li, Chao Yang, Yi Zhou, Ronald Yu, Zeng Huang, Sitao Xiang, Shunsuke Saito, Pushmeet Kohli, Hao Li
By retargeting the PCA expression geometry from the source, as well as using the newly inferred texture, we can both animate the face and perform video face replacement on the source video using the target appearance.
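A minimal sketch of PCA expression retargeting (all names and sizes are illustrative): expression coefficients recovered from the source performance are applied to the target's PCA basis to pose the target face:

```python
# Hedged sketch: applying source expression coefficients to a target PCA basis.
import numpy as np

V, K = 5000, 25                                   # vertices, expression components
target_mean = np.zeros((V, 3))                    # target neutral face
target_basis = np.random.randn(K, V, 3) * 0.01    # target PCA expression basis

def retarget(source_coeffs):
    """source_coeffs: (K,) regressed from the source video."""
    return target_mean + np.tensordot(source_coeffs, target_basis, axes=1)

posed = retarget(np.random.randn(K) * 0.1)        # (V, 3) posed target vertices
```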
no code implementations • ICCV 2017 • Ronald Yu, Shunsuke Saito, Haoxiang Li, Duygu Ceylan, Hao Li
To train such a network, we generate a massive dataset of synthetic faces with dense labels using renderings of a morphable face model with variations in pose, expressions, lighting, and occlusions.
1 code implementation • CVPR 2017 • Shunsuke Saito, Lingyu Wei, Liwen Hu, Koki Nagano, Hao Li
We present a data-driven inference method that can synthesize a photorealistic texture map of a complete 3D face model given a partial 2D view of a person in the wild.
1 code implementation • 21 Sep 2016 • Samuli Laine, Tero Karras, Timo Aila, Antti Herva, Shunsuke Saito, Ronald Yu, Hao Li, Jaakko Lehtinen
We present a real-time deep learning framework for video-based facial performance capture -- the dense 3D tracking of an actor's face given a monocular video.
no code implementations • 10 Apr 2016 • Shunsuke Saito, Tianye Li, Hao Li
We adopt a state-of-the-art regression-based facial tracking framework trained with segmented face images, and demonstrate accurate and uninterrupted facial performance capture in the presence of extreme occlusions and even side views.