no code implementations • 10 May 2022 • Marko Mihajlovic, Aayush Bansal, Michael Zollhoefer, Siyu Tang, Shunsuke Saito
In this work, we investigate common issues with existing spatial encodings and propose a simple yet highly effective approach to modeling high-fidelity volumetric avatars from sparse views.
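As a rough illustration of the kind of spatial encoding at issue (a hedged sketch, not necessarily the paper's exact scheme), a query point can be described relative to a sparse set of 3D keypoints instead of by its absolute coordinates:

```python
# Hypothetical sketch: encode a 3D query point by its relation to sparse
# keypoints rather than by absolute world coordinates. Names are illustrative.
import numpy as np

def relative_keypoint_encoding(query, keypoints):
    """query: (3,) point in world space; keypoints: (K, 3) detected keypoints.
    Returns a (K,) array of distances, a simple pose-aware spatial code."""
    return np.linalg.norm(keypoints - query[None, :], axis=-1)

query = np.zeros(3)
keypoints = np.random.randn(13, 3)
code = relative_keypoint_encoding(query, keypoints)  # (13,)
```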
no code implementations • 13 Apr 2022 • Marko Mihajlovic, Shunsuke Saito, Aayush Bansal, Michael Zollhoefer, Siyu Tang
We present a novel neural implicit representation for articulated human bodies.
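For intuition, here is a minimal sketch of one common construction for articulated implicit bodies (architecture and names are illustrative, not necessarily this paper's formulation): per-part occupancy networks queried in local bone frames and merged by a union (max):

```python
import torch

class PartOccupancy(torch.nn.Module):
    """Tiny occupancy network for one body part, queried in its local frame."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(3, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, 1))

    def forward(self, x_local):                   # (N, 3) local-frame points
        return torch.sigmoid(self.net(x_local))   # (N, 1) occupancy

def compose_occupancy(parts, world_to_part, x_world):
    """Union of per-part occupancies: transform the query into each part's
    local frame, evaluate, and take the maximum over parts."""
    occs = []
    for part, T in zip(parts, world_to_part):     # T: (4, 4) rigid transform
        x_h = torch.cat([x_world, torch.ones_like(x_world[:, :1])], dim=-1)
        occs.append(part((x_h @ T.T)[:, :3]))
    return torch.stack(occs, dim=0).max(dim=0).values

parts = [PartOccupancy() for _ in range(2)]
Ts = [torch.eye(4), torch.eye(4)]                 # placeholder bone transforms
occ = compose_occupancy(parts, Ts, torch.rand(10, 3))  # (10, 1)
```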
no code implementations • 25 Mar 2022 • Ziqian Bai, Timur Bagautdinov, Javier Romero, Michael Zollhöfer, Ping Tan, Shunsuke Saito
In this work, for the first time, we enable autoregressive modeling of implicit avatars.
no code implementations • 22 Nov 2021 • Yiheng Xie, Towaki Takikawa, Shunsuke Saito, Or Litany, Shiqin Yan, Numair Khan, Federico Tombari, James Tompkin, Vincent Sitzmann, Srinath Sridhar
Recent advances in machine learning have created increasing interest in solving visual computing problems using a class of coordinate-based neural networks that parametrize physical properties of scenes or objects across space and time.
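A minimal sketch of such a coordinate-based network (a "neural field"): an MLP that maps a spatial coordinate to a physical quantity, here RGB color, with the usual sinusoidal positional encoding. All sizes are illustrative:

```python
import torch

def positional_encoding(x, num_freqs=6):
    """Standard sinusoidal encoding for recovering high-frequency detail."""
    freqs = 2.0 ** torch.arange(num_freqs) * torch.pi      # (F,)
    angles = x[..., None] * freqs                          # (..., D, F)
    enc = torch.cat([angles.sin(), angles.cos()], dim=-1)  # (..., D, 2F)
    return enc.flatten(start_dim=-2)                       # (..., D*2F)

# A coordinate MLP: xyz -> RGB (sizes are placeholders).
field = torch.nn.Sequential(
    torch.nn.Linear(3 * 2 * 6, 256), torch.nn.ReLU(),
    torch.nn.Linear(256, 3))

xyz = torch.rand(1024, 3)              # query coordinates
rgb = field(positional_encoding(xyz))  # (1024, 3)
```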
no code implementations • ICCV 2021 • Tong He, Yuanlu Xu, Shunsuke Saito, Stefano Soatto, Tony Tung
We present ARCH++, an image-based method to reconstruct 3D avatars with arbitrary clothing styles.
Tasks: 3D Object Reconstruction From A Single Image, Image-to-Image Translation
no code implementations • CVPR 2021 • Amit Raj, Michael Zollhofer, Tomas Simon, Jason Saragih, Shunsuke Saito, James Hays, Stephen Lombardi
Volumetric models typically employ a global code to represent facial expressions, such that they can be driven by a small set of animation parameters.
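A hedged sketch of this conditioning pattern (layer sizes and names are assumptions, not the paper's model): a single global expression code is concatenated with every query point fed to the volumetric decoder:

```python
import torch

class GlobalCodeField(torch.nn.Module):
    """Volumetric decoder conditioned on one global expression code."""
    def __init__(self, code_dim=128):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(3 + code_dim, 256), torch.nn.ReLU(),
            torch.nn.Linear(256, 4))   # RGB + density per query point

    def forward(self, x, z):           # x: (N, 3) points, z: (code_dim,)
        z = z[None, :].expand(x.shape[0], -1)
        return self.net(torch.cat([x, z], dim=-1))

z_expr = torch.randn(128)              # a small set of animation parameters
out = GlobalCodeField()(torch.rand(4096, 3), z_expr)  # (4096, 4)
```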
1 code implementation • CVPR 2021 • Qianli Ma, Shunsuke Saito, Jinlong Yang, Siyu Tang, Michael J. Black
We demonstrate the efficacy of our surface representation by learning models of complex clothing from point clouds.
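One standard ingredient when fitting surface models to point clouds, shown here as a minimal sketch rather than the paper's full objective, is the symmetric Chamfer distance between a predicted point set and a scan:

```python
import torch

def chamfer(pred, target):
    """pred: (N, 3), target: (M, 3) -> symmetric Chamfer distance."""
    d = torch.cdist(pred, target)                  # (N, M) pairwise distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

loss = chamfer(torch.rand(500, 3), torch.rand(800, 3))
```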
no code implementations • CVPR 2021 • Shunsuke Saito, Jinlong Yang, Qianli Ma, Michael J. Black
We present SCANimate, an end-to-end trainable framework that takes raw 3D scans of a clothed human and turns them into an animatable avatar.
1 code implementation • ECCV 2020 • Ruilong Li, Yuliang Xiu, Shunsuke Saito, Zeng Huang, Kyle Olszewski, Hao Li
We present the first approach to volumetric performance capture and novel-view rendering at real-time speed from monocular video, eliminating the need for expensive multi-view systems or cumbersome pre-acquisition of a personalized template model.
3 code implementations • CVPR 2020 • Shunsuke Saito, Tomas Simon, Jason Saragih, Hanbyul Joo
Although current approaches have demonstrated potential in real-world settings, they still fail to produce reconstructions with the level of detail often present in the input images.
Ranked #1 on 3D Object Reconstruction From A Single Image on BUFF
no code implementations • NeurIPS 2019 • Shichen Liu, Shunsuke Saito, Weikai Chen, Hao Li
The representation of 3D surfaces itself is a key factor for the quality and resolution of the 3D output.
1 code implementation • ICCV 2019 • Shunsuke Saito, Zeng Huang, Ryota Natsume, Shigeo Morishima, Angjoo Kanazawa, Hao Li
We introduce Pixel-aligned Implicit Function (PIFu), a highly effective implicit representation that locally aligns pixels of 2D images with the global context of their corresponding 3D object.
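A minimal sketch of the pixel-aligned query (shapes and the feature extractor are placeholders): sample an image feature at a 3D point's 2D projection, append its depth, and predict inside/outside occupancy with an MLP:

```python
import torch
import torch.nn.functional as F

def pifu_query(feat, xy, z, mlp):
    """feat: (1, C, H, W) image features; xy: (N, 2) projections in [-1, 1];
    z: (N, 1) depth along the camera ray; mlp: maps (C+1) -> occupancy."""
    grid = xy.view(1, 1, -1, 2)                          # (1, 1, N, 2)
    f = F.grid_sample(feat, grid, align_corners=True)    # (1, C, 1, N)
    f = f.view(feat.shape[1], -1).t()                    # (N, C) pixel-aligned
    return torch.sigmoid(mlp(torch.cat([f, z], dim=-1)))

C = 32                                                   # feature channels
mlp = torch.nn.Sequential(torch.nn.Linear(C + 1, 128),
                          torch.nn.ReLU(), torch.nn.Linear(128, 1))
occ = pifu_query(torch.rand(1, C, 64, 64),
                 torch.rand(100, 2) * 2 - 1, torch.rand(100, 1), mlp)
```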
no code implementations • CVPR 2019 • Ryota Natsume, Shunsuke Saito, Zeng Huang, Weikai Chen, Chongyang Ma, Hao Li, Shigeo Morishima
The synthesized silhouettes that are most consistent with the input segmentation are fed into a deep visual hull algorithm for robust 3D shape prediction.
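For reference, the core visual-hull test is simple: a 3D point survives only if it projects inside the silhouette in every view. A minimal sketch with placeholder cameras and masks:

```python
import numpy as np

def in_visual_hull(point, cameras, silhouettes):
    """point: (3,); cameras: list of (3, 4) projection matrices;
    silhouettes: list of (H, W) boolean masks."""
    p_h = np.append(point, 1.0)                    # homogeneous coordinates
    for P, mask in zip(cameras, silhouettes):
        u, v, w = P @ p_h
        x, y = int(round(u / w)), int(round(v / w))
        h, wdt = mask.shape
        if not (0 <= x < wdt and 0 <= y < h and mask[y, x]):
            return False                           # carved away by this view
    return True

cams = [np.hstack([np.eye(3), np.zeros((3, 1))])]  # placeholder camera
masks = [np.ones((480, 640), dtype=bool)]          # placeholder silhouette
print(in_visual_hull(np.array([0.0, 0.0, 2.0]), cams, masks))
```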
no code implementations • CVPR 2018 • Loc Huynh, Weikai Chen, Shunsuke Saito, Jun Xing, Koki Nagano, Andrew Jones, Paul Debevec, Hao Li
We present a learning-based approach for synthesizing facial geometry at medium and fine scales from diffusely lit facial texture maps.
no code implementations • ICCV 2017 • Kyle Olszewski, Zimo Li, Chao Yang, Yi Zhou, Ronald Yu, Zeng Huang, Sitao Xiang, Shunsuke Saito, Pushmeet Kohli, Hao Li
By retargeting the PCA expression geometry from the source, as well as using the newly inferred texture, we can both animate the face and perform video face replacement on the source video using the target appearance.
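A minimal sketch of PCA expression geometry (the basis here is a random placeholder): a face mesh is the mean shape plus a linear combination of expression components, so copying the coefficients from the source performance retargets the expression:

```python
import numpy as np

V, K = 5000, 30                         # vertices, expression components
mean_shape = np.zeros(V * 3)            # placeholder mean face
basis = np.random.randn(V * 3, K)       # placeholder PCA expression basis
coeffs = np.random.randn(K)             # coefficients inferred from the source

verts = (mean_shape + basis @ coeffs).reshape(V, 3)  # retargeted geometry
```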
no code implementations • ICCV 2017 • Ronald Yu, Shunsuke Saito, Haoxiang Li, Duygu Ceylan, Hao Li
To train such a network, we generate a massive dataset of synthetic faces with dense labels using renderings of a morphable face model with variations in pose, expressions, lighting, and occlusions.
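An illustrative generation loop for such a dataset (the renderer is omitted and all parameter sizes are assumptions): sample random shape, expression, pose, and lighting parameters, render, and keep the parameters as dense labels:

```python
import numpy as np

rng = np.random.default_rng(0)
samples = []
for i in range(10):
    params = {
        "shape":    rng.normal(size=50),      # identity coefficients
        "expr":     rng.normal(size=30),      # expression coefficients
        "pose":     rng.uniform(-45, 45, 3),  # yaw/pitch/roll in degrees
        "lighting": rng.normal(size=9),       # e.g. SH lighting coefficients
    }
    # image = render(params)   # placeholder renderer, not shown
    samples.append(params)     # the parameters double as training labels
```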
1 code implementation • CVPR 2017 • Shunsuke Saito, Lingyu Wei, Liwen Hu, Koki Nagano, Hao Li
We present a data-driven inference method that can synthesize a photorealistic texture map of a complete 3D face model given a partial 2D view of a person in the wild.
1 code implementation • 21 Sep 2016 • Samuli Laine, Tero Karras, Timo Aila, Antti Herva, Shunsuke Saito, Ronald Yu, Hao Li, Jaakko Lehtinen
We present a real-time deep learning framework for video-based facial performance capture: the dense 3D tracking of an actor's face given a monocular video.
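A hedged sketch of the regression setup (the architecture is a placeholder, not the paper's network): a convolutional network maps a single video frame to dense per-vertex 3D positions of a face mesh:

```python
import torch

V = 5000                                          # mesh vertex count
net = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, 5, stride=4), torch.nn.ReLU(),
    torch.nn.Conv2d(16, 32, 5, stride=4), torch.nn.ReLU(),
    torch.nn.Flatten(),
    torch.nn.LazyLinear(V * 3))                   # frame -> vertex coordinates

frame = torch.rand(1, 3, 240, 320)                # one monocular video frame
verts = net(frame).view(V, 3)                     # dense 3D face tracking
```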
no code implementations • 10 Apr 2016 • Shunsuke Saito, Tianye Li, Hao Li
We adopt a state-of-the-art regression-based facial tracking framework trained on segmented face images, and demonstrate accurate and uninterrupted facial performance capture in the presence of extreme occlusion and even side views.