Search Results for author: Jason Saragih

Found 18 papers, 3 papers with code

LiP-Flow: Learning Inference-time Priors for Codec Avatars via Normalizing Flows in Latent Space

no code implementations15 Mar 2022 Emre Aksan, Shugao Ma, Akin Caliskan, Stanislav Pidhorskyi, Alexander Richard, Shih-En Wei, Jason Saragih, Otmar Hilliges

To mitigate this asymmetry, we introduce a prior model that is conditioned on the runtime inputs and tie this prior space to the 3D face model via a normalizing flow in the latent space.

Face Model
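The snippet above describes a runtime-conditioned prior tied to the face model's latent space via a normalizing flow. A minimal sketch of that idea, using a single conditional affine flow step in NumPy, is below; all dimensions, weight matrices, and names here are invented for illustration — the paper's flow is a learned, deeper model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy sizes: the face model's latent dimension and the dimension of
# the runtime conditioning signal (e.g. headset sensor features).
LATENT_DIM, COND_DIM = 8, 4

# Toy "conditioning network": linear maps from runtime inputs to the
# shift/scale parameters of one affine flow step.
W_mu = rng.normal(size=(COND_DIM, LATENT_DIM)) * 0.1
W_log_s = rng.normal(size=(COND_DIM, LATENT_DIM)) * 0.1

def flow_forward(eps, cond):
    """Map base noise eps into the latent space, conditioned on runtime
    inputs. Returns the latent code and the log |det Jacobian|."""
    mu = cond @ W_mu
    log_s = cond @ W_log_s
    z = mu + np.exp(log_s) * eps
    return z, log_s.sum(axis=-1)

def prior_log_prob(z, cond):
    """Log-density of z under the conditional prior, via the inverse flow."""
    mu = cond @ W_mu
    log_s = cond @ W_log_s
    eps = (z - mu) * np.exp(-log_s)
    log_base = -0.5 * (eps ** 2 + np.log(2 * np.pi)).sum(axis=-1)
    return log_base - log_s.sum(axis=-1)

cond = rng.normal(size=(1, COND_DIM))
eps = rng.normal(size=(1, LATENT_DIM))
z, log_det = flow_forward(eps, cond)
lp = prior_log_prob(z, cond)
```

Because the affine step is invertible in closed form, the same parameters give both sampling (forward) and density evaluation (inverse plus change of variables).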

Pixel-Aligned Volumetric Avatars

no code implementations CVPR 2021 Amit Raj, Michael Zollhofer, Tomas Simon, Jason Saragih, Shunsuke Saito, James Hays, Stephen Lombardi

Volumetric models typically employ a global code to represent facial expressions, such that they can be driven by a small set of animation parameters.

Driving-Signal Aware Full-Body Avatars

no code implementations21 May 2021 Timur Bagautdinov, Chenglei Wu, Tomas Simon, Fabian Prada, Takaaki Shiratori, Shih-En Wei, Weipeng Xu, Yaser Sheikh, Jason Saragih

The core intuition behind our method is that better drivability and generalization can be achieved by disentangling the driving signals and remaining generative factors, which are not available during animation.

Imputation

Robust Egocentric Photo-realistic Facial Expression Transfer for Virtual Reality

no code implementations10 Apr 2021 Amin Jourabloo, Fernando de la Torre, Jason Saragih, Shih-En Wei, Te-Li Wang, Stephen Lombardi, Danielle Belko, Autumn Trimble, Hernan Badino

Social presence, the feeling of being there with a real person, will fuel the next generation of communication systems driven by digital humans in virtual reality (VR).

Pixel Codec Avatars

no code implementations CVPR 2021 Shugao Ma, Tomas Simon, Jason Saragih, Dawei Wang, Yuecheng Li, Fernando de la Torre, Yaser Sheikh

Telecommunication with photorealistic avatars in virtual or augmented reality is a promising path for achieving authentic face-to-face communication in 3D over remote physical distances.

SimPoE: Simulated Character Control for 3D Human Pose Estimation

no code implementations CVPR 2021 Ye Yuan, Shih-En Wei, Tomas Simon, Kris Kitani, Jason Saragih

Based on this refined kinematic pose, the policy learns to compute dynamics-based control (e.g., joint torques) of the character to advance the current-frame pose estimate to the pose estimate of the next frame.

3D Human Pose Estimation, Frame
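The snippet above mentions computing joint torques to advance a pose toward the next-frame estimate. A toy sketch of that control loop — a hand-tuned PD controller on a single 1-DoF joint, not the paper's learned policy — might look like this; gains, time step, and inertia are assumed constants.

```python
import numpy as np

# Assumed toy constants: PD gains, a 30 fps time step, unit inertia.
KP, KD, DT, INERTIA = 50.0, 5.0, 1.0 / 30.0, 1.0

def pd_torque(q, qdot, q_target):
    """Dynamics-based control: torque from pose error and joint velocity."""
    return KP * (q_target - q) - KD * qdot

def step(q, qdot, tau):
    """Advance the joint one frame with semi-implicit Euler integration."""
    qdot = qdot + (tau / INERTIA) * DT
    q = q + qdot * DT
    return q, qdot

q, qdot = 0.0, 0.0
q_target = 0.5  # next-frame pose estimate, in radians
for _ in range(200):  # simulate ~6.7 s; the joint settles at the target
    q, qdot = step(q, qdot, pd_torque(q, qdot, q_target))
```

In SimPoE the mapping from pose error to control is learned rather than fixed gains, but the loop structure — torque in, integrated dynamics out — is the same shape.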

High-fidelity Face Tracking for AR/VR via Deep Lighting Adaptation

no code implementations CVPR 2021 Lele Chen, Chen Cao, Fernando de la Torre, Jason Saragih, Chenliang Xu, Yaser Sheikh

This paper addresses previous limitations by learning a deep lighting model that, in combination with a high-quality 3D face tracking algorithm, provides a method for subtle and robust facial motion transfer from a regular video to a 3D photo-realistic avatar.

Mixture of Volumetric Primitives for Efficient Neural Rendering

no code implementations2 Mar 2021 Stephen Lombardi, Tomas Simon, Gabriel Schwartz, Michael Zollhoefer, Yaser Sheikh, Jason Saragih

Real-time rendering and animation of humans is a core function in games, movies, and telepresence applications.

Neural Rendering

PVA: Pixel-aligned Volumetric Avatars

no code implementations7 Jan 2021 Amit Raj, Michael Zollhoefer, Tomas Simon, Jason Saragih, Shunsuke Saito, James Hays, Stephen Lombardi

Volumetric models typically employ a global code to represent facial expressions, such that they can be driven by a small set of animation parameters.

State of the Art on Neural Rendering

no code implementations8 Apr 2020 Ayush Tewari, Ohad Fried, Justus Thies, Vincent Sitzmann, Stephen Lombardi, Kalyan Sunkavalli, Ricardo Martin-Brualla, Tomas Simon, Jason Saragih, Matthias Nießner, Rohit Pandey, Sean Fanello, Gordon Wetzstein, Jun-Yan Zhu, Christian Theobalt, Maneesh Agrawala, Eli Shechtman, Dan B. Goldman, Michael Zollhöfer

Neural rendering is a new and rapidly emerging field that combines generative machine learning techniques with physical knowledge from computer graphics, e.g., by the integration of differentiable rendering into network training.

Image Generation, Neural Rendering +1
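The snippet above points to differentiable rendering inside network training. A deliberately tiny, purely didactic version of that idea: render a 1-D "point" as a soft Gaussian splat over pixels, then optimize its position by gradient descent against a target image, with the gradient computed analytically through the renderer. All sizes and constants are assumptions for the sketch.

```python
import numpy as np

PIX = np.arange(32, dtype=float)  # assumed 32-pixel 1-D "image"
SIGMA = 1.5                       # assumed splat width

def render(x):
    """Soft splat: image intensity as a Gaussian around position x."""
    return np.exp(-0.5 * ((PIX - x) / SIGMA) ** 2)

def loss_and_grad(x, target):
    img = render(x)
    diff = img - target
    # Analytic derivative of the renderer: d img / d x
    dimg_dx = img * (PIX - x) / SIGMA ** 2
    return 0.5 * (diff ** 2).sum(), (diff * dimg_dx).sum()

target = render(20.0)   # ground-truth point at pixel 20
x = 15.0                # initial guess
for _ in range(300):
    loss, g = loss_and_grad(x, target)
    x -= 0.5 * g        # gradient step flows through the renderer
```

The soft (Gaussian) splat is what makes the pipeline differentiable: a hard, one-pixel rasterization would give zero gradient almost everywhere.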

PIFuHD: Multi-Level Pixel-Aligned Implicit Function for High-Resolution 3D Human Digitization

3 code implementations CVPR 2020 Shunsuke Saito, Tomas Simon, Jason Saragih, Hanbyul Joo

Although current approaches have demonstrated their potential in real-world settings, they still fail to produce reconstructions with the level of detail often present in the input images.

3D Human Pose Estimation, 3D Human Reconstruction +3

Neural Volumes: Learning Dynamic Renderable Volumes from Images

1 code implementation18 Jun 2019 Stephen Lombardi, Tomas Simon, Jason Saragih, Gabriel Schwartz, Andreas Lehrmann, Yaser Sheikh

Modeling and rendering of dynamic scenes is challenging, as natural scenes often contain complex phenomena such as thin structures, evolving topology, translucency, scattering, occlusion, and biological motion.

LBS Autoencoder: Self-supervised Fitting of Articulated Meshes to Point Clouds

no code implementations CVPR 2019 Chun-Liang Li, Tomas Simon, Jason Saragih, Barnabás Póczos, Yaser Sheikh

As input, we take a sequence of point clouds to be registered as well as an artist-rigged mesh, i.e., a template mesh equipped with a linear-blend skinning (LBS) deformation space parameterized by a skeleton hierarchy.
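The snippet above assumes a linear-blend skinning (LBS) deformation space. A minimal NumPy sketch of LBS itself — each vertex deformed by a weighted blend of homogeneous bone transforms — is below; the bones, weights, and vertices are toy values, not fitted to point clouds as in the paper.

```python
import numpy as np

def lbs(vertices, weights, transforms):
    """Linear-blend skinning.
    vertices: (V, 3); weights: (V, B), rows sum to 1; transforms: (B, 4, 4).
    """
    V = vertices.shape[0]
    homo = np.concatenate([vertices, np.ones((V, 1))], axis=1)   # (V, 4)
    # Blend the full 4x4 transform per vertex, then apply it.
    blended = np.einsum('vb,bij->vij', weights, transforms)      # (V, 4, 4)
    out = np.einsum('vij,vj->vi', blended, homo)                 # (V, 4)
    return out[:, :3]

# Two toy bones: identity, and a +1 translation along x.
T = np.stack([np.eye(4), np.eye(4)])
T[1, 0, 3] = 1.0
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
w = np.array([[1.0, 0.0],   # fully bound to bone 0 -> stays put
              [0.5, 0.5]])  # half-half -> moves +0.5 in x
skinned = lbs(verts, w, T)
```

The per-vertex weight matrix is exactly the quantity an artist paints when rigging the template mesh; the paper's autoencoder fits poses (and corrective deformations) within this space.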

Image Disentanglement and Uncooperative Re-Entanglement for High-Fidelity Image-to-Image Translation

no code implementations11 Jan 2019 Adam W. Harley, Shih-En Wei, Jason Saragih, Katerina Fragkiadaki

Cross-domain image-to-image translation should satisfy two requirements: (1) preserve the information that is common to both domains, and (2) generate convincing images covering variations that appear in the target domain.

Disentanglement, Image-to-Image Translation +1

Deep Appearance Models for Face Rendering

no code implementations1 Aug 2018 Stephen Lombardi, Jason Saragih, Tomas Simon, Yaser Sheikh

At inference time, we condition the decoding network on the viewpoint of the camera in order to generate the appropriate texture for rendering.
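The snippet above describes conditioning the decoder on the camera viewpoint to produce view-dependent texture. A toy stand-in for that pattern — concatenate an expression latent with a viewing direction and decode a tiny texture — is sketched below; every size, weight, and name here is an assumption, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed toy sizes: expression latent, view direction, 8x8 RGB texture.
LATENT, VIEW, TEX = 16, 3, 8 * 8 * 3

# Random, untrained weights for a two-layer decoder (illustration only).
W1 = rng.normal(size=(LATENT + VIEW, 64)) * 0.1
W2 = rng.normal(size=(64, TEX)) * 0.1

def decode(z, view_dir):
    """Condition the decoder on the (normalized) viewing direction."""
    v = view_dir / np.linalg.norm(view_dir)
    h = np.tanh(np.concatenate([z, v]) @ W1)
    return (h @ W2).reshape(8, 8, 3)   # view-dependent texture

z = rng.normal(size=LATENT)
tex_front = decode(z, np.array([0.0, 0.0, 1.0]))
tex_side = decode(z, np.array([1.0, 0.0, 0.2]))
```

Because the view direction enters the decoder as an input, the same latent code yields different textures per camera — which is what lets a trained model bake view-dependent effects such as specularity into the output.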

Modeling Facial Geometry Using Compositional VAEs

no code implementations CVPR 2018 Timur Bagautdinov, Chenglei Wu, Jason Saragih, Pascal Fua, Yaser Sheikh

We propose a method for learning non-linear face geometry representations using deep generative models.
