Search Results for author: Jason Saragih

Found 26 papers, 8 papers with code

Modeling Facial Geometry Using Compositional VAEs

no code implementations CVPR 2018 Timur Bagautdinov, Chenglei Wu, Jason Saragih, Pascal Fua, Yaser Sheikh

We propose a method for learning non-linear face geometry representations using deep generative models.

Deep Appearance Models for Face Rendering

1 code implementation • 1 Aug 2018 Stephen Lombardi, Jason Saragih, Tomas Simon, Yaser Sheikh

At inference time, we condition the decoding network on the viewpoint of the camera in order to generate the appropriate texture for rendering.
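
The view conditioning described above can be pictured as a decoder that takes both a latent appearance code and the camera's viewing direction. A minimal PyTorch sketch, with illustrative layer sizes that are not the paper's actual architecture:

    import torch
    import torch.nn as nn

    class ViewConditionedDecoder(nn.Module):
        """Toy decoder mapping a latent code plus a camera viewpoint to a texture.
        Dimensions and layer choices are illustrative only."""
        def __init__(self, latent_dim=128, view_dim=3, tex_res=64):
            super().__init__()
            self.tex_res = tex_res
            self.net = nn.Sequential(
                nn.Linear(latent_dim + view_dim, 256),
                nn.ReLU(),
                nn.Linear(256, 3 * tex_res * tex_res),
                nn.Sigmoid(),  # RGB in [0, 1]
            )

        def forward(self, z, view_dir):
            # Concatenating the viewing direction with the latent code lets the
            # predicted texture contain view-dependent appearance.
            x = torch.cat([z, view_dir], dim=-1)
            tex = self.net(x)
            return tex.view(-1, 3, self.tex_res, self.tex_res)

    decoder = ViewConditionedDecoder()
    z = torch.randn(1, 128)                  # latent appearance code
    view = torch.tensor([[0.0, 0.0, 1.0]])   # camera viewing direction
    texture = decoder(z, view)               # (1, 3, 64, 64)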

Image Disentanglement and Uncooperative Re-Entanglement for High-Fidelity Image-to-Image Translation

no code implementations • 11 Jan 2019 Adam W. Harley, Shih-En Wei, Jason Saragih, Katerina Fragkiadaki

Cross-domain image-to-image translation should satisfy two requirements: (1) preserve the information that is common to both domains, and (2) generate convincing images covering variations that appear in the target domain.

Disentanglement Image-to-Image Translation +1

LBS Autoencoder: Self-supervised Fitting of Articulated Meshes to Point Clouds

no code implementations CVPR 2019 Chun-Liang Li, Tomas Simon, Jason Saragih, Barnabás Póczos, Yaser Sheikh

As input, we take a sequence of point clouds to be registered as well as an artist-rigged mesh, i.e., a template mesh equipped with a linear-blend skinning (LBS) deformation space parameterized by a skeleton hierarchy.
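
Linear-blend skinning itself has a standard closed form: each template vertex is deformed by a weighted blend of per-joint rigid transforms, v' = sum_j w_j T_j v. A minimal NumPy sketch of that deformation (the function name and 4x4-matrix convention are illustrative, not the paper's code):

    import numpy as np

    def linear_blend_skinning(vertices, weights, joint_transforms):
        """Deform template vertices with linear-blend skinning (LBS).

        vertices:         (V, 3) rest-pose vertex positions
        weights:          (V, J) skinning weights, rows sum to 1
        joint_transforms: (J, 4, 4) rigid transforms of each joint w.r.t. rest pose
        """
        V = vertices.shape[0]
        homo = np.concatenate([vertices, np.ones((V, 1))], axis=1)   # (V, 4)
        # Per-joint transformed positions: (J, V, 4)
        per_joint = np.einsum('jab,vb->jva', joint_transforms, homo)
        # Blend by skinning weights: v' = sum_j w_vj * (T_j v)
        blended = np.einsum('vj,jva->va', weights, per_joint)
        return blended[:, :3]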

Neural Volumes: Learning Dynamic Renderable Volumes from Images

1 code implementation • 18 Jun 2019 Stephen Lombardi, Tomas Simon, Jason Saragih, Gabriel Schwartz, Andreas Lehrmann, Yaser Sheikh

Modeling and rendering of dynamic scenes is challenging, as natural scenes often contain complex phenomena such as thin structures, evolving topology, translucency, scattering, occlusion, and biological motion.

PIFuHD: Multi-Level Pixel-Aligned Implicit Function for High-Resolution 3D Human Digitization

3 code implementations CVPR 2020 Shunsuke Saito, Tomas Simon, Jason Saragih, Hanbyul Joo

Although current approaches have demonstrated the potential in real world settings, they still fail to produce reconstructions with the level of detail often present in the input images.

3D Human Pose Estimation 3D Human Reconstruction +3

State of the Art on Neural Rendering

no code implementations • 8 Apr 2020 Ayush Tewari, Ohad Fried, Justus Thies, Vincent Sitzmann, Stephen Lombardi, Kalyan Sunkavalli, Ricardo Martin-Brualla, Tomas Simon, Jason Saragih, Matthias Nießner, Rohit Pandey, Sean Fanello, Gordon Wetzstein, Jun-Yan Zhu, Christian Theobalt, Maneesh Agrawala, Eli Shechtman, Dan B. Goldman, Michael Zollhöfer

Neural rendering is a new and rapidly emerging field that combines generative machine learning techniques with physical knowledge from computer graphics, e.g., by the integration of differentiable rendering into network training.

BIG-bench Machine Learning Image Generation +2
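
One concrete example of differentiable rendering inside network training is emission-absorption compositing along a ray, as used by volumetric approaches: every operation is differentiable, so an image loss can update whatever network predicts the densities and colors. A generic PyTorch sketch, not the formulation of any specific paper listed here:

    import torch

    def composite_ray(densities, colors, deltas):
        """Differentiable emission-absorption compositing along a single ray.

        densities: (S,)   non-negative volume densities at S ray samples
        colors:    (S, 3) RGB at each sample
        deltas:    (S,)   spacing between consecutive samples
        """
        alpha = 1.0 - torch.exp(-densities * deltas)    # per-sample opacity
        # Transmittance: probability the ray reaches sample i unabsorbed.
        trans = torch.cumprod(torch.cat([torch.ones(1), 1.0 - alpha[:-1]]), dim=0)
        weights = alpha * trans
        return (weights[:, None] * colors).sum(dim=0)

    # Every op above is differentiable, so gradients flow back to the inputs.
    densities = torch.rand(32, requires_grad=True)
    colors = torch.rand(32, 3, requires_grad=True)
    deltas = torch.full((32,), 0.05)
    loss = (composite_ray(densities, colors, deltas) - torch.tensor([1.0, 0.0, 0.0])).pow(2).sum()
    loss.backward()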

Learning Compositional Radiance Fields of Dynamic Human Heads

1 code implementation CVPR 2021 Ziyan Wang, Timur Bagautdinov, Stephen Lombardi, Tomas Simon, Jason Saragih, Jessica Hodgins, Michael Zollhöfer

In addition, we show that the learned dynamic radiance field can be used to synthesize novel unseen expressions based on a global animation code.

Neural Rendering Synthetic Data Generation
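
Conditioning a radiance field on a global animation code can be sketched as an MLP that takes a query point together with the code and returns density and color; swapping in a new code changes the expression the whole field renders. The widths and conditioning scheme below are illustrative assumptions, not the paper's architecture:

    import torch
    import torch.nn as nn

    class ConditionedRadianceField(nn.Module):
        """Toy radiance field conditioned on a global animation code."""
        def __init__(self, code_dim=16, hidden=128):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(3 + code_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 4),   # density + RGB
            )

        def forward(self, xyz, code):
            # The same animation code is broadcast to every query point, so one
            # code value controls the expression rendered by the whole field.
            code = code.expand(xyz.shape[0], -1)
            out = self.mlp(torch.cat([xyz, code], dim=-1))
            density = torch.relu(out[:, :1])
            rgb = torch.sigmoid(out[:, 1:])
            return density, rgb

    field = ConditionedRadianceField()
    points = torch.rand(1024, 3)        # query positions
    code = torch.randn(1, 16)           # a "novel expression" code
    density, rgb = field(points, code)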

PVA: Pixel-aligned Volumetric Avatars

no code implementations • 7 Jan 2021 Amit Raj, Michael Zollhoefer, Tomas Simon, Jason Saragih, Shunsuke Saito, James Hays, Stephen Lombardi

Volumetric models typically employ a global code to represent facial expressions, such that they can be driven by a small set of animation parameters.

Mixture of Volumetric Primitives for Efficient Neural Rendering

1 code implementation • 2 Mar 2021 Stephen Lombardi, Tomas Simon, Gabriel Schwartz, Michael Zollhoefer, Yaser Sheikh, Jason Saragih

Real-time rendering and animation of humans is a core function in games, movies, and telepresence applications.

Neural Rendering

High-fidelity Face Tracking for AR/VR via Deep Lighting Adaptation

no code implementations CVPR 2021 Lele Chen, Chen Cao, Fernando de la Torre, Jason Saragih, Chenliang Xu, Yaser Sheikh

This paper addresses previous limitations by learning a deep learning lighting model, that in combination with a high-quality 3D face tracking algorithm, provides a method for subtle and robust facial motion transfer from a regular video to a 3D photo-realistic avatar.

Vocal Bursts Intensity Prediction

SimPoE: Simulated Character Control for 3D Human Pose Estimation

no code implementations CVPR 2021 Ye Yuan, Shih-En Wei, Tomas Simon, Kris Kitani, Jason Saragih

Based on this refined kinematic pose, the policy learns to compute dynamics-based control (e.g., joint torques) of the character to advance the current-frame pose estimate to the pose estimate of the next frame.

3D Human Pose Estimation
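
The control loop described above (a policy turning the current simulation state and the next kinematic pose estimate into joint torques, which a physics simulator then applies) can be sketched as follows. The network sizes and the physics_step placeholder are assumptions for illustration, not SimPoE's implementation:

    import torch
    import torch.nn as nn

    class TorquePolicy(nn.Module):
        """Toy control policy: given the simulated character state and the
        kinematic pose estimate for the next frame, output joint torques."""
        def __init__(self, state_dim=64, pose_dim=63, act_dim=23):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(state_dim + pose_dim, 256), nn.Tanh(),
                nn.Linear(256, act_dim),
            )

        def forward(self, sim_state, target_pose):
            return self.net(torch.cat([sim_state, target_pose], dim=-1))

    def physics_step(sim_state, torques):
        # Placeholder for an articulated rigid-body simulator update; here it
        # merely perturbs the state so the loop below runs end to end.
        return sim_state + 0.01 * torques.mean(dim=-1, keepdim=True)

    policy = TorquePolicy()
    sim_state = torch.zeros(1, 64)
    for target_pose in torch.randn(10, 1, 63):       # kinematic estimates per frame
        torques = policy(sim_state, target_pose)
        sim_state = physics_step(sim_state, torques)  # dynamics advance the character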

Pixel Codec Avatars

1 code implementation CVPR 2021 Shugao Ma, Tomas Simon, Jason Saragih, Dawei Wang, Yuecheng Li, Fernando de la Torre, Yaser Sheikh

Telecommunication with photorealistic avatars in virtual or augmented reality is a promising path for achieving authentic face-to-face communication in 3D over remote physical distances.

Robust Egocentric Photo-realistic Facial Expression Transfer for Virtual Reality

no code implementations CVPR 2022 Amin Jourabloo, Baris Gecer, Fernando de la Torre, Jason Saragih, Shih-En Wei, Te-Li Wang, Stephen Lombardi, Danielle Belko, Autumn Trimble, Hernan Badino

Social presence, the feeling of being there with a real person, will fuel the next generation of communication systems driven by digital humans in virtual reality (VR).

Driving-Signal Aware Full-Body Avatars

no code implementations • 21 May 2021 Timur Bagautdinov, Chenglei Wu, Tomas Simon, Fabian Prada, Takaaki Shiratori, Shih-En Wei, Weipeng Xu, Yaser Sheikh, Jason Saragih

The core intuition behind our method is that better drivability and generalization can be achieved by disentangling the driving signals and remaining generative factors, which are not available during animation.

Imputation

Pixel-Aligned Volumetric Avatars

no code implementations CVPR 2021 Amit Raj, Michael Zollhofer, Tomas Simon, Jason Saragih, Shunsuke Saito, James Hays, Stephen Lombardi

Volumetric models typically employ a global code to represent facial expressions, such that they can be driven by a small set of animation parameters.

Generalizable Novel View Synthesis

LiP-Flow: Learning Inference-time Priors for Codec Avatars via Normalizing Flows in Latent Space

no code implementations • 15 Mar 2022 Emre Aksan, Shugao Ma, Akin Caliskan, Stanislav Pidhorskyi, Alexander Richard, Shih-En Wei, Jason Saragih, Otmar Hilliges

To mitigate this asymmetry, we introduce a prior model that is conditioned on the runtime inputs and tie this prior space to the 3D face model via a normalizing flow in the latent space.

Face Model
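
The idea of a runtime-conditioned prior tied to the latent space can be sketched with a single conditional affine (change-of-variables) layer; a real normalizing flow stacks several such layers. The names, dimensions, and single-layer flow below are illustrative assumptions, not LiP-Flow's model:

    import math
    import torch
    import torch.nn as nn

    class ConditionalAffinePrior(nn.Module):
        """Toy conditioned prior: one invertible affine map whose scale and
        shift are predicted from the runtime input."""
        def __init__(self, latent_dim=32, cond_dim=64):
            super().__init__()
            self.cond_net = nn.Linear(cond_dim, 2 * latent_dim)

        def log_prob(self, z, cond):
            log_scale, shift = self.cond_net(cond).chunk(2, dim=-1)
            u = (z - shift) * torch.exp(-log_scale)          # map to the base space
            log_det = -log_scale.sum(dim=-1)                 # change-of-variables term
            base = -0.5 * (u ** 2 + math.log(2 * math.pi))   # standard-normal base density
            return base.sum(dim=-1) + log_det

    prior = ConditionalAffinePrior()
    z = torch.randn(4, 32)      # latent codes of the face model
    cond = torch.randn(4, 64)   # features available at runtime (e.g. headset images)
    print(prior.log_prob(z, cond))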

Drivable Volumetric Avatars using Texel-Aligned Features

no code implementations • 20 Jul 2022 Edoardo Remelli, Timur Bagautdinov, Shunsuke Saito, Tomas Simon, Chenglei Wu, Shih-En Wei, Kaiwen Guo, Zhe Cao, Fabian Prada, Jason Saragih, Yaser Sheikh

To circumvent this, we propose a novel volumetric avatar representation by extending mixtures of volumetric primitives to articulated objects.

MEGANE: Morphable Eyeglass and Avatar Network

no code implementations CVPR 2023 Junxuan Li, Shunsuke Saito, Tomas Simon, Stephen Lombardi, Hongdong Li, Jason Saragih

However, modeling the geometric and appearance interactions of glasses and the face of virtual representations of humans is challenging.

Image Generation Inverse Rendering

RelightableHands: Efficient Neural Relighting of Articulated Hand Models

no code implementations CVPR 2023 Shun Iwase, Shunsuke Saito, Tomas Simon, Stephen Lombardi, Timur Bagautdinov, Rohan Joshi, Fabian Prada, Takaaki Shiratori, Yaser Sheikh, Jason Saragih

To achieve generalization, we condition the student model with physics-inspired illumination features such as visibility, diffuse shading, and specular reflections computed on a coarse proxy geometry, maintaining a small computational overhead.
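
The kind of physics-inspired illumination features mentioned here (visibility, diffuse shading, specular reflections on a coarse proxy) can be sketched per surface point with standard Lambertian and Blinn-Phong terms. The exact feature definitions below are an assumption, not necessarily the paper's:

    import numpy as np

    def illumination_features(normals, view_dirs, light_dir, visibility, shininess=32.0):
        """Per-point illumination features on a coarse proxy mesh.

        normals:    (N, 3) unit surface normals
        view_dirs:  (N, 3) unit directions from surface points to the camera
        light_dir:  (3,)   unit direction toward a distant light
        visibility: (N,)   1 if the light is unoccluded at the point, else 0
        """
        diffuse = np.clip(normals @ light_dir, 0.0, None)               # Lambertian N.L
        half = view_dirs + light_dir                                    # Blinn-Phong half vector
        half /= np.linalg.norm(half, axis=-1, keepdims=True)
        specular = np.clip((normals * half).sum(-1), 0.0, None) ** shininess
        return np.stack([visibility, visibility * diffuse, visibility * specular], axis=-1)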

Auto-CARD: Efficient and Robust Codec Avatar Driving for Real-time Mobile Telepresence

no code implementations CVPR 2023 Yonggan Fu, Yuecheng Li, Chenghui Li, Jason Saragih, Peizhao Zhang, Xiaoliang Dai, Yingyan Lin

Real-time and robust photorealistic avatars have been highly desired for enabling immersive telepresence in AR/VR.

Neural Architecture Search

A Local Appearance Model for Volumetric Capture of Diverse Hairstyles

no code implementations • 14 Dec 2023 Ziyan Wang, Giljoo Nam, Aljaz Bozic, Chen Cao, Jason Saragih, Michael Zollhoefer, Jessica Hodgins

In this paper, we present a novel method for creating high-fidelity avatars with diverse hairstyles.

Fast Registration of Photorealistic Avatars for VR Facial Animation

no code implementations • 19 Jan 2024 Chaitanya Patel, Shaojie Bai, Te-Li Wang, Jason Saragih, Shih-En Wei

In this work, we first show that the domain gap between the avatar and headset-camera images is one of the primary sources of difficulty, where a transformer-based architecture achieves high accuracy on domain-consistent data, but degrades when the domain-gap is re-introduced.

Style Transfer
