Search Results for author: Timur Bagautdinov

Found 21 papers, 5 papers with code

From Audio to Photoreal Embodiment: Synthesizing Humans in Conversations

1 code implementation • 3 Jan 2024 • Evonne Ng, Javier Romero, Timur Bagautdinov, Shaojie Bai, Trevor Darrell, Angjoo Kanazawa, Alexander Richard

We present a framework for generating full-bodied photorealistic avatars that gesture according to the conversational dynamics of a dyadic interaction.

Quantization

Drivable 3D Gaussian Avatars

no code implementations • 14 Nov 2023 • Wojciech Zielonka, Timur Bagautdinov, Shunsuke Saito, Michael Zollhöfer, Justus Thies, Javier Romero

We present Drivable 3D Gaussian Avatars (D3GA), the first 3D controllable model for human bodies rendered with Gaussian splats.

NPC: Neural Point Characters from Video

no code implementations • ICCV 2023 • Shih-Yang Su, Timur Bagautdinov, Helge Rhodin

Previous methods avoid using a template but rely on a costly or ill-posed mapping from observation to canonical space.

RelightableHands: Efficient Neural Relighting of Articulated Hand Models

no code implementations • CVPR 2023 • Shun Iwase, Shunsuke Saito, Tomas Simon, Stephen Lombardi, Timur Bagautdinov, Rohan Joshi, Fabian Prada, Takaaki Shiratori, Yaser Sheikh, Jason Saragih

To achieve generalization, we condition the student model with physics-inspired illumination features such as visibility, diffuse shading, and specular reflections computed on a coarse proxy geometry, maintaining a small computational overhead.

Drivable Volumetric Avatars using Texel-Aligned Features

no code implementations • 20 Jul 2022 • Edoardo Remelli, Timur Bagautdinov, Shunsuke Saito, Tomas Simon, Chenglei Wu, Shih-En Wei, Kaiwen Guo, Zhe Cao, Fabian Prada, Jason Saragih, Yaser Sheikh

To circumvent this, we propose a novel volumetric avatar representation by extending mixtures of volumetric primitives to articulated objects.

Dressing Avatars: Deep Photorealistic Appearance for Physically Simulated Clothing

no code implementations • 30 Jun 2022 • Donglai Xiang, Timur Bagautdinov, Tuur Stuyck, Fabian Prada, Javier Romero, Weipeng Xu, Shunsuke Saito, Jingfan Guo, Breannan Smith, Takaaki Shiratori, Yaser Sheikh, Jessica Hodgins, Chenglei Wu

The key idea is to introduce a neural clothing appearance model that operates on top of explicit geometry: at training time we use high-fidelity tracking, whereas at animation time we rely on physically simulated geometry.

DANBO: Disentangled Articulated Neural Body Representations via Graph Neural Networks

no code implementations • 3 May 2022 • Shih-Yang Su, Timur Bagautdinov, Helge Rhodin

While a few such approaches exist, they have limited generalization capabilities and are prone to learning spurious (chance) correlations between irrelevant body parts, resulting in implausible deformations and missing body parts on unseen poses.

Image Generation

Modeling Clothing as a Separate Layer for an Animatable Human Avatar

no code implementations • 28 Jun 2021 • Donglai Xiang, Fabian Prada, Timur Bagautdinov, Weipeng Xu, Yuan Dong, He Wen, Jessica Hodgins, Chenglei Wu

To address these difficulties, we propose a method to build an animatable clothed body avatar with an explicit representation of the clothing on the upper body from multi-view captured videos.

Inverse Rendering

DeepMesh: Differentiable Iso-Surface Extraction

no code implementations • 20 Jun 2021 • Benoit Guillard, Edoardo Remelli, Artem Lukoianov, Stephan R. Richter, Timur Bagautdinov, Pierre Baque, Pascal Fua

Our key insight is that by reasoning on how implicit field perturbations impact local surface geometry, one can ultimately differentiate the 3D location of surface samples with respect to the underlying deep implicit field.

3D Reconstruction • Single-View 3D Reconstruction

Driving-Signal Aware Full-Body Avatars

no code implementations • 21 May 2021 • Timur Bagautdinov, Chenglei Wu, Tomas Simon, Fabian Prada, Takaaki Shiratori, Shih-En Wei, Weipeng Xu, Yaser Sheikh, Jason Saragih

The core intuition behind our method is that better drivability and generalization can be achieved by disentangling the driving signals and remaining generative factors, which are not available during animation.

Imputation

Learning Compositional Radiance Fields of Dynamic Human Heads

1 code implementation • CVPR 2021 • Ziyan Wang, Timur Bagautdinov, Stephen Lombardi, Tomas Simon, Jason Saragih, Jessica Hodgins, Michael Zollhöfer

In addition, we show that the learned dynamic radiance field can be used to synthesize novel unseen expressions based on a global animation code.

Neural Rendering • Synthetic Data Generation

Masksembles for Uncertainty Estimation

3 code implementations • CVPR 2021 • Nikita Durasov, Timur Bagautdinov, Pierre Baque, Pascal Fua

Our central intuition is that there is a continuous spectrum of ensemble-like models of which MC-Dropout and Deep Ensembles are extreme examples.

Classifier calibration • Ensemble Learning +3
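That spectrum can be illustrated with the mask-generation step alone. The sketch below is an illustrative reimplementation, not the authors' released code; it shows how the overlap between fixed binary masks interpolates between dropout-like and ensemble-like behavior:

```python
import numpy as np

def make_masks(n_masks: int, n_active: int, total: int, seed: int = 0):
    """Sample n_masks binary masks, each keeping n_active of `total`
    features. When total == n_active every mask is all-ones (one shared
    model, the MC-Dropout-like extreme); when total >= n_masks * n_active
    the masks can be fully disjoint (the Deep-Ensembles-like extreme);
    values in between give the partially overlapping regime."""
    rng = np.random.default_rng(seed)
    masks = np.zeros((n_masks, total), dtype=np.float32)
    for i in range(n_masks):
        keep = rng.choice(total, size=n_active, replace=False)
        masks[i, keep] = 1.0
    return masks

# Full overlap: all members share every feature.
dense = make_masks(n_masks=4, n_active=8, total=8)
# Enough capacity for near-disjoint masks: quasi-independent members.
sparse = make_masks(n_masks=4, n_active=8, total=64)
```

At inference time, one forward pass per mask yields a set of predictions whose spread serves as the uncertainty estimate.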

MeshSDF: Differentiable Iso-Surface Extraction

1 code implementation • NeurIPS 2020 • Edoardo Remelli, Artem Lukoianov, Stephan R. Richter, Benoît Guillard, Timur Bagautdinov, Pierre Baque, Pascal Fua

Unfortunately, these methods are often not suitable for applications that require an explicit mesh-based surface representation because converting an implicit field to such a representation relies on the Marching Cubes algorithm, which cannot be differentiated with respect to the underlying implicit field.
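The workaround can be checked on a toy field. Below is a sketch, using an analytic sphere SDF rather than a learned network, of the paper's core identity: perturbing the implicit field by ds moves a surface sample along the negative normal, dv = -n·ds, which lets gradients flow through Marching Cubes vertices:

```python
import numpy as np

def sphere_sdf(x, r):
    # Signed distance to a sphere of radius r (stand-in for a learned field).
    return np.linalg.norm(x) - r

def field_normal(x, r, eps=1e-5):
    # Finite-difference gradient of the field at x; on the zero level set
    # this is the outward surface normal.
    g = np.array([(sphere_sdf(x + eps * e, r) - sphere_sdf(x - eps * e, r)) / (2 * eps)
                  for e in np.eye(3)])
    return g / np.linalg.norm(g)

r = 1.0
v = np.array([r, 0.0, 0.0])   # a surface sample, as Marching Cubes would emit

n = field_normal(v, r)
# Sensitivity of the field value at v to the shape parameter r:
ds_dr = (sphere_sdf(v, r + 1e-5) - sphere_sdf(v, r - 1e-5)) / (2e-5)  # = -1
# Chain rule through the iso-surface: dv/dr = -n * ds/dr.
dv_dr = -n * ds_dr
```

Directly differentiating the vertex position (the vertex sits at (r, 0, 0), so dv/dr = (1, 0, 0)) gives the same answer, confirming the identity on this example.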

Modeling Facial Geometry Using Compositional VAEs

no code implementations • CVPR 2018 • Timur Bagautdinov, Chenglei Wu, Jason Saragih, Pascal Fua, Yaser Sheikh

We propose a method for learning non-linear face geometry representations using deep generative models.

Probability Occupancy Maps for Occluded Depth Images

no code implementations • CVPR 2015 • Timur Bagautdinov, Francois Fleuret, Pascal Fua

We propose a novel approach to computing the probabilities of presence of multiple and potentially occluding objects in a scene from a single depth map.
