Search Results for author: Fabian Prada

Found 9 papers, 0 papers with code

Diffusion Shape Prior for Wrinkle-Accurate Cloth Registration

no code implementations 10 Nov 2023 Jingfan Guo, Fabian Prada, Donglai Xiang, Javier Romero, Chenglei Wu, Hyun Soo Park, Takaaki Shiratori, Shunsuke Saito

Registering clothes from 4D scans with vertex-accurate correspondence is challenging, yet important for dynamic appearance modeling and physics parameter estimation from real-world data.

RelightableHands: Efficient Neural Relighting of Articulated Hand Models

no code implementations CVPR 2023 Shun Iwase, Shunsuke Saito, Tomas Simon, Stephen Lombardi, Timur Bagautdinov, Rohan Joshi, Fabian Prada, Takaaki Shiratori, Yaser Sheikh, Jason Saragih

To achieve generalization, we condition the student model with physics-inspired illumination features such as visibility, diffuse shading, and specular reflections computed on a coarse proxy geometry, maintaining a small computational overhead.
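
As a rough illustration only (not the paper's implementation), diffuse and specular illumination features of this kind could be computed per vertex of a coarse proxy mesh as sketched below; visibility is omitted, and all function and variable names are assumptions.

import numpy as np

def illumination_features(vertices, normals, light_pos, cam_pos, shininess=32.0):
    """Per-vertex diffuse (N.L) and Blinn-Phong specular (N.H)^s features."""
    to_light = light_pos - vertices
    to_light /= np.linalg.norm(to_light, axis=-1, keepdims=True)
    to_cam = cam_pos - vertices
    to_cam /= np.linalg.norm(to_cam, axis=-1, keepdims=True)
    half_vec = to_light + to_cam
    half_vec /= np.linalg.norm(half_vec, axis=-1, keepdims=True)
    diffuse = np.clip((normals * to_light).sum(-1), 0.0, None)
    specular = np.clip((normals * half_vec).sum(-1), 0.0, None) ** shininess
    return np.stack([diffuse, specular], axis=-1)  # (V, 2) features per vertex

# Example: a single upward-facing vertex lit from directly above.
feats = illumination_features(np.array([[0.0, 0.0, 0.0]]),
                              np.array([[0.0, 1.0, 0.0]]),
                              light_pos=np.array([0.0, 2.0, 0.0]),
                              cam_pos=np.array([0.0, 1.0, 2.0]))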

Drivable Volumetric Avatars using Texel-Aligned Features

no code implementations 20 Jul 2022 Edoardo Remelli, Timur Bagautdinov, Shunsuke Saito, Tomas Simon, Chenglei Wu, Shih-En Wei, Kaiwen Guo, Zhe Cao, Fabian Prada, Jason Saragih, Yaser Sheikh

To circumvent this, we propose a novel volumetric avatar representation by extending mixtures of volumetric primitives to articulated objects.
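
As a loose illustration of attaching volumetric primitives to an articulated skeleton (not the representation used in the paper), a minimal sketch might rigidly transform each primitive by its parent joint; all names and shapes here are hypothetical.

import numpy as np

def pose_primitives(local_centers, joint_ids, joint_rotations, joint_translations):
    """Transform each primitive center by its parent joint's rigid transform."""
    posed = np.empty_like(local_centers)
    for i, (center, j) in enumerate(zip(local_centers, joint_ids)):
        posed[i] = joint_rotations[j] @ center + joint_translations[j]
    return posed

# Two primitives attached to two joints; joint 1 is rotated 90 degrees about z.
centers = np.array([[0.1, 0.0, 0.0], [0.2, 0.0, 0.0]])
rot_z = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
rotations = np.stack([np.eye(3), rot_z])
translations = np.array([[0.0, 0.0, 0.0], [0.0, 0.5, 0.0]])
posed_centers = pose_primitives(centers, [0, 1], rotations, translations)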

Dressing Avatars: Deep Photorealistic Appearance for Physically Simulated Clothing

no code implementations 30 Jun 2022 Donglai Xiang, Timur Bagautdinov, Tuur Stuyck, Fabian Prada, Javier Romero, Weipeng Xu, Shunsuke Saito, Jingfan Guo, Breannan Smith, Takaaki Shiratori, Yaser Sheikh, Jessica Hodgins, Chenglei Wu

The key idea is to introduce a neural clothing appearance model that operates on top of explicit geometry: at training time we use high-fidelity tracking, whereas at animation time we rely on physically simulated geometry.
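
A minimal sketch of the general pattern of an appearance network driven by explicit geometry is given below, purely as an assumption-laden illustration: the same model can consume tracked geometry during training and physically simulated geometry at animation time. The architecture and feature choices are not from the paper.

import torch
import torch.nn as nn

# Tiny per-vertex appearance MLP: geometry-derived features in, RGB out.
appearance_mlp = nn.Sequential(
    nn.Linear(6, 128), nn.ReLU(),    # input: 3D position + 3D normal per vertex
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 3), nn.Sigmoid()  # output: RGB in [0, 1]
)

def vertex_colors(positions, normals):
    """Predict per-vertex colors from explicit geometry; inputs are (V, 3) tensors."""
    return appearance_mlp(torch.cat([positions, normals], dim=-1))

# At training time positions/normals would come from tracked scans;
# at animation time from physics-simulated cloth geometry.
colors = vertex_colors(torch.rand(100, 3), torch.rand(100, 3))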

Modeling Clothing as a Separate Layer for an Animatable Human Avatar

no code implementations 28 Jun 2021 Donglai Xiang, Fabian Prada, Timur Bagautdinov, Weipeng Xu, Yuan Dong, He Wen, Jessica Hodgins, Chenglei Wu

To address these difficulties, we propose a method to build an animatable clothed body avatar with an explicit representation of the clothing on the upper body from multi-view captured videos.

Inverse Rendering

Driving-Signal Aware Full-Body Avatars

no code implementations 21 May 2021 Timur Bagautdinov, Chenglei Wu, Tomas Simon, Fabian Prada, Takaaki Shiratori, Shih-En Wei, Weipeng Xu, Yaser Sheikh, Jason Saragih

The core intuition behind our method is that better drivability and generalization can be achieved by disentangling the driving signals from the remaining generative factors, which are not available during animation.

Imputation
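
To illustrate the general idea of separating driving signals from the remaining latent factors (a hedged sketch only, not the paper's architecture), a decoder could take the driving signal plus an optional latent code that is set to its prior mean at animation time; all dimensions and names below are assumptions.

import torch
import torch.nn as nn

class DrivenDecoder(nn.Module):
    def __init__(self, drive_dim=72, latent_dim=16, out_dim=3 * 1000):
        super().__init__()
        self.latent_dim = latent_dim
        self.net = nn.Sequential(
            nn.Linear(drive_dim + latent_dim, 256), nn.ReLU(),
            nn.Linear(256, out_dim),   # e.g. per-vertex offsets
        )

    def forward(self, driving_signal, latent=None):
        if latent is None:  # animation time: fall back to the prior mean (zeros)
            latent = torch.zeros(driving_signal.shape[0], self.latent_dim)
        return self.net(torch.cat([driving_signal, latent], dim=-1))

decoder = DrivenDecoder()
offsets = decoder(torch.rand(2, 72))   # driven by pose alone, no latent code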

MonoClothCap: Towards Temporally Coherent Clothing Capture from Monocular RGB Video

no code implementations 22 Sep 2020 Donglai Xiang, Fabian Prada, Chenglei Wu, Jessica Hodgins

A differentiable renderer is used to align our captured shapes to the input frames by minimizing the differences in silhouette, segmentation, and texture.

Surface Reconstruction
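
As an illustrative sketch of the kind of image-space objective such an alignment step minimizes (the differentiable renderer itself is abstracted away; weights and names are assumptions), one could combine silhouette, segmentation, and texture terms as follows.

import torch

def alignment_loss(rendered, target, w_sil=1.0, w_seg=1.0, w_tex=1.0):
    """Weighted sum of silhouette, segmentation, and texture discrepancies.

    rendered/target are dicts with 'silhouette' (H, W), 'segmentation' (C, H, W)
    soft label maps, and 'texture' (3, H, W) RGB images.
    """
    sil_loss = torch.mean((rendered["silhouette"] - target["silhouette"]) ** 2)
    seg_loss = torch.mean((rendered["segmentation"] - target["segmentation"]) ** 2)
    tex_loss = torch.mean(torch.abs(rendered["texture"] - target["texture"]))
    return w_sil * sil_loss + w_seg * seg_loss + w_tex * tex_loss

# Example with random maps; in practice the loss would be backpropagated
# through the differentiable renderer to the shape parameters.
rendered = {"silhouette": torch.rand(64, 64),
            "segmentation": torch.rand(4, 64, 64),
            "texture": torch.rand(3, 64, 64)}
target = {k: torch.rand_like(v) for k, v in rendered.items()}
loss = alignment_loss(rendered, target)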
