Search Results for author: Jessica Hodgins

Found 9 papers, 1 paper with code

A Local Appearance Model for Volumetric Capture of Diverse Hairstyles

no code implementations • 14 Dec 2023 • Ziyan Wang, Giljoo Nam, Aljaz Bozic, Chen Cao, Jason Saragih, Michael Zollhoefer, Jessica Hodgins

In this paper, we present a novel method for creating high-fidelity avatars with diverse hairstyles.

Dressing Avatars: Deep Photorealistic Appearance for Physically Simulated Clothing

no code implementations • 30 Jun 2022 • Donglai Xiang, Timur Bagautdinov, Tuur Stuyck, Fabian Prada, Javier Romero, Weipeng Xu, Shunsuke Saito, Jingfan Guo, Breannan Smith, Takaaki Shiratori, Yaser Sheikh, Jessica Hodgins, Chenglei Wu

The key idea is to introduce a neural clothing appearance model that operates on top of explicit geometry: at training time we use high-fidelity tracking, whereas at animation time we rely on physically simulated geometry.
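The snippet above describes an appearance network that runs on top of explicit clothing geometry, so the same model can consume tracked geometry at training time and physically simulated geometry at animation time. A minimal numpy sketch of that decoupling, with purely illustrative feature dimensions, weight shapes, and function names (none of these come from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical appearance network: maps per-vertex geometry features of an
# explicit clothing mesh to per-vertex RGB. Shapes are illustrative only.
FEAT, HIDDEN = 6, 16
W1 = rng.normal(size=(FEAT, HIDDEN))
W2 = rng.normal(size=(HIDDEN, 3))

def appearance(vertex_features):
    """Tiny MLP standing in for the neural clothing appearance model."""
    h = np.tanh(vertex_features @ W1)
    return 1.0 / (1.0 + np.exp(-(h @ W2)))  # RGB in [0, 1]

def tracked_geometry(n_vertices):
    """Stand-in for high-fidelity tracked geometry (training time)."""
    return rng.normal(size=(n_vertices, FEAT))

def simulated_geometry(n_vertices):
    """Stand-in for physically simulated geometry (animation time)."""
    return rng.normal(size=(n_vertices, FEAT))

# The same appearance model is applied to either geometry source.
rgb_train = appearance(tracked_geometry(100))
rgb_anim = appearance(simulated_geometry(100))
```

The point of the sketch is only the interface: because appearance is a function of explicit geometry, swapping the geometry provider does not require retraining the appearance model.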

HVH: Learning a Hybrid Neural Volumetric Representation for Dynamic Hair Performance Capture

no code implementations • CVPR 2022 • Ziyan Wang, Giljoo Nam, Tuur Stuyck, Stephen Lombardi, Michael Zollhoefer, Jessica Hodgins, Christoph Lassner

Capturing and rendering life-like hair is particularly challenging due to its fine geometric structure, complex physical interactions, and non-trivial visual appearance. Yet hair is a critical component for believable avatars.

Neural Rendering • Optical Flow Estimation

Modeling Clothing as a Separate Layer for an Animatable Human Avatar

no code implementations • 28 Jun 2021 • Donglai Xiang, Fabian Prada, Timur Bagautdinov, Weipeng Xu, Yuan Dong, He Wen, Jessica Hodgins, Chenglei Wu

To address these difficulties, we propose a method to build an animatable clothed body avatar with an explicit representation of the clothing on the upper body from multi-view captured videos.

Inverse Rendering

Learning Compositional Radiance Fields of Dynamic Human Heads

1 code implementation • CVPR 2021 • Ziyan Wang, Timur Bagautdinov, Stephen Lombardi, Tomas Simon, Jason Saragih, Jessica Hodgins, Michael Zollhöfer

In addition, we show that the learned dynamic radiance field can be used to synthesize novel unseen expressions based on a global animation code.

Neural Rendering • Synthetic Data Generation
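The snippet above mentions a dynamic radiance field that synthesizes novel expressions from a global animation code. A hedged numpy sketch of that conditioning pattern: a tiny stand-in "radiance field" maps a 3D point plus a global code to colour and density, so swapping the code re-renders the same points with a different expression. All dimensions, weights, and names are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative tiny MLP standing in for a radiance field conditioned on a
# global animation code. CODE_DIM and HIDDEN are arbitrary.
CODE_DIM, HIDDEN = 8, 16
W1 = rng.normal(size=(3 + CODE_DIM, HIDDEN))
W2 = rng.normal(size=(HIDDEN, 4))  # 3 RGB channels + 1 density

def radiance_field(points, anim_code):
    """points: (N, 3) sample locations; anim_code: (CODE_DIM,) expression code."""
    codes = np.broadcast_to(anim_code, (points.shape[0], CODE_DIM))
    h = np.tanh(np.concatenate([points, codes], axis=1) @ W1)
    out = h @ W2
    rgb = 1.0 / (1.0 + np.exp(-out[:, :3]))  # colours in [0, 1]
    density = np.maximum(out[:, 3], 0.0)     # non-negative density
    return rgb, density

# Swapping the animation code changes the appearance at the same points,
# which is the mechanism behind synthesizing unseen expressions.
pts = rng.normal(size=(5, 3))
rgb_a, density_a = radiance_field(pts, rng.normal(size=CODE_DIM))
rgb_b, density_b = radiance_field(pts, rng.normal(size=CODE_DIM))
```

The design point is that the code is global (one vector per expression, broadcast to every sample), so it acts as a compact control signal rather than per-point state.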

Batteries, camera, action! Learning a semantic control space for expressive robot cinematography

no code implementations • 19 Nov 2020 • Rogerio Bonatti, Arthur Bucker, Sebastian Scherer, Mustafa Mukadam, Jessica Hodgins

First, we generate a database of video clips with a diverse range of shots in a photo-realistic simulator, and use hundreds of participants in a crowd-sourcing framework to obtain scores for a set of semantic descriptors for each clip.

MonoClothCap: Towards Temporally Coherent Clothing Capture from Monocular RGB Video

no code implementations • 22 Sep 2020 • Donglai Xiang, Fabian Prada, Chenglei Wu, Jessica Hodgins

A differentiable renderer is used to align the captured shapes to the input frames by minimizing differences in silhouette, segmentation, and texture.

Surface Reconstruction
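The MonoClothCap snippet describes an alignment objective with three terms: silhouette, segmentation, and texture differences between the rendered result and each input frame. A minimal sketch of such a combined loss, assuming simple per-pixel comparisons and hypothetical weights (the paper's actual terms and renderer are not reproduced here):

```python
import numpy as np

def capture_loss(rendered, target, w_sil=1.0, w_seg=1.0, w_tex=1.0):
    """Hypothetical alignment objective combining silhouette, segmentation,
    and texture terms. Each argument is a dict with:
      'silhouette':   (H, W) float mask in [0, 1]
      'segmentation': (H, W) integer part labels
      'texture':      (H, W, 3) RGB image in [0, 1]
    """
    sil = np.mean((rendered["silhouette"] - target["silhouette"]) ** 2)
    seg = np.mean(rendered["segmentation"] != target["segmentation"])
    tex = np.mean((rendered["texture"] - target["texture"]) ** 2)
    return w_sil * sil + w_seg * seg + w_tex * tex

H, W = 8, 8
frame = {
    "silhouette": np.ones((H, W)),
    "segmentation": np.zeros((H, W), dtype=int),
    "texture": np.full((H, W, 3), 0.5),
}
# A shifted texture raises only the texture term of the loss.
off_frame = dict(frame, texture=np.full((H, W, 3), 0.6))
loss_same = capture_loss(frame, frame)
loss_off = capture_loss(off_frame, frame)
```

In an actual differentiable-rendering pipeline these terms would be backpropagated through the renderer to update the shape; here they only illustrate how the three cues combine into one scalar objective.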
