Search Results for author: Michael Zollhoefer

Found 29 papers, 10 papers with code

A Local Appearance Model for Volumetric Capture of Diverse Hairstyles

no code implementations14 Dec 2023 Ziyan Wang, Giljoo Nam, Aljaz Bozic, Chen Cao, Jason Saragih, Michael Zollhoefer, Jessica Hodgins

In this paper, we present a novel method for creating high-fidelity avatars with diverse hairstyles.

HDHumans: A Hybrid Approach for High-fidelity Digital Humans

no code implementations21 Oct 2022 Marc Habermann, Lingjie Liu, Weipeng Xu, Gerard Pons-Moll, Michael Zollhoefer, Christian Theobalt

Photo-real digital human avatars are of enormous importance in graphics, as they enable immersive communication across the globe, improve gaming and entertainment experiences, and can be particularly beneficial for AR and VR settings.

Novel View Synthesis, Surface Reconstruction (+1 more)

Neural Pixel Composition: 3D-4D View Synthesis from Multi-Views

no code implementations21 Jul 2022 Aayush Bansal, Michael Zollhoefer

We present Neural Pixel Composition (NPC), a novel approach for continuous 3D-4D view synthesis given only a discrete set of multi-view observations as input.

3D Reconstruction

KeypointNeRF: Generalizing Image-based Volumetric Avatars using Relative Spatial Encoding of Keypoints

1 code implementation10 May 2022 Marko Mihajlovic, Aayush Bansal, Michael Zollhoefer, Siyu Tang, Shunsuke Saito

In this work, we investigate common issues with existing spatial encodings and propose a simple yet highly effective approach to modeling high-fidelity volumetric humans from sparse views.

3D Face Reconstruction, 3D Human Reconstruction (+2 more)

Mutual Scene Synthesis for Mixed Reality Telepresence

no code implementations1 Apr 2022 Mohammad Keshavarzi, Michael Zollhoefer, Allen Y. Yang, Patrick Peluse, Luisa Caldas

Remote telepresence via next-generation mixed reality platforms can provide higher levels of immersion for computer-mediated communications, allowing participants to engage in a wide spectrum of activities that were previously not possible with 2D screen-based communication methods.

Mixed Reality

HVH: Learning a Hybrid Neural Volumetric Representation for Dynamic Hair Performance Capture

no code implementations CVPR 2022 Ziyan Wang, Giljoo Nam, Tuur Stuyck, Stephen Lombardi, Michael Zollhoefer, Jessica Hodgins, Christoph Lassner

Capturing and rendering life-like hair is particularly challenging due to its fine geometric structure, its complex physical interactions, and its non-trivial visual appearance. Yet hair is a critical component of believable avatars.

Neural Rendering, Optical Flow Estimation

Learning Neural Light Fields with Ray-Space Embedding Networks

1 code implementation2 Dec 2021 Benjamin Attal, Jia-Bin Huang, Michael Zollhoefer, Johannes Kopf, Changil Kim

Our method supports rendering with a single network evaluation per pixel for small baseline light field datasets and can also be applied to larger baselines with only a few evaluations per pixel.
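A neural light field of this kind can be pictured as a network that maps a ray parameterization (here the classic two-plane encoding) directly to a color, so each pixel costs a single forward pass rather than the many per-ray samples of a volumetric model. The sketch below is a minimal illustration with toy random weights, not the authors' architecture; all function and variable names are assumptions.

```python
import numpy as np

# Minimal sketch of a neural light field (illustrative, not the paper's code):
# a ray is parameterized by its intersections with two parallel planes
# (u, v, s, t), and a small MLP maps that 4-D ray coordinate directly to RGB.
# One network evaluation per pixel -- no per-ray volume sampling.

rng = np.random.default_rng(0)

def two_plane_param(origin, direction, z0=0.0, z1=1.0):
    """Intersect a ray with the planes z=z0 and z=z1; return (u, v, s, t)."""
    t0 = (z0 - origin[2]) / direction[2]
    t1 = (z1 - origin[2]) / direction[2]
    p0 = origin + t0 * direction
    p1 = origin + t1 * direction
    return np.array([p0[0], p0[1], p1[0], p1[1]])

# Toy 2-layer MLP with random weights (stand-in for a trained network).
W1 = rng.normal(size=(4, 64)); b1 = np.zeros(64)
W2 = rng.normal(size=(64, 3)); b2 = np.zeros(3)

def light_field(ray4):
    h = np.maximum(ray4 @ W1 + b1, 0.0)        # ReLU hidden layer
    return 1 / (1 + np.exp(-(h @ W2 + b2)))    # sigmoid -> RGB in [0, 1]

ray = two_plane_param(np.array([0.0, 0.0, -1.0]), np.array([0.1, 0.0, 1.0]))
rgb = light_field(ray)                          # single evaluation per pixel
```

In the paper's setting, the ray-space embedding replaces this naive 4-D parameterization with a learned one; larger-baseline scenes then need only a few evaluations per pixel instead of one.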

A Deeper Look into DeepCap

no code implementations20 Nov 2021 Marc Habermann, Weipeng Xu, Michael Zollhoefer, Gerard Pons-Moll, Christian Theobalt

Human performance capture is a highly important computer vision problem with many applications in movie production and virtual/augmented reality.

Pose Estimation

NRST: Non-rigid Surface Tracking from Monocular Video

no code implementations6 Jul 2021 Marc Habermann, Weipeng Xu, Helge Rhodin, Michael Zollhoefer, Gerard Pons-Moll, Christian Theobalt

Our texture term exploits the orientation information in the micro-structures of the objects, e.g., the yarn patterns of fabrics.

Real-time Deep Dynamic Characters

no code implementations4 May 2021 Marc Habermann, Lingjie Liu, Weipeng Xu, Michael Zollhoefer, Gerard Pons-Moll, Christian Theobalt

We propose a deep videorealistic 3D human character model displaying highly realistic shape, motion, and dynamic appearance learned in a new weakly supervised way from multi-view imagery.

MeshTalk: 3D Face Animation from Speech using Cross-Modality Disentanglement

2 code implementations ICCV 2021 Alexander Richard, Michael Zollhoefer, Yandong Wen, Fernando de la Torre, Yaser Sheikh

To improve upon existing models, we propose a generic audio-driven facial animation approach that achieves highly realistic motion synthesis results for the entire face.

3D Face Animation, Disentanglement (+1 more)

Neural 3D Video Synthesis from Multi-view Video

1 code implementation CVPR 2022 Tianye Li, Mira Slavcheva, Michael Zollhoefer, Simon Green, Christoph Lassner, Changil Kim, Tanner Schmidt, Steven Lovegrove, Michael Goesele, Richard Newcombe, Zhaoyang Lv

We propose a novel approach for 3D video synthesis that is able to represent multi-view video recordings of a dynamic real-world scene in a compact, yet expressive representation that enables high-quality view synthesis and motion interpolation.
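One way to read "compact, yet expressive" is a single radiance-field network shared across time and conditioned on a small learned latent code per frame; blending codes between frames then yields motion interpolation. The sketch below is a hypothetical illustration of that idea with toy random weights, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sketch of a dynamic radiance field in the spirit of the paper:
# one shared MLP queried at (position, view direction), conditioned on a
# compact learned latent code per video frame.

NUM_FRAMES, LATENT_DIM = 30, 8
frame_codes = rng.normal(size=(NUM_FRAMES, LATENT_DIM))  # learned per-frame latents

IN_DIM = 3 + 3 + LATENT_DIM
W1 = rng.normal(size=(IN_DIM, 64)); b1 = np.zeros(64)
W2 = rng.normal(size=(64, 4)); b2 = np.zeros(4)          # RGB + density

def radiance(x, d, z):
    h = np.maximum(np.concatenate([x, d, z]) @ W1 + b1, 0.0)
    out = h @ W2 + b2
    rgb = 1 / (1 + np.exp(-out[:3]))      # color in [0, 1]
    sigma = np.logaddexp(0.0, out[3])     # non-negative density (softplus)
    return rgb, sigma

# Query the scene at a time between frames 4 and 5 by blending their codes.
z = 0.5 * (frame_codes[4] + frame_codes[5])
rgb, sigma = radiance(np.zeros(3), np.array([0.0, 0.0, 1.0]), z)
```

The per-frame codes are what keep the representation compact: the MLP weights are stored once, and each additional frame costs only a short latent vector.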

Motion Interpolation

Mixture of Volumetric Primitives for Efficient Neural Rendering

1 code implementation2 Mar 2021 Stephen Lombardi, Tomas Simon, Gabriel Schwartz, Michael Zollhoefer, Yaser Sheikh, Jason Saragih

Real-time rendering and animation of humans is a core function in games, movies, and telepresence applications.

Neural Rendering

PVA: Pixel-aligned Volumetric Avatars

no code implementations7 Jan 2021 Amit Raj, Michael Zollhoefer, Tomas Simon, Jason Saragih, Shunsuke Saito, James Hays, Stephen Lombardi

Volumetric models typically employ a global code to represent facial expressions, such that they can be driven by a small set of animation parameters.
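The pixel-aligned alternative suggested by the title, in contrast to a single global code, conditions the volumetric decoder on features sampled per 3-D point from the input views: project the point into a source image and bilinearly interpolate a convolutional feature map there. The sketch below illustrates that sampling step under assumed names and toy data; it is not the authors' code.

```python
import numpy as np

# Illustrative sketch of pixel-aligned feature sampling: project a 3-D point
# into a source view with a pinhole camera, then bilinearly sample an HxWxC
# CNN feature map at the resulting continuous pixel location.

def project(point, K):
    """Pinhole projection of a 3-D point with intrinsics K -> (u, v)."""
    uvw = K @ point
    return uvw[:2] / uvw[2]

def bilinear_sample(feat, uv):
    """Bilinearly interpolate an HxWxC feature map at continuous (u, v)."""
    u, v = uv
    u0, v0 = int(np.floor(u)), int(np.floor(v))
    du, dv = u - u0, v - v0
    return (feat[v0, u0]         * (1 - du) * (1 - dv)
          + feat[v0, u0 + 1]     * du       * (1 - dv)
          + feat[v0 + 1, u0]     * (1 - du) * dv
          + feat[v0 + 1, u0 + 1] * du       * dv)

rng = np.random.default_rng(2)
feature_map = rng.normal(size=(64, 64, 16))              # CNN features of one view
K = np.array([[50.0, 0, 32], [0, 50.0, 32], [0, 0, 1]])  # toy intrinsics

uv = project(np.array([0.1, -0.2, 1.0]), K)              # point in camera space
pix_feat = bilinear_sample(feature_map, uv)              # per-point conditioning
```

Because the feature depends on where the point lands in the image, the decoder sees local evidence from the input views rather than one expression code shared by the whole volume.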

DeepCap: Monocular Human Performance Capture Using Weak Supervision

no code implementations CVPR 2020 Marc Habermann, Weipeng Xu, Michael Zollhoefer, Gerard Pons-Moll, Christian Theobalt

Human performance capture is a highly important computer vision problem with many applications in movie production and virtual/augmented reality.

Pose Estimation

Neural Human Video Rendering by Learning Dynamic Textures and Rendering-to-Video Translation

no code implementations14 Jan 2020 Lingjie Liu, Weipeng Xu, Marc Habermann, Michael Zollhoefer, Florian Bernard, Hyeongwoo Kim, Wenping Wang, Christian Theobalt

In this paper, we propose a novel human video synthesis method that approaches these limiting factors by explicitly disentangling the learning of time-coherent fine-scale details from the embedding of the human in 2D screen space.

Image-to-Image Translation, Novel View Synthesis (+1 more)

Real-Time Global Illumination Decomposition of Videos

no code implementations6 Aug 2019 Abhimitra Meka, Mohammad Shafiei, Michael Zollhoefer, Christian Richardt, Christian Theobalt

We propose the first approach for the decomposition of a monocular color video into direct and indirect illumination components in real time.

Neural Rendering and Reenactment of Human Actor Videos

no code implementations11 Sep 2018 Lingjie Liu, Weipeng Xu, Michael Zollhoefer, Hyeongwoo Kim, Florian Bernard, Marc Habermann, Wenping Wang, Christian Theobalt

In contrast to conventional human character rendering, we do not require the availability of a production-quality photo-realistic 3D model of the human, but instead rely on a video sequence in conjunction with a (medium-quality) controllable 3D template model of the person.

Generative Adversarial Network, Image Generation (+1 more)

Mo2Cap2: Real-time Mobile 3D Motion Capture with a Cap-mounted Fisheye Camera

no code implementations15 Mar 2018 Weipeng Xu, Avishek Chatterjee, Michael Zollhoefer, Helge Rhodin, Pascal Fua, Hans-Peter Seidel, Christian Theobalt

We tackle these challenges based on a novel lightweight setup that converts a standard baseball cap to a device for high-quality pose estimation based on a single cap-mounted fisheye camera.

Ranked #6 on Egocentric Pose Estimation on GlobalEgoMocap Test Dataset (using extra training data)

3D Pose Estimation, Egocentric Pose Estimation

LIME: Live Intrinsic Material Estimation

no code implementations CVPR 2018 Abhimitra Meka, Maxim Maximov, Michael Zollhoefer, Avishek Chatterjee, Hans-Peter Seidel, Christian Richardt, Christian Theobalt

We present the first end-to-end approach for real-time material estimation of general object shapes with uniform material that requires only a single color image as input.

Foreground Segmentation, Image-to-Image Translation (+3 more)
