Search Results for author: Michael Zollhöfer

Found 48 papers, 17 papers with code

VolumeDeform: Real-time Volumetric Non-rigid Reconstruction

no code implementations 27 Mar 2016 Matthias Innmann, Michael Zollhöfer, Matthias Nießner, Christian Theobalt, Marc Stamminger

We cast finding the optimal deformation of space as a non-linear regularized variational optimization problem by enforcing local smoothness and proximity to the input constraints.
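
For intuition, the energy described above combines a data term that pulls the deformed geometry toward the input constraints with a regularizer that favors locally smooth deformations. Below is a minimal sketch of such a regularized least-squares formulation; the point set, the toy neighborhood graph, and the weight `lambda_smooth` are illustrative choices of ours, not the paper's actual solver.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy regularized variational energy: a data term (proximity to input
# constraints) plus a local-smoothness term over a neighborhood graph.
points = np.random.rand(50, 3)                   # source points
targets = points + 0.05                          # hypothetical input constraints
neighbors = [(i, i + 1) for i in range(49)]      # toy smoothness graph
lambda_smooth = 0.1                              # illustrative trade-off weight

def residuals(flat_disp):
    d = flat_disp.reshape(-1, 3)                 # per-point displacements
    data = (points + d - targets).ravel()        # proximity to constraints
    smooth = np.concatenate([d[i] - d[j] for i, j in neighbors])
    return np.concatenate([data, np.sqrt(lambda_smooth) * smooth])

sol = least_squares(residuals, np.zeros(points.size))   # Gauss-Newton-style solve
```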

BundleFusion: Real-time Globally Consistent 3D Reconstruction using On-the-fly Surface Re-integration

1 code implementation 5 Apr 2016 Angela Dai, Matthias Nießner, Michael Zollhöfer, Shahram Izadi, Christian Theobalt

Our approach estimates globally optimized (i.e., bundle adjusted) poses in real-time, supports robust tracking with recovery from gross tracking failures (i.e., relocalization), and re-estimates the 3D model in real-time to ensure global consistency; all within a single framework.
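
What makes on-the-fly surface re-integration tractable is that TSDF fusion is a running weighted average, so a frame's contribution can be subtracted with the inverse update and re-added under a refined pose. A simplified per-voxel sketch of that symmetric update (ours, not the paper's GPU implementation):

```python
import numpy as np

# TSDF integration as a running weighted average over voxel arrays D (signed
# distance) and W (weight); de-integration is the exact inverse update.
def integrate(D, W, d_frame, w_frame):
    W_new = W + w_frame
    D_new = (D * W + d_frame * w_frame) / np.maximum(W_new, 1e-8)
    return D_new, W_new

def deintegrate(D, W, d_frame, w_frame):
    W_new = np.maximum(W - w_frame, 0.0)
    D_new = np.where(W_new > 0,
                     (D * W - d_frame * w_frame) / np.maximum(W_new, 1e-8),
                     0.0)
    return D_new, W_new
```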

Tasks: 3D Reconstruction, Mixed Reality (+1 more)

Opt: A Domain Specific Language for Non-linear Least Squares Optimization in Graphics and Imaging

no code implementations 22 Apr 2016 Zachary DeVito, Michael Mara, Michael Zollhöfer, Gilbert Bernstein, Jonathan Ragan-Kelley, Christian Theobalt, Pat Hanrahan, Matthew Fisher, Matthias Nießner

Many graphics and vision problems can be expressed as non-linear least squares optimizations of objective functions over visual data, such as images and meshes.
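
As a concrete instance of this problem class, consider fitting a two-parameter exponential model by minimizing a sum of squared residuals with Gauss-Newton. The sketch below is plain NumPy, not the Opt DSL (which compiles such energy descriptions into optimized GPU solvers); the toy energy and finite-difference Jacobian are illustrative.

```python
import numpy as np

t = np.linspace(0, 1, 20)
y = 2.0 * np.exp(-3.0 * t)                       # synthetic observations

def r(x):                                        # residual vector: model - data
    a, b = x
    return a * np.exp(-b * t) - y

def J(x, eps=1e-6):                              # finite-difference Jacobian
    return np.stack([(r(x + eps * e) - r(x)) / eps for e in np.eye(2)], axis=1)

x = np.array([1.0, 1.0])
for _ in range(10):                              # Gauss-Newton: J^T J dx = -J^T r
    Jx, rx = J(x), r(x)
    x += np.linalg.solve(Jx.T @ Jx, -Jx.T @ rx)
```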

FaceVR: Real-Time Facial Reenactment and Eye Gaze Control in Virtual Reality

no code implementations 11 Oct 2016 Justus Thies, Michael Zollhöfer, Marc Stamminger, Christian Theobalt, Matthias Nießner

Based on reenactment of a prerecorded stereo video of the person without the HMD, FaceVR incorporates photo-realistic re-rendering in real time, thus allowing artificial modifications of face and eye appearances.

Real-time Joint Tracking of a Hand Manipulating an Object from RGB-D Input

no code implementations 16 Oct 2016 Srinath Sridhar, Franziska Mueller, Michael Zollhöfer, Dan Casas, Antti Oulasvirta, Christian Theobalt

However, due to difficult occlusions, fast motions, and uniform hand appearance, jointly tracking hand and object pose is more challenging than tracking either of the two separately.

Tasks: Object, Object Tracking

Real-time Halfway Domain Reconstruction of Motion and Geometry

no code implementations 23 Oct 2016 Lucas Thies, Michael Zollhöfer, Christian Richardt, Christian Theobalt, Günther Greiner

Our extensive experiments and evaluations show that our approach produces high-quality dense reconstructions of 3D geometry and scene flow at real-time frame rates, and compares favorably to the state of the art.

MoFA: Model-based Deep Convolutional Face Autoencoder for Unsupervised Monocular Reconstruction

no code implementations ICCV 2017 Ayush Tewari, Michael Zollhöfer, Hyeongwoo Kim, Pablo Garrido, Florian Bernard, Patrick Pérez, Christian Theobalt

In this work we propose a novel model-based deep convolutional autoencoder that addresses the highly challenging problem of reconstructing a 3D human face from a single in-the-wild color image.
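
Schematically, a model-based autoencoder pairs a learned CNN encoder that regresses semantic face-model parameters with a fixed differentiable decoder that maps them back to geometry, so a reconstruction loss can train the encoder without labels. The sketch below substitutes a toy linear model and a landmark-style loss for the paper's full 3DMM and differentiable renderer; all dimensions and names are illustrative.

```python
import torch
import torch.nn as nn

N_VERTS, N_ID, N_EXP = 100, 8, 4
mean = torch.randn(N_VERTS * 3)                       # stand-in model mean
basis = torch.randn(N_VERTS * 3, N_ID + N_EXP)        # stand-in model basis

encoder = nn.Sequential(                              # image -> parameters
    nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, N_ID + N_EXP))

def decode(params):                                   # parameters -> vertices
    return (mean + basis @ params.T.squeeze()).view(N_VERTS, 3)

img = torch.randn(1, 3, 64, 64)                       # dummy input image
target = torch.randn(N_VERTS, 2)                      # hypothetical 2D targets
verts = decode(encoder(img))
loss = ((verts[:, :2] - target) ** 2).mean()          # self-supervised-style loss
loss.backward()                                       # gradients reach the encoder
```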

Tasks: Face Reconstruction, Monocular Reconstruction

InverseFaceNet: Deep Monocular Inverse Face Rendering

no code implementations CVPR 2018 Hyeongwoo Kim, Michael Zollhöfer, Ayush Tewari, Justus Thies, Christian Richardt, Christian Theobalt

In contrast, we propose to recover high-quality facial pose, shape, expression, reflectance and illumination using a deep neural network that is trained using a large, synthetically created training corpus.

Tasks: Face Reconstruction, Inverse Rendering

MonoPerfCap: Human Performance Capture from Monocular Video

no code implementations 7 Aug 2017 Weipeng Xu, Avishek Chatterjee, Michael Zollhöfer, Helge Rhodin, Dushyant Mehta, Hans-Peter Seidel, Christian Theobalt

Reconstruction from monocular video alone is drastically more challenging, since strong occlusions and the inherent depth ambiguity lead to a highly ill-posed reconstruction problem.

Tasks: Monocular Reconstruction, Pose Estimation (+1 more)

Self-supervised Multi-level Face Model Learning for Monocular Reconstruction at over 250 Hz

no code implementations CVPR 2018 Ayush Tewari, Michael Zollhöfer, Pablo Garrido, Florian Bernard, Hyeongwoo Kim, Patrick Pérez, Christian Theobalt

To alleviate this problem, we present the first approach that jointly learns 1) a regressor for face shape, expression, reflectance and illumination on the basis of 2) a concurrently learned parametric face model.

Tasks: Face Model, Monocular Reconstruction

HeadOn: Real-time Reenactment of Human Portrait Videos

no code implementations 29 May 2018 Justus Thies, Michael Zollhöfer, Christian Theobalt, Marc Stamminger, Matthias Nießner

We propose HeadOn, the first real-time source-to-target reenactment approach for complete human portrait videos that enables transfer of torso and head motion, face expression, and eye gaze.

Deep Video Portraits

no code implementations 29 May 2018 Hyeongwoo Kim, Pablo Garrido, Ayush Tewari, Weipeng Xu, Justus Thies, Matthias Nießner, Patrick Pérez, Christian Richardt, Michael Zollhöfer, Christian Theobalt

In order to enable source-to-target video re-animation, we render a synthetic target video with the reconstructed head animation parameters from a source video and feed it into the trained network, thus taking full control of the target.

Tasks: Face Model

IGNOR: Image-guided Neural Object Rendering

no code implementations 26 Nov 2018 Justus Thies, Michael Zollhöfer, Christian Theobalt, Marc Stamminger, Matthias Nießner

Based on this 3D proxy, the appearance of a captured view can be warped into a new target view as in classical image-based rendering.

Tasks: Image Generation, Novel View Synthesis (+1 more)

DeepVoxels: Learning Persistent 3D Feature Embeddings

1 code implementation CVPR 2019 Vincent Sitzmann, Justus Thies, Felix Heide, Matthias Nießner, Gordon Wetzstein, Michael Zollhöfer

In this work, we address the lack of 3D understanding of generative neural networks by introducing a persistent 3D feature embedding for view synthesis.

Tasks: 3D Reconstruction, Novel View Synthesis

FML: Face Model Learning from Videos

no code implementations CVPR 2019 Ayush Tewari, Florian Bernard, Pablo Garrido, Gaurav Bharaj, Mohamed Elgharib, Hans-Peter Seidel, Patrick Pérez, Michael Zollhöfer, Christian Theobalt

In contrast, we propose multi-frame video-based self-supervised training of a deep network that (i) learns a face identity model both in shape and appearance while (ii) jointly learning to reconstruct 3D faces.

Tasks: 3D Reconstruction, Face Model

Commodity RGB-D Sensors: Data Acquisition

no code implementations 18 Feb 2019 Michael Zollhöfer

Over the past ten years we have seen a democratization of range sensing technology.

Tasks: Dynamic Reconstruction

Deferred Neural Rendering: Image Synthesis using Neural Textures

4 code implementations 28 Apr 2019 Justus Thies, Michael Zollhöfer, Matthias Nießner

Similar to traditional textures, neural textures are stored as maps on top of 3D mesh proxies; however, the high-dimensional feature maps contain significantly more information, which can be interpreted by our new deferred neural rendering pipeline.
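
In sketch form, a neural texture is a learnable feature map that is sampled at the UV coordinates produced by rasterizing the mesh proxy, after which a small network interprets the sampled features as color. The snippet below shows this assumed pipeline shape; the channel count, decoder, and random stand-in UVs are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

C, H, W = 16, 256, 256
neural_texture = nn.Parameter(torch.randn(1, C, H, W))   # learnable feature map

decoder = nn.Sequential(                                  # deferred neural renderer
    nn.Conv2d(C, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1))                       # features -> RGB

uv = torch.rand(1, 128, 128, 2) * 2 - 1    # per-pixel UVs in [-1, 1] (from rasterization)
features = F.grid_sample(neural_texture, uv, align_corners=True)
rgb = decoder(features)                    # (1, 3, 128, 128) rendered image
```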

Tasks: Image Generation, Neural Rendering (+1 more)

Text-based Editing of Talking-head Video

1 code implementation 4 Jun 2019 Ohad Fried, Ayush Tewari, Michael Zollhöfer, Adam Finkelstein, Eli Shechtman, Dan B. Goldman, Kyle Genova, Zeyu Jin, Christian Theobalt, Maneesh Agrawala

To edit a video, the user only has to edit the transcript; an optimization strategy then chooses segments of the input corpus as base material.

Tasks: Face Model, Sentence (+3 more)

Neural Style-Preserving Visual Dubbing

no code implementations 5 Sep 2019 Hyeongwoo Kim, Mohamed Elgharib, Michael Zollhöfer, Hans-Peter Seidel, Thabo Beeler, Christian Richardt, Christian Theobalt

We present a style-preserving visual dubbing approach from single video inputs, which maintains the signature style of target actors when modifying facial expressions, including mouth motions, to match foreign languages.

Tasks: Generative Adversarial Network

DeepDeform: Learning Non-rigid RGB-D Reconstruction with Semi-supervised Data

1 code implementation 9 Dec 2019 Aljaž Božič, Michael Zollhöfer, Christian Theobalt, Matthias Nießner

Applying data-driven approaches to non-rigid 3D reconstruction has been difficult, which we believe can be attributed to the lack of a large-scale training corpus.

Tasks: 3D Reconstruction, RGB-D Reconstruction

Image-guided Neural Object Rendering

no code implementations ICLR 2020 Justus Thies, Michael Zollhöfer, Christian Theobalt, Marc Stamminger, Matthias Nießner

Based on this 3D proxy, the appearance of a captured view can be warped into a new target view as in classical image-based rendering.

Tasks: Image Generation, Object

StyleRig: Rigging StyleGAN for 3D Control over Portrait Images

no code implementations CVPR 2020 Ayush Tewari, Mohamed Elgharib, Gaurav Bharaj, Florian Bernard, Hans-Peter Seidel, Patrick Pérez, Michael Zollhöfer, Christian Theobalt

StyleGAN generates photorealistic portrait images of faces with eyes, teeth, hair and context (neck, shoulders, background), but lacks a rig-like control over semantic face parameters that are interpretable in 3D, such as face pose, expressions, and scene illumination.

State of the Art on Neural Rendering

no code implementations 8 Apr 2020 Ayush Tewari, Ohad Fried, Justus Thies, Vincent Sitzmann, Stephen Lombardi, Kalyan Sunkavalli, Ricardo Martin-Brualla, Tomas Simon, Jason Saragih, Matthias Nießner, Rohit Pandey, Sean Fanello, Gordon Wetzstein, Jun-Yan Zhu, Christian Theobalt, Maneesh Agrawala, Eli Shechtman, Dan B. Goldman, Michael Zollhöfer

Neural rendering is a new and rapidly emerging field that combines generative machine learning techniques with physical knowledge from computer graphics, e.g., by integrating differentiable rendering into network training.
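
As a toy example of that integration: render an image with a differentiable forward model (here a single 2D Gaussian blob) and let gradients of a photometric loss flow back to the scene parameters. This is a minimal illustration of ours, not a method from the report.

```python
import torch

ys, xs = torch.meshgrid(torch.arange(32.0), torch.arange(32.0), indexing="ij")
target = torch.exp(-((xs - 20) ** 2 + (ys - 12) ** 2) / 64)  # "observed" image

pos = torch.tensor([8.0, 8.0], requires_grad=True)           # learnable blob position
opt = torch.optim.Adam([pos], lr=0.5)
for _ in range(200):
    render = torch.exp(-((xs - pos[0]) ** 2 + (ys - pos[1]) ** 2) / 64)
    loss = ((render - target) ** 2).mean()   # gradients flow through the renderer
    opt.zero_grad(); loss.backward(); opt.step()
```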

Tasks: BIG-bench Machine Learning, Image Generation (+2 more)

Neural Non-Rigid Tracking

1 code implementation NeurIPS 2020 Aljaž Božič, Pablo Palafox, Michael Zollhöfer, Angela Dai, Justus Thies, Matthias Nießner

We introduce a novel, end-to-end learnable, differentiable non-rigid tracker that enables state-of-the-art non-rigid reconstruction by a learned robust optimization.

Face2Face: Real-time Face Capture and Reenactment of RGB Videos

2 code implementations CVPR 2016 Justus Thies, Michael Zollhöfer, Marc Stamminger, Christian Theobalt, Matthias Nießner

Our goal is to animate the facial expressions of the target video by a source actor and re-render the manipulated output video in a photo-realistic fashion.

PIE: Portrait Image Embedding for Semantic Control

no code implementations 20 Sep 2020 Ayush Tewari, Mohamed Elgharib, Mallikarjun B R., Florian Bernard, Hans-Peter Seidel, Patrick Pérez, Michael Zollhöfer, Christian Theobalt

We present the first approach for embedding real portrait images in the latent space of StyleGAN, which allows for intuitive editing of the head pose, facial expression, and scene illumination in the image.

Tasks: Face Model

Learning Compositional Radiance Fields of Dynamic Human Heads

1 code implementation CVPR 2021 Ziyan Wang, Timur Bagautdinov, Stephen Lombardi, Tomas Simon, Jason Saragih, Jessica Hodgins, Michael Zollhöfer

In addition, we show that the learned dynamic radiance field can be used to synthesize novel unseen expressions based on a global animation code.

Tasks: Neural Rendering, Synthetic Data Generation

Non-Rigid Neural Radiance Fields: Reconstruction and Novel View Synthesis of a Dynamic Scene From Monocular Video

2 code implementations ICCV 2021 Edgar Tretschk, Ayush Tewari, Vladislav Golyanik, Michael Zollhöfer, Christoph Lassner, Christian Theobalt

We show that a single handheld consumer-grade camera is sufficient to synthesize sophisticated renderings of a dynamic scene from novel virtual camera views, e.g., a 'bullet-time' video effect.

Tasks: Novel View Synthesis, Video Editing

Learning Neural Light Fields With Ray-Space Embedding

no code implementations CVPR 2022 Benjamin Attal, Jia-Bin Huang, Michael Zollhöfer, Johannes Kopf, Changil Kim

Our method supports rendering with a single network evaluation per pixel for small baseline light fields and with only a few evaluations per pixel for light fields with larger baselines.
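
The underlying idea is that a light field network maps a ray directly to a color, so a single forward pass per pixel replaces the many samples of volumetric rendering. The sketch below uses a Plücker-style ray parameterization as a generic stand-in; the paper's learned ray-space embedding is not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

mlp = nn.Sequential(nn.Linear(6, 128), nn.ReLU(),
                    nn.Linear(128, 128), nn.ReLU(),
                    nn.Linear(128, 3))

def render(origins, dirs):                       # (N, 3) origins, unit directions
    moment = torch.cross(origins, dirs, dim=-1)  # Pluecker moment o x d
    return torch.sigmoid(mlp(torch.cat([dirs, moment], dim=-1)))  # (N, 3) RGB

colors = render(torch.zeros(4096, 3), F.normalize(torch.randn(4096, 3), dim=-1))
```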

Self-supervised Neural Articulated Shape and Appearance Models

no code implementations CVPR 2022 Fangyin Wei, Rohan Chabra, Lingni Ma, Christoph Lassner, Michael Zollhöfer, Szymon Rusinkiewicz, Chris Sweeney, Richard Newcombe, Mira Slavcheva

In addition, our representation enables a large variety of applications, such as few-shot reconstruction, the generation of novel articulations, and novel view synthesis.

Tasks: Novel View Synthesis

AdaNeRF: Adaptive Sampling for Real-time Rendering of Neural Radiance Fields

1 code implementation 21 Jul 2022 Andreas Kurz, Thomas Neff, Zhaoyang Lv, Michael Zollhöfer, Markus Steinberger

However, rendering images with this new paradigm is slow because an accurate quadrature of the volume rendering equation requires a large number of samples for each ray.
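
The quadrature in question is the standard emission-absorption compositing sum, C = sum_i T_i * alpha_i * c_i with alpha_i = 1 - exp(-sigma_i * delta_i), whose per-ray cost grows linearly with the sample count. A NumPy sketch with random stand-in densities and colors:

```python
import numpy as np

def composite(sigma, color, t):            # sigma: (n,), color: (n, 3), t: (n+1,)
    delta = np.diff(t)                     # spacing between samples
    alpha = 1.0 - np.exp(-sigma * delta)   # per-segment opacity
    T = np.cumprod(np.concatenate([[1.0], 1.0 - alpha]))[:-1]  # transmittance
    return (T * alpha)[:, None] * color    # per-sample color contributions

n = 192                                    # dense sampling: cost scales with n
contrib = composite(np.random.rand(n), np.random.rand(n, 3), np.linspace(0, 2, n + 1))
rgb = contrib.sum(axis=0)                  # final pixel color
```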

Tasks: Novel View Synthesis

Neural Pixel Composition for 3D-4D View Synthesis From Multi-Views

no code implementations CVPR 2023 Aayush Bansal, Michael Zollhöfer

We present Neural Pixel Composition (NPC), a novel approach for continuous 3D-4D view synthesis given only a discrete set of multi-view observations as input.

Diffusion Posterior Illumination for Ambiguity-aware Inverse Rendering

1 code implementation 30 Sep 2023 Linjie Lyu, Ayush Tewari, Marc Habermann, Shunsuke Saito, Michael Zollhöfer, Thomas Leimkühler, Christian Theobalt

We further conduct an extensive comparative study of different priors on illumination used in previous work on inverse rendering.

Tasks: Denoising, Inverse Rendering

VR-NeRF: High-Fidelity Virtualized Walkable Spaces

no code implementations 5 Nov 2023 Linning Xu, Vasu Agrawal, William Laney, Tony Garcia, Aayush Bansal, Changil Kim, Samuel Rota Bulò, Lorenzo Porzi, Peter Kontschieder, Aljaž Božič, Dahua Lin, Michael Zollhöfer, Christian Richardt

We present an end-to-end system for the high-fidelity capture, model reconstruction, and real-time rendering of walkable spaces in virtual reality using neural radiance fields.

Drivable 3D Gaussian Avatars

no code implementations 14 Nov 2023 Wojciech Zielonka, Timur Bagautdinov, Shunsuke Saito, Michael Zollhöfer, Justus Thies, Javier Romero

We present Drivable 3D Gaussian Avatars (D3GA), the first 3D controllable model for human bodies rendered with Gaussian splats.

PyNeRF: Pyramidal Neural Radiance Fields

1 code implementation NeurIPS 2023 Haithem Turki, Michael Zollhöfer, Christian Richardt, Deva Ramanan

Compared to Mip-NeRF, we reduce error rates by 20% while training over 60x faster.

SpecNeRF: Gaussian Directional Encoding for Specular Reflections

no code implementations 20 Dec 2023 Li Ma, Vasu Agrawal, Haithem Turki, Changil Kim, Chen Gao, Pedro Sander, Michael Zollhöfer, Christian Richardt

We show that our Gaussian directional encoding and geometry prior significantly improve the modeling of challenging specular reflections in neural radiance fields, which helps decompose appearance into more physically meaningful components.

URHand: Universal Relightable Hands

no code implementations 10 Jan 2024 Zhaoxi Chen, Gyeongsik Moon, Kaiwen Guo, Chen Cao, Stanislav Pidhorskyi, Tomas Simon, Rohan Joshi, Yuan Dong, Yichen Xu, Bernardo Pires, He Wen, Lucas Evans, Bo Peng, Julia Buffalini, Autumn Trimble, Kevyn McPhail, Melissa Schoeller, Shoou-I Yu, Javier Romero, Michael Zollhöfer, Yaser Sheikh, Ziwei Liu, Shunsuke Saito

To simplify the personalization process while retaining photorealism, we build a powerful universal relightable prior based on neural relighting from multi-view images of hands captured in a light stage with hundreds of identities.

IRIS: Inverse Rendering of Indoor Scenes from Low Dynamic Range Images

no code implementations 23 Jan 2024 Zhi-Hao Lin, Jia-Bin Huang, Zhengqin Li, Zhao Dong, Christian Richardt, Tuotuo Li, Michael Zollhöfer, Johannes Kopf, Shenlong Wang, Changil Kim

While numerous 3D reconstruction and novel-view synthesis methods allow for photorealistic rendering of a scene from multi-view images easily captured with consumer cameras, they bake illumination into their representations and fall short of supporting advanced applications like material editing, relighting, and virtual object insertion.

Tasks: 3D Reconstruction, Inverse Rendering (+1 more)

ViewDiff: 3D-Consistent Image Generation with Text-to-Image Models

1 code implementation 4 Mar 2024 Lukas Höllein, Aljaž Božič, Norman Müller, David Novotny, Hung-Yu Tseng, Christian Richardt, Michael Zollhöfer, Matthias Nießner

In this paper, we present a method that leverages pretrained text-to-image models as a prior and learns to generate multi-view images in a single denoising process from real-world data.

Tasks: Denoising, Image Generation (+1 more)
