no code implementations • 12 Jan 2025 • Wojciech Zielonka, Stephan J. Garbin, Alexandros Lattas, George Kopanas, Paulo Gotardo, Thabo Beeler, Justus Thies, Timo Bolkart
With few input images, SynShot fine-tunes the pretrained synthetic prior to bridge the domain gap, modeling a photorealistic head avatar that generalizes to novel expressions and viewpoints.
no code implementations • 27 Nov 2024 • Shivangi Aneja, Artem Sevastopolsky, Tobias Kirschstein, Justus Thies, Angela Dai, Matthias Nießner
We introduce GaussianSpeech, a novel approach that synthesizes high-fidelity animation sequences of photo-realistic, personalized 3D human head avatars from spoken audio.
no code implementations • 21 Oct 2024 • Malte Prinzler, Egor Zakharov, Vanessa Sklyarova, Berna Kabadayi, Justus Thies
We introduce Joker, a new method for the conditional synthesis of 3D human heads with extreme expressions.
no code implementations • 26 Sep 2024 • Mirela Ostrek, Justus Thies
Rapid advances in the field of generative AI and text-to-image methods in particular have transformed the way we interact with and perceive computer-generated imagery today.
no code implementations • 23 Sep 2024 • Egor Zakharov, Vanessa Sklyarova, Michael Black, Giljoo Nam, Justus Thies, Otmar Hilliges
We introduce a new hair modeling method that uses a dual representation of classical hair strands and 3D Gaussians to produce accurate and realistic strand-based reconstructions from multi-view data.
no code implementations • 5 Jul 2024 • Wojciech Zielonka, Timo Bolkart, Thabo Beeler, Justus Thies
Building on the success of mesh-based 3D morphable face models (3DMM), we define GEM as an ensemble of linear eigenbases for representing the head appearance of a specific subject.
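The core of such an eigenbasis representation can be illustrated with a toy example. This is a minimal sketch of the general idea (a mean vector plus a coefficient-weighted sum of eigenvectors), not the paper's actual GEM implementation; all data below is made up.

```python
# Illustrative linear eigenbasis appearance model (toy data, hypothetical
# names): an appearance vector is reconstructed as the mean appearance plus
# a coefficient-weighted sum of eigenvectors.

def reconstruct(mean, basis, coeffs):
    """mean: list[float]; basis: list of eigenvectors; coeffs: one weight per eigenvector."""
    out = list(mean)
    for c, vec in zip(coeffs, basis):
        for i, v in enumerate(vec):
            out[i] += c * v
    return out

# Toy 3-dimensional "appearance" with two eigenvectors.
mean = [0.5, 0.5, 0.5]
basis = [[1.0, 0.0, 0.0],
         [0.0, 1.0, -1.0]]
print(reconstruct(mean, basis, [0.2, 0.1]))  # approx. [0.7, 0.6, 0.4]
```

In a real system the vectors would be high-dimensional texture or Gaussian-parameter arrays and the coefficients would be regressed per frame; the reconstruction step stays this simple linear combination.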
no code implementations • 16 Apr 2024 • Hongwei Yi, Justus Thies, Michael J. Black, Xue Bin Peng, Davis Rempe
Our approach begins with pre-training a scene-agnostic text-to-motion diffusion model, emphasizing goal-reaching constraints on large-scale motion-capture datasets.
no code implementations • CVPR 2024 • Vanessa Sklyarova, Egor Zakharov, Otmar Hilliges, Michael J. Black, Justus Thies
We present HAAR, a new strand-based generative model for 3D human hairstyles.

no code implementations • 22 Dec 2023 • Mirela Ostrek, Carol O'Sullivan, Michael J. Black, Justus Thies
We present ESP, a novel method for context-aware full-body generation that enables photo-realistic synthesis and inpainting of people wearing clothing that is semantically appropriate for the scene depicted in an input photograph.
1 code implementation • 18 Dec 2023 • Vanessa Sklyarova, Egor Zakharov, Otmar Hilliges, Michael J. Black, Justus Thies
We present HAAR, a new strand-based generative model for 3D human hairstyles.
1 code implementation • CVPR 2024 • Shivangi Aneja, Justus Thies, Angela Dai, Matthias Nießner
We propose a new latent diffusion model for this task, operating in the expression space of neural parametric head models, to synthesize audio-driven realistic head sequences.
no code implementations • 8 Dec 2023 • Jalees Nehvi, Berna Kabadayi, Julien Valentin, Justus Thies
In contrast to this, we propose a template-based tracking of the torso, head and facial expressions which allows us to cover the appearance of a human subject from all sides.
no code implementations • CVPR 2024 • Jiapeng Tang, Angela Dai, Yinyu Nie, Lev Markhasin, Justus Thies, Matthias Niessner
We introduce Diffusion Parametric Head Models (DPHMs), a generative model that enables robust volumetric head reconstruction and tracking from monocular depth sequences.
no code implementations • 1 Dec 2023 • Balamurugan Thambiraja, Sadegh Aliakbarian, Darren Cosker, Justus Thies
To enable stochasticity as well as motion editing, we propose a lightweight audio-conditioned diffusion model for 3D facial motion.
no code implementations • 22 Nov 2023 • Berna Kabadayi, Wojciech Zielonka, Bharat Lal Bhatnagar, Gerard Pons-Moll, Justus Thies
For controlling the model, we learn a mapping from 3DMM facial expression parameters to the latent space of the generative model.
no code implementations • 14 Nov 2023 • Wojciech Zielonka, Timur Bagautdinov, Shunsuke Saito, Michael Zollhöfer, Justus Thies, Javier Romero
We present Drivable 3D Gaussian Avatars (D3GA), the first 3D controllable model for human bodies rendered with Gaussian splats.
no code implementations • 13 Sep 2023 • Hao Zhang, Yao Feng, Peter Kulits, Yandong Wen, Justus Thies, Michael J. Black
We argue that existing methods are limited because they employ a monolithic modeling approach, using a single representation for the head, face, hair, and accessories.
no code implementations • CVPR 2024 • Soubhik Sanyal, Partha Ghosh, Jinlong Yang, Michael J. Black, Justus Thies, Timo Bolkart
We use intermediate activations of the learned geometry model to condition our texture generator.
no code implementations • 21 Aug 2023 • Tingting Liao, Hongwei Yi, Yuliang Xiu, Jiaxiang Tang, Yangyi Huang, Justus Thies, Michael J. Black
We introduce TADA, a simple-yet-effective approach that takes textual descriptions and produces expressive 3D avatars with high-quality geometry and lifelike textures that can be animated and rendered with traditional graphics pipelines.
1 code implementation • 16 Aug 2023 • Yangyi Huang, Hongwei Yi, Yuliang Xiu, Tingting Liao, Jiaxiang Tang, Deng Cai, Justus Thies
But how can we effectively capture, from a single image, all the visual attributes of an individual that are sufficient to reconstruct unseen areas (e.g., the back view)?
no code implementations • ICCV 2023 • Zhaoqi Su, Liangxiao Hu, Siyou Lin, Hongwen Zhang, Shengping Zhang, Justus Thies, Yebin Liu
In contrast to previous work on 3D avatar reconstruction, our method is able to generalize to novel poses with realistic dynamic cloth deformations.
1 code implementation • CVPR 2024 • Jiapeng Tang, Yinyu Nie, Lev Markhasin, Angela Dai, Justus Thies, Matthias Nießner
We introduce a diffusion network to synthesize a collection of 3D indoor objects by denoising a set of unordered object attributes.
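The set-denoising idea can be sketched in a few lines. The following is an illustrative toy, not the paper's network: the "denoiser" is a stand-in that nudges every element toward the set mean, and because the same function is applied to each element, the update is independent of element order, which is the key property for denoising an unordered set.

```python
import random

# Toy denoising loop over an unordered set of object-attribute vectors.
# A real model would use a learned, permutation-equivariant network; here a
# hand-written stand-in predicts "noise" as each element's offset from the
# set mean, so every step pulls the set toward a shared configuration.

def denoiser(x_set):
    n, dim = len(x_set), len(x_set[0])
    mean = [sum(v[i] for v in x_set) / n for i in range(dim)]
    return [[v[i] - mean[i] for i in range(dim)] for v in x_set]

def sample(num_objects=4, dim=2, steps=20, seed=0):
    rng = random.Random(seed)
    # Start from pure noise, one attribute vector per object.
    x = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(num_objects)]
    for _ in range(steps):
        eps = denoiser(x)
        x = [[v[i] - 0.1 * eps[k][i] for i in range(dim)]
             for k, v in enumerate(x)]
    return x

objs = sample()
print(objs)  # four 2-D attribute vectors, contracted toward a common layout
```

Shuffling the input set and rerunning yields the same vectors up to order, which is the behavior an unordered-set diffusion model is designed to have.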
no code implementations • ICCV 2023 • Balamurugan Thambiraja, Ikhsanul Habibie, Sadegh Aliakbarian, Darren Cosker, Christian Theobalt, Justus Thies
To address this, we present Imitator, a speech-driven facial expression synthesis method, which learns identity-specific details from a short input video and produces novel facial expressions matching the identity-specific speaking style and facial idiosyncrasies of the target actor.
no code implementations • CVPR 2023 • Hongwei Yi, Chun-Hao P. Huang, Shashank Tripathi, Lea Hering, Justus Thies, Michael J. Black
We propose MIME (Mining Interaction and Movement to infer 3D Environments), which is a generative model of indoor scenes that produces furniture layouts that are consistent with the human movement.
Ranked #2 on Indoor Scene Synthesis on PRO-teXt
no code implementations • CVPR 2023 • Dejan Azinović, Olivier Maury, Christophe Hery, Matthias Nießner, Justus Thies
We propose a novel method for high-quality facial texture reconstruction from RGB images, using a capturing routine based on a single smartphone equipped with an inexpensive polarization foil.
1 code implementation • 2 Dec 2022 • Shivangi Aneja, Justus Thies, Angela Dai, Matthias Nießner
Controllable editing and manipulation are given by language prompts to adapt texture and expression of the 3D morphable model.
no code implementations • CVPR 2023 • Malte Prinzler, Otmar Hilliges, Justus Thies
We present Depth-aware Image-based NEural Radiance fields (DINER).
1 code implementation • CVPR 2023 • Wojciech Zielonka, Timo Bolkart, Justus Thies
In addition, it allows for the interactive rendering of novel poses and expressions.
no code implementations • 11 Oct 2022 • Jiapeng Tang, Lev Markhasin, Bi Wang, Justus Thies, Matthias Nießner
To this end, we introduce transformer-based deformation networks that represent a shape deformation as a composition of local surface deformations.
1 code implementation • 13 Apr 2022 • Wojciech Zielonka, Timo Bolkart, Justus Thies
To this end, we take advantage of a face recognition network pretrained on a large-scale 2D image dataset, which provides distinct features for different faces and is robust to expression, illumination, and camera changes.
Ranked #2 on 3D Face Reconstruction on NoW Benchmark
no code implementations • 5 Apr 2022 • Yawar Siddiqui, Justus Thies, Fangchang Ma, Qi Shan, Matthias Nießner, Angela Dai
Texture cues on 3D objects are key to compelling visual representations, with the possibility to create high visual fidelity with inherent spatial consistency across different views.
1 code implementation • CVPR 2022 • Hongwei Yi, Chun-Hao P. Huang, Dimitrios Tzionas, Muhammed Kocabas, Mohamed Hassan, Siyu Tang, Justus Thies, Michael J. Black
In fact, we demonstrate that these human-scene interactions (HSIs) can be leveraged to improve the 3D reconstruction of a scene from a monocular RGB video.
no code implementations • CVPR 2022 • Philip-William Grassal, Malte Prinzler, Titus Leistner, Carsten Rother, Matthias Nießner, Justus Thies
We present Neural Head Avatars, a novel neural representation that explicitly models the surface geometry and appearance of an animatable human avatar, for use in AR/VR teleconferencing and in other movie- and games-industry applications that rely on a digital human.
1 code implementation • 10 Nov 2021 • Ayush Tewari, Justus Thies, Ben Mildenhall, Pratul Srinivasan, Edgar Tretschk, Yifan Wang, Christoph Lassner, Vincent Sitzmann, Ricardo Martin-Brualla, Stephen Lombardi, Tomas Simon, Christian Theobalt, Matthias Niessner, Jonathan T. Barron, Gordon Wetzstein, Michael Zollhoefer, Vladislav Golyanik
The reconstruction of such a scene representation from observations using differentiable rendering losses is known as inverse graphics or inverse rendering.
no code implementations • 7 Jul 2021 • Mohamed Elgharib, Mohit Mendiratta, Justus Thies, Matthias Nießner, Hans-Peter Seidel, Ayush Tewari, Vladislav Golyanik, Christian Theobalt
Even holding a mobile phone camera in front of the face while sitting for a long duration is not convenient.
1 code implementation • NeurIPS 2021 • Aljaž Božič, Pablo Palafox, Justus Thies, Angela Dai, Matthias Nießner
We introduce TransformerFusion, a transformer-based 3D scene reconstruction approach.
2 code implementations • CVPR 2022 • Dejan Azinović, Ricardo Martin-Brualla, Dan B Goldman, Matthias Nießner, Justus Thies
Obtaining high-quality 3D reconstructions of room-scale scenes is of paramount importance for upcoming applications in AR or VR.
1 code implementation • ICCV 2021 • Andrei Burov, Matthias Nießner, Justus Thies
To this end, we explicitly model the surface of the person using a multi-layer perceptron (MLP) which is embedded into the canonical space of the SMPL body model.
1 code implementation • ICCV 2021 • Pablo Palafox, Aljaž Božič, Justus Thies, Matthias Nießner, Angela Dai
Crucially, once learned, our neural parametric models of shape and pose enable optimization over the learned spaces to fit to new observations, similar to the fitting of a traditional parametric model, e.g., SMPL.
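Fitting-by-optimization over a learned latent space can be illustrated with a toy example. Here a fixed linear "decoder" stands in for the learned shape/pose networks (illustrative only; the paper's decoders are neural and the optimization uses autodiff): we run gradient descent on the latent code so the decoded output matches an observation.

```python
# Sketch of fitting a parametric model to a new observation by optimizing
# over its latent space. decode(B, z) plays the role of a learned decoder;
# fit() minimizes ||decode(z) - obs||^2 by gradient descent on z.

def decode(B, z):
    return [sum(B[i][j] * z[j] for j in range(len(z))) for i in range(len(B))]

def fit(B, obs, steps=200, lr=0.1):
    z = [0.0] * len(B[0])
    for _ in range(steps):
        res = [d - o for d, o in zip(decode(B, z), obs)]        # residual
        grad = [2 * sum(B[i][j] * res[i] for i in range(len(B)))
                for j in range(len(z))]                          # d loss / d z
        z = [zj - lr * g for zj, g in zip(z, grad)]
    return z

B = [[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]]   # toy 3x2 linear decoder
obs = [1.0, 2.0, 2.0]                       # observation to fit
print(fit(B, obs))  # converges to approx. [1.0, 1.0], which decodes to obs
```

With a neural decoder the loop is identical in shape; only the gradient comes from backpropagation instead of the closed-form expression above.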
1 code implementation • ICCV 2021 • Yawar Siddiqui, Justus Thies, Fangchang Ma, Qi Shan, Matthias Nießner, Angela Dai
3D reconstruction of large scenes is a challenging problem due to the high-complexity nature of the solution space, in particular for generative neural networks.
1 code implementation • CVPR 2021 • Guy Gafni, Justus Thies, Michael Zollhöfer, Matthias Nießner
We present dynamic neural radiance fields for modeling the appearance and dynamics of a human face.
1 code implementation • ICCV 2021 • Davide Cozzolino, Andreas Rössler, Justus Thies, Matthias Nießner, Luisa Verdoliva
A major challenge in DeepFake forgery detection is that state-of-the-art algorithms are mostly trained to detect a specific fake method.
1 code implementation • CVPR 2021 • Aljaž Božič, Pablo Palafox, Michael Zollhöfer, Justus Thies, Angela Dai, Matthias Nießner
We introduce Neural Deformation Graphs for globally-consistent deformation tracking and 3D reconstruction of non-rigid objects.
2 code implementations • CVPR 2016 • Justus Thies, Michael Zollhöfer, Marc Stamminger, Christian Theobalt, Matthias Nießner
Our goal is to animate the facial expressions of the target video by a source actor and re-render the manipulated output video in a photo-realistic fashion.
1 code implementation • 20 Jul 2020 • Sebastian Weiss, Mustafa Işık, Justus Thies, Rüdiger Westermann
A central challenge in data visualization is to understand which data samples are required to generate an image of a data set in which the relevant information is encoded.
no code implementations • 29 Jun 2020 • Hassan Abu Alhaija, Siva Karthik Mustikovela, Justus Thies, Varun Jampani, Matthias Nießner, Andreas Geiger, Carsten Rother
Neural rendering techniques promise efficient photo-realistic image synthesis while at the same time providing rich control over scene parameters by learning the physical image formation process.
1 code implementation • CVPR 2021 • Angela Dai, Yawar Siddiqui, Justus Thies, Julien Valentin, Matthias Nießner
We present SPSG, a novel approach to generate high-quality, colored 3D models of scenes from RGB-D scan observations by learning to infer unobserved scene geometry and color in a self-supervised fashion.
1 code implementation • NeurIPS 2020 • Aljaž Božič, Pablo Palafox, Michael Zollhöfer, Angela Dai, Justus Thies, Matthias Nießner
We introduce a novel, end-to-end learnable, differentiable non-rigid tracker that enables state-of-the-art non-rigid reconstruction by a learned robust optimization.
no code implementations • 8 Apr 2020 • Ayush Tewari, Ohad Fried, Justus Thies, Vincent Sitzmann, Stephen Lombardi, Kalyan Sunkavalli, Ricardo Martin-Brualla, Tomas Simon, Jason Saragih, Matthias Nießner, Rohit Pandey, Sean Fanello, Gordon Wetzstein, Jun-Yan Zhu, Christian Theobalt, Maneesh Agrawala, Eli Shechtman, Dan B. Goldman, Michael Zollhöfer
Neural rendering is a new and rapidly emerging field that combines generative machine learning techniques with physical knowledge from computer graphics, e.g., by the integration of differentiable rendering into network training.
1 code implementation • CVPR 2020 • Jingwei Huang, Justus Thies, Angela Dai, Abhijit Kundu, Chiyu Max Jiang, Leonidas Guibas, Matthias Nießner, Thomas Funkhouser
In this work, we present a novel approach for color texture generation using a conditional adversarial loss obtained from weakly-supervised views.
no code implementations • ICLR 2020 • Justus Thies, Michael Zollhöfer, Christian Theobalt, Marc Stamminger, Matthias Nießner
Based on this 3D proxy, the appearance of a captured view can be warped into a new target view as in classical image-based rendering.
1 code implementation • ECCV 2020 • Justus Thies, Mohamed Elgharib, Ayush Tewari, Christian Theobalt, Matthias Nießner
Neural Voice Puppetry has a variety of use-cases, including audio-driven video avatars, video dubbing, and text-driven video synthesis of a talking head.
no code implementations • 27 Nov 2019 • Davide Cozzolino, Justus Thies, Andreas Rössler, Matthias Nießner, Luisa Verdoliva
Given a GAN-generated image, we insert the traces of a specific camera model into it and deceive state-of-the-art detectors into believing the image was acquired by that model.
4 code implementations • 28 Apr 2019 • Justus Thies, Michael Zollhöfer, Matthias Nießner
Similar to traditional textures, neural textures are stored as maps on top of 3D mesh proxies; however, the high-dimensional feature maps contain significantly more information, which can be interpreted by our new deferred neural rendering pipeline.
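The texture-lookup side of this idea can be sketched compactly. This is an illustrative toy, not the paper's pipeline: a texture map stores a feature vector per texel instead of an RGB color, a rasterized UV coordinate bilinearly interpolates those features, and a separate network (omitted here) would decode the interpolated features to color.

```python
# Toy neural-texture lookup: bilinear interpolation of per-texel feature
# vectors at a continuous UV coordinate (2x2 map with 4-D features).

def sample_neural_texture(tex, u, v):
    """tex[y][x] is a feature vector; u, v in [0, 1]."""
    h, w = len(tex), len(tex[0])
    x, y = u * (w - 1), v * (h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    dim = len(tex[0][0])
    return [tex[y0][x0][i] * (1 - fx) * (1 - fy) +
            tex[y0][x1][i] * fx * (1 - fy) +
            tex[y1][x0][i] * (1 - fx) * fy +
            tex[y1][x1][i] * fx * fy
            for i in range(dim)]

tex = [[[0.0] * 4, [1.0] * 4],
       [[2.0] * 4, [3.0] * 4]]
print(sample_neural_texture(tex, 0.5, 0.5))  # [1.5, 1.5, 1.5, 1.5]
```

The "deferred" part is that this sampled feature vector, rather than a final color, is what gets fed to the downstream rendering network.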
14 code implementations • 25 Jan 2019 • Andreas Rössler, Davide Cozzolino, Luisa Verdoliva, Christian Riess, Justus Thies, Matthias Nießner
In particular, the benchmark is based on DeepFakes, Face2Face, FaceSwap and NeuralTextures as prominent representatives for facial manipulations at random compression level and size.
Ranked #1 on DeepFake Detection on FaceForensics
no code implementations • 6 Dec 2018 • Davide Cozzolino, Justus Thies, Andreas Rössler, Christian Riess, Matthias Nießner, Luisa Verdoliva
We devise a learning-based forensic detector that adapts well to new domains, i.e., novel manipulation methods, and can handle scenarios where only a handful of fake examples are available during training.
1 code implementation • CVPR 2019 • Vincent Sitzmann, Justus Thies, Felix Heide, Matthias Nießner, Gordon Wetzstein, Michael Zollhöfer
In this work, we address the lack of 3D understanding of generative neural networks by introducing a persistent 3D feature embedding for view synthesis.
no code implementations • 26 Nov 2018 • Justus Thies, Michael Zollhöfer, Christian Theobalt, Marc Stamminger, Matthias Nießner
Based on this 3D proxy, the appearance of a captured view can be warped into a new target view as in classical image-based rendering.
no code implementations • 29 May 2018 • Hyeongwoo Kim, Pablo Garrido, Ayush Tewari, Weipeng Xu, Justus Thies, Matthias Nießner, Patrick Pérez, Christian Richardt, Michael Zollhöfer, Christian Theobalt
In order to enable source-to-target video re-animation, we render a synthetic target video with the reconstructed head animation parameters from a source video, and feed it into the trained network -- thus taking full control of the target.
no code implementations • 29 May 2018 • Justus Thies, Michael Zollhöfer, Christian Theobalt, Marc Stamminger, Matthias Nießner
We propose HeadOn, the first real-time source-to-target reenactment approach for complete human portrait videos that enables transfer of torso and head motion, face expression, and eye gaze.
no code implementations • 24 Mar 2018 • Andreas Rössler, Davide Cozzolino, Luisa Verdoliva, Christian Riess, Justus Thies, Matthias Nießner
Research on the detection of face manipulations has been seriously hampered by the lack of adequate datasets.
no code implementations • 18 Dec 2017 • Justus Thies, Michael Zollhöfer, Matthias Nießner
We present IMU2Face, a gesture-driven facial reenactment system.
no code implementations • CVPR 2018 • Hyeongwoo Kim, Michael Zollhöfer, Ayush Tewari, Justus Thies, Christian Richardt, Christian Theobalt
In contrast, we propose to recover high-quality facial pose, shape, expression, reflectance and illumination using a deep neural network that is trained using a large, synthetically created training corpus.
no code implementations • 11 Oct 2016 • Justus Thies, Michael Zollhöfer, Marc Stamminger, Christian Theobalt, Matthias Nießner
Based on reenactment of a prerecorded stereo video of the person without the HMD, FaceVR incorporates photo-realistic re-rendering in real time, thus allowing artificial modifications of face and eye appearances.