Search Results for author: Justus Thies

Found 57 papers, 24 papers with code

Generating Human Interaction Motions in Scenes with Text Control

no code implementations 16 Apr 2024 Hongwei Yi, Justus Thies, Michael J. Black, Xue Bin Peng, Davis Rempe

Our approach begins with pre-training a scene-agnostic text-to-motion diffusion model, emphasizing goal-reaching constraints on large-scale motion-capture datasets.
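
To make this concrete, the sketch below shows how a goal condition could enter a DDPM-style sampling loop for motion synthesis. This is a minimal illustration, not the authors' implementation: the `denoiser` interface, the linear beta schedule, and the motion dimensions are all assumptions.

```python
import torch

def sample_motion(denoiser, text_emb, goal_pos, T=1000, frames=60, dim=63):
    """DDPM-style ancestral sampling of a motion sequence, conditioned on
    a text embedding and a goal position (hypothetical interface)."""
    betas = torch.linspace(1e-4, 0.02, T)          # assumed noise schedule
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn(1, frames, dim)                # start from pure noise
    for t in reversed(range(T)):
        # predicted noise, conditioned on text and the goal location
        eps = denoiser(x, torch.tensor([t]), text_emb, goal_pos)
        x = (x - betas[t] / torch.sqrt(1.0 - alpha_bars[t]) * eps) \
            / torch.sqrt(alphas[t])
        if t > 0:
            x = x + torch.sqrt(betas[t]) * torch.randn_like(x)
    return x                                       # (1, frames, dim) motion
```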

Denoising · Human-Object Interaction Detection +1

Environment-Specific People

no code implementations 22 Dec 2023 Mirela Ostrek, Soubhik Sanyal, Carol O'Sullivan, Michael J. Black, Justus Thies

The method is analyzed quantitatively and qualitatively, and we show that ESP outperforms the state of the art on the task of contextual full-body generation.

Image Generation

FaceTalk: Audio-Driven Motion Diffusion for Neural Parametric Head Models

1 code implementation 13 Dec 2023 Shivangi Aneja, Justus Thies, Angela Dai, Matthias Nießner

We propose a new latent diffusion model for this task, operating in the expression space of neural parametric head models, to synthesize audio-driven realistic head sequences.
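
As a point of reference, the training objective of such a latent diffusion model is typically the standard noise-prediction loss, here sketched on expression-space latents with aligned audio features as conditioning. Tensor shapes and the `denoiser` interface are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def diffusion_loss(denoiser, expr_latents, audio_feats, alpha_bars):
    """Noise-prediction loss on neural-parametric expression latents.
    expr_latents: (B, T, D) expression codes; audio_feats: (B, T, A)
    aligned audio features (both hypothetical shapes)."""
    B = expr_latents.shape[0]
    t = torch.randint(0, len(alpha_bars), (B,))   # random timestep per sample
    ab = alpha_bars[t].view(B, 1, 1)
    noise = torch.randn_like(expr_latents)
    noisy = ab.sqrt() * expr_latents + (1.0 - ab).sqrt() * noise
    return F.mse_loss(denoiser(noisy, t, audio_feats), noise)
```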

3D Face Animation · Audio Synthesis +1

360° Volumetric Portrait Avatar

no code implementations 8 Dec 2023 Jalees Nehvi, Berna Kabadayi, Julien Valentin, Justus Thies

In contrast, we propose template-based tracking of the torso, head, and facial expressions, which allows us to cover the appearance of a human subject from all sides.

Human Parsing · Monocular Reconstruction

DPHMs: Diffusion Parametric Head Models for Depth-based Tracking

no code implementations 2 Dec 2023 Jiapeng Tang, Angela Dai, Yinyu Nie, Lev Markhasin, Justus Thies, Matthias Nießner

We introduce Diffusion Parametric Head Models (DPHMs), a generative model that enables robust volumetric head reconstruction and tracking from monocular depth sequences.

3DiFACE: Diffusion-based Speech-driven 3D Facial Animation and Editing

no code implementations 1 Dec 2023 Balamurugan Thambiraja, Sadegh Aliakbarian, Darren Cosker, Justus Thies

To enable stochasticity as well as motion editing, we propose a lightweight audio-conditioned diffusion model for 3D facial motion.

GAN-Avatar: Controllable Personalized GAN-based Human Head Avatar

no code implementations 22 Nov 2023 Berna Kabadayi, Wojciech Zielonka, Bharat Lal Bhatnagar, Gerard Pons-Moll, Justus Thies

For controlling the model, we learn a mapping from 3DMM facial expression parameters to the latent space of the generative model.
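
A minimal sketch of such a mapping network is shown below, assuming a plain MLP; the paper only states that a mapping is learned, so the layer sizes and dimensions here are illustrative.

```python
import torch
import torch.nn as nn

class ExpressionMapper(nn.Module):
    """Maps 3DMM expression parameters to a GAN latent code
    (hypothetical sizes; the actual architecture is the paper's own)."""
    def __init__(self, expr_dim=100, latent_dim=512, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(expr_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, latent_dim),
        )

    def forward(self, expr_params):    # (B, expr_dim) 3DMM coefficients
        return self.net(expr_params)   # (B, latent_dim) generator input
```

At inference time, one would evaluate `generator(mapper(expr_params))` to render the avatar for a desired expression.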

Image Generation

Drivable 3D Gaussian Avatars

no code implementations 14 Nov 2023 Wojciech Zielonka, Timur Bagautdinov, Shunsuke Saito, Michael Zollhöfer, Justus Thies, Javier Romero

We present Drivable 3D Gaussian Avatars (D3GA), the first 3D controllable model for human bodies rendered with Gaussian splats.

Text-Guided Generation and Editing of Compositional 3D Avatars

no code implementations 13 Sep 2023 Hao Zhang, Yao Feng, Peter Kulits, Yandong Wen, Justus Thies, Michael J. Black

We argue that existing methods are limited because they employ a monolithic modeling approach, using a single representation for the head, face, hair, and accessories.

Text-Guided Generation · Virtual Try-on

TADA! Text to Animatable Digital Avatars

no code implementations 21 Aug 2023 Tingting Liao, Hongwei Yi, Yuliang Xiu, Jiaxiang Tang, Yangyi Huang, Justus Thies, Michael J. Black

We introduce TADA, a simple-yet-effective approach that takes textual descriptions and produces expressive 3D avatars with high-quality geometry and lifelike textures that can be animated and rendered with traditional graphics pipelines.

TeCH: Text-guided Reconstruction of Lifelike Clothed Humans

1 code implementation 16 Aug 2023 Yangyi Huang, Hongwei Yi, Yuliang Xiu, Tingting Liao, Jiaxiang Tang, Deng Cai, Justus Thies

But how can we effectively capture, from a single image, all the visual attributes of an individual that are sufficient to reconstruct unseen areas (e.g., the back view)?

Descriptive · Question Answering +1

CaPhy: Capturing Physical Properties for Animatable Human Avatars

no code implementations ICCV 2023 Zhaoqi Su, Liangxiao Hu, Siyou Lin, Hongwen Zhang, Shengping Zhang, Justus Thies, Yebin Liu

In contrast to previous work on 3D avatar reconstruction, our method is able to generalize to novel poses with realistic dynamic cloth deformations.

DiffuScene: Denoising Diffusion Models for Generative Indoor Scene Synthesis

no code implementations 24 Mar 2023 Jiapeng Tang, Yinyu Nie, Lev Markhasin, Angela Dai, Justus Thies, Matthias Nießner

We introduce a diffusion network to synthesize a collection of 3D indoor objects by denoising a set of unordered object attributes.
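
Because the objects form an unordered set, the denoiser should be permutation-equivariant; a Transformer encoder without positional encodings has exactly this property. The stand-in below is an assumption about the architecture, with illustrative sizes.

```python
import torch
import torch.nn as nn

class SetDenoiser(nn.Module):
    """Predicts per-object noise for a set of attribute vectors.
    No positional encoding is used, so the network is
    permutation-equivariant over objects (illustrative design)."""
    def __init__(self, attr_dim=32, width=128, layers=4):
        super().__init__()
        self.embed = nn.Linear(attr_dim + 1, width)   # +1 for the timestep
        block = nn.TransformerEncoderLayer(width, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(block, num_layers=layers)
        self.out = nn.Linear(width, attr_dim)

    def forward(self, attrs, t):       # attrs: (B, N, attr_dim), t: (B,)
        tt = t.float().view(-1, 1, 1).expand(-1, attrs.shape[1], 1)
        h = self.embed(torch.cat([attrs, tt], dim=-1))
        return self.out(self.encoder(h))
```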

Denoising · Indoor Scene Synthesis +1

Imitator: Personalized Speech-driven 3D Facial Animation

no code implementations ICCV 2023 Balamurugan Thambiraja, Ikhsanul Habibie, Sadegh Aliakbarian, Darren Cosker, Christian Theobalt, Justus Thies

To address this, we present Imitator, a speech-driven facial expression synthesis method, which learns identity-specific details from a short input video and produces novel facial expressions matching the identity-specific speaking style and facial idiosyncrasies of the target actor.

MIME: Human-Aware 3D Scene Generation

no code implementations CVPR 2023 Hongwei Yi, Chun-Hao P. Huang, Shashank Tripathi, Lea Hering, Justus Thies, Michael J. Black

We propose MIME (Mining Interaction and Movement to infer 3D Environments), a generative model of indoor scenes that produces furniture layouts consistent with human movement.

2D Semantic Segmentation task 1 (8 classes) · 3D Semantic Scene Completion +2

ClipFace: Text-guided Editing of Textured 3D Morphable Models

1 code implementation 2 Dec 2022 Shivangi Aneja, Justus Thies, Angela Dai, Matthias Nießner

Controllable editing and manipulation are performed via language prompts that adapt the texture and expression of the 3D morphable model.

Texture Synthesis

High-Res Facial Appearance Capture from Polarized Smartphone Images

no code implementations CVPR 2023 Dejan Azinović, Olivier Maury, Christophe Hery, Matthias Nießner, Justus Thies

We propose a novel method for high-quality facial texture reconstruction from RGB images, using a capture routine based on a single smartphone that we equip with an inexpensive polarization foil.

Vocal Bursts Intensity Prediction

Instant Volumetric Head Avatars

1 code implementation CVPR 2023 Wojciech Zielonka, Timo Bolkart, Justus Thies

In addition, it allows for the interactive rendering of novel poses and expressions.

Face Model

Neural Shape Deformation Priors

no code implementations 11 Oct 2022 Jiapeng Tang, Lev Markhasin, Bi Wang, Justus Thies, Matthias Nießner

To this end, we introduce transformer-based deformation networks that represent a shape deformation as a composition of local surface deformations.

Towards Metrical Reconstruction of Human Faces

1 code implementation 13 Apr 2022 Wojciech Zielonka, Timo Bolkart, Justus Thies

To this end, we take advantage of a face recognition network pretrained on a large-scale 2D image dataset, which provides distinct features for different faces and is robust to expression, illumination, and camera changes.
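
One way such recognition features can drive a metrical reconstruction is as the input to a shape regressor. The sketch below assumes a 512-d identity embedding and a fixed number of face-model shape coefficients; both numbers, and the MLP head, are illustrative rather than the paper's design.

```python
import torch
import torch.nn as nn

class ShapeFromIdentity(nn.Module):
    """Regresses 3D face shape coefficients from a pretrained
    face-recognition embedding (hypothetical sizes)."""
    def __init__(self, feat_dim=512, n_shape=300, hidden=300):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_shape),
        )

    def forward(self, id_embedding):   # (B, feat_dim) recognition features
        return self.head(id_embedding) # (B, n_shape) face-model coefficients
```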

2k · 3D Face Reconstruction +1

Texturify: Generating Textures on 3D Shape Surfaces

no code implementations 5 Apr 2022 Yawar Siddiqui, Justus Thies, Fangchang Ma, Qi Shan, Matthias Nießner, Angela Dai

Texture cues on 3D objects are key to compelling visual representations, with the possibility to create high visual fidelity with inherent spatial consistency across different views.

Human-Aware Object Placement for Visual Environment Reconstruction

1 code implementation CVPR 2022 Hongwei Yi, Chun-Hao P. Huang, Dimitrios Tzionas, Muhammed Kocabas, Mohamed Hassan, Siyu Tang, Justus Thies, Michael J. Black

In fact, we demonstrate that these human-scene interactions (HSIs) can be leveraged to improve the 3D reconstruction of a scene from a monocular RGB video.

3D Reconstruction · Object

Neural Head Avatars from Monocular RGB Videos

no code implementations CVPR 2022 Philip-William Grassal, Malte Prinzler, Titus Leistner, Carsten Rother, Matthias Nießner, Justus Thies

We present Neural Head Avatars, a novel neural representation that explicitly models the surface geometry and appearance of an animatable human avatar, for use in AR/VR teleconferencing and other movie- or games-industry applications that rely on a digital human.

Novel View Synthesis

Neural RGB-D Surface Reconstruction

2 code implementations CVPR 2022 Dejan Azinović, Ricardo Martin-Brualla, Dan B Goldman, Matthias Nießner, Justus Thies

Obtaining high-quality 3D reconstructions of room-scale scenes is of paramount importance for upcoming applications in AR or VR.

Image Generation · Mixed Reality +2

Dynamic Surface Function Networks for Clothed Human Bodies

1 code implementation ICCV 2021 Andrei Burov, Matthias Nießner, Justus Thies

To this end, we explicitly model the surface of the person using a multi-layer perceptron (MLP) which is embedded into the canonical space of the SMPL body model.
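
A minimal version of such a canonical-space MLP is sketched below: posed-space points would first be mapped back to the canonical SMPL pose (e.g., via inverse linear blend skinning) and then queried. The output split into an offset and a color, and all layer sizes, are assumptions.

```python
import torch
import torch.nn as nn

class CanonicalSurfaceMLP(nn.Module):
    """Predicts a surface offset and color for points given in the
    canonical pose space of a body model such as SMPL (illustrative)."""
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3 + 3),          # 3D offset + RGB
        )

    def forward(self, canonical_pts):          # (N, 3) canonical coordinates
        out = self.net(canonical_pts)
        offset, rgb = out[:, :3], torch.sigmoid(out[:, 3:])
        return offset, rgb
```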

NPMs: Neural Parametric Models for 3D Deformable Shapes

1 code implementation ICCV 2021 Pablo Palafox, Aljaž Božič, Justus Thies, Matthias Nießner, Angela Dai

Crucially, once learned, our neural parametric models of shape and pose enable optimization over the learned spaces to fit to new observations, similar to the fitting of a traditional parametric model, e.g., SMPL.
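
The fitting described here amounts to test-time optimization of the latent codes against an observation. A minimal sketch with Adam follows; the `decode_sdf` interface, code sizes, and the latent regularizer are assumptions.

```python
import torch

def fit_latents(decode_sdf, obs_pts, obs_sdf, steps=500, lr=5e-3):
    """Fits learned shape and pose codes to observed SDF samples,
    analogous to fitting a classical parametric model.
    decode_sdf(s, p, pts) -> per-point SDF is an assumed interface."""
    s = torch.zeros(1, 256, requires_grad=True)    # shape code
    p = torch.zeros(1, 128, requires_grad=True)    # pose code
    opt = torch.optim.Adam([s, p], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        data_term = (decode_sdf(s, p, obs_pts) - obs_sdf).abs().mean()
        prior_term = 1e-4 * (s.square().sum() + p.square().sum())
        (data_term + prior_term).backward()
        opt.step()
    return s.detach(), p.detach()
```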

Pose Transfer

RetrievalFuse: Neural 3D Scene Reconstruction with a Database

1 code implementation ICCV 2021 Yawar Siddiqui, Justus Thies, Fangchang Ma, Qi Shan, Matthias Nießner, Angela Dai

3D reconstruction of large scenes is a challenging problem due to the high-complexity nature of the solution space, in particular for generative neural networks.

3D Reconstruction · 3D Scene Reconstruction +3

ID-Reveal: Identity-aware DeepFake Video Detection

1 code implementation ICCV 2021 Davide Cozzolino, Andreas Rössler, Justus Thies, Matthias Nießner, Luisa Verdoliva

A major challenge in DeepFake forgery detection is that state-of-the-art algorithms are mostly trained to detect a specific fake method.

Face Swapping · Metric Learning

Face2Face: Real-time Face Capture and Reenactment of RGB Videos

2 code implementations CVPR 2016 Justus Thies, Michael Zollhöfer, Marc Stamminger, Christian Theobalt, Matthias Nießner

Our goal is to animate the facial expressions of the target video by a source actor and re-render the manipulated output video in a photo-realistic fashion.

Learning Adaptive Sampling and Reconstruction for Volume Visualization

1 code implementation 20 Jul 2020 Sebastian Weiss, Mustafa Işık, Justus Thies, Rüdiger Westermann

A central challenge in data visualization is to understand which data samples are required to generate an image of a data set in which the relevant information is encoded.

Data Visualization · Neural Rendering

Intrinsic Autoencoders for Joint Neural Rendering and Intrinsic Image Decomposition

no code implementations 29 Jun 2020 Hassan Abu Alhaija, Siva Karthik Mustikovela, Justus Thies, Varun Jampani, Matthias Nießner, Andreas Geiger, Carsten Rother

Neural rendering techniques promise efficient photo-realistic image synthesis while at the same time providing rich control over scene parameters by learning the physical image formation process.

Image-to-Image Translation · Intrinsic Image Decomposition +1

SPSG: Self-Supervised Photometric Scene Generation from RGB-D Scans

1 code implementation CVPR 2021 Angela Dai, Yawar Siddiqui, Justus Thies, Julien Valentin, Matthias Nießner

We present SPSG, a novel approach to generate high-quality, colored 3D models of scenes from RGB-D scan observations by learning to infer unobserved scene geometry and color in a self-supervised fashion.

3D Reconstruction · Scene Generation

Neural Non-Rigid Tracking

1 code implementation NeurIPS 2020 Aljaž Božič, Pablo Palafox, Michael Zollhöfer, Angela Dai, Justus Thies, Matthias Nießner

We introduce a novel, end-to-end learnable, differentiable non-rigid tracker that enables state-of-the-art non-rigid reconstruction by a learned robust optimization.
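
At the core of such a tracker sits a damped Gauss-Newton solve whose result is differentiable, so gradients can flow back into the learned components. The generic single step below omits the paper's learned correspondence weighting.

```python
import torch

def gauss_newton_step(residual_fn, x, damping=1e-4):
    """One damped Gauss-Newton update for parameters x (D,), where
    residual_fn(x) returns the residual vector (R,). A generic sketch,
    not the paper's learned robust optimizer."""
    J = torch.autograd.functional.jacobian(residual_fn, x)   # (R, D)
    r = residual_fn(x)
    JtJ = J.T @ J + damping * torch.eye(x.numel())  # damped normal equations
    delta = torch.linalg.solve(JtJ, -J.T @ r)
    return x + delta
```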

State of the Art on Neural Rendering

no code implementations 8 Apr 2020 Ayush Tewari, Ohad Fried, Justus Thies, Vincent Sitzmann, Stephen Lombardi, Kalyan Sunkavalli, Ricardo Martin-Brualla, Tomas Simon, Jason Saragih, Matthias Nießner, Rohit Pandey, Sean Fanello, Gordon Wetzstein, Jun-Yan Zhu, Christian Theobalt, Maneesh Agrawala, Eli Shechtman, Dan B. Goldman, Michael Zollhöfer

Neural rendering is a new and rapidly emerging field that combines generative machine learning techniques with physical knowledge from computer graphics, e.g., by the integration of differentiable rendering into network training.

BIG-bench Machine Learning · Image Generation +2

Adversarial Texture Optimization from RGB-D Scans

1 code implementation CVPR 2020 Jingwei Huang, Justus Thies, Angela Dai, Abhijit Kundu, Chiyu Max Jiang, Leonidas Guibas, Matthias Nießner, Thomas Funkhouser

In this work, we present a novel approach for color texture generation using a conditional adversarial loss obtained from weakly-supervised views.
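
The conditional adversarial loss can be sketched as a standard non-saturating GAN objective in which the discriminator compares rendered and captured views under the same conditioning; the `disc` interface is an assumption.

```python
import torch.nn.functional as F

def adversarial_texture_loss(disc, rendered, captured, condition):
    """Non-saturating conditional GAN losses for texture optimization.
    disc(image, condition) -> logits is a hypothetical interface."""
    d_real = disc(captured, condition)
    d_fake = disc(rendered, condition)
    d_loss = F.softplus(-d_real).mean() + F.softplus(d_fake).mean()
    g_loss = F.softplus(-d_fake).mean()   # drives the texture generator
    return d_loss, g_loss
```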

Surface Reconstruction · Texture Synthesis

Image-guided Neural Object Rendering

no code implementations ICLR 2020 Justus Thies, Michael Zollhöfer, Christian Theobalt, Marc Stamminger, Matthias Nießner

Based on this 3D proxy, the appearance of a captured view can be warped into a new target view as in classical image-based rendering.

Image Generation · Object

Neural Voice Puppetry: Audio-driven Facial Reenactment

1 code implementation ECCV 2020 Justus Thies, Mohamed Elgharib, Ayush Tewari, Christian Theobalt, Matthias Nießner

Neural Voice Puppetry has a variety of use-cases, including audio-driven video avatars, video dubbing, and text-driven video synthesis of a talking head.

Face Model · Neural Rendering +2

SpoC: Spoofing Camera Fingerprints

no code implementations 27 Nov 2019 Davide Cozzolino, Justus Thies, Andreas Rössler, Matthias Nießner, Luisa Verdoliva

Given a GAN-generated image, we insert the traces of a specific camera model into it and deceive state-of-the-art detectors into believing the image was acquired by that model.

Demosaicking · Misinformation

Deferred Neural Rendering: Image Synthesis using Neural Textures

4 code implementations 28 Apr 2019 Justus Thies, Michael Zollhöfer, Matthias Nießner

Similar to traditional textures, neural textures are stored as maps on top of 3D mesh proxies; however, the high-dimensional feature maps contain significantly more information, which can be interpreted by our new deferred neural rendering pipeline.
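
A minimal sketch of the two stages named here: bilinear sampling of a learned feature texture at rasterized UV coordinates, followed by a small convolutional decoder that turns the feature image into RGB. Feature depth, resolution, and the decoder are illustrative, not the paper's pipeline.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeferredNeuralRenderer(nn.Module):
    """Samples a learnable neural texture at UV coordinates and decodes
    the resulting feature image to RGB (illustrative sizes)."""
    def __init__(self, channels=16, res=512):
        super().__init__()
        self.texture = nn.Parameter(0.01 * torch.randn(1, channels, res, res))
        self.decoder = nn.Sequential(
            nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, uv):                         # (B, H, W, 2) in [-1, 1]
        tex = self.texture.expand(uv.shape[0], -1, -1, -1)
        feats = F.grid_sample(tex, uv, align_corners=True)  # (B, C, H, W)
        return torch.sigmoid(self.decoder(feats))           # (B, 3, H, W)
```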

Image Generation · Neural Rendering +1

FaceForensics++: Learning to Detect Manipulated Facial Images

14 code implementations 25 Jan 2019 Andreas Rössler, Davide Cozzolino, Luisa Verdoliva, Christian Riess, Justus Thies, Matthias Nießner

In particular, the benchmark is based on DeepFakes, Face2Face, FaceSwap and NeuralTextures as prominent representatives for facial manipulations at random compression level and size.

DeepFake Detection · Face Swapping +2

ForensicTransfer: Weakly-supervised Domain Adaptation for Forgery Detection

no code implementations 6 Dec 2018 Davide Cozzolino, Justus Thies, Andreas Rössler, Christian Riess, Matthias Nießner, Luisa Verdoliva

We devise a learning-based forensic detector which adapts well to new domains, i.e., novel manipulation methods, and can handle scenarios where only a handful of fake examples are available during training.

Domain Adaptation

DeepVoxels: Learning Persistent 3D Feature Embeddings

1 code implementation CVPR 2019 Vincent Sitzmann, Justus Thies, Felix Heide, Matthias Nießner, Gordon Wetzstein, Michael Zollhöfer

In this work, we address the lack of 3D understanding of generative neural networks by introducing a persistent 3D feature embedding for view synthesis.

3D Reconstruction · Novel View Synthesis

IGNOR: Image-guided Neural Object Rendering

no code implementations 26 Nov 2018 Justus Thies, Michael Zollhöfer, Christian Theobalt, Marc Stamminger, Matthias Nießner

Based on this 3D proxy, the appearance of a captured view can be warped into a new target view as in classical image-based rendering.

Image Generation · Novel View Synthesis +1

Deep Video Portraits

no code implementations 29 May 2018 Hyeongwoo Kim, Pablo Garrido, Ayush Tewari, Weipeng Xu, Justus Thies, Matthias Nießner, Patrick Pérez, Christian Richardt, Michael Zollhöfer, Christian Theobalt

In order to enable source-to-target video re-animation, we render a synthetic target video with the reconstructed head animation parameters from a source video, and feed it into the trained network -- thus taking full control of the target.

Face Model

HeadOn: Real-time Reenactment of Human Portrait Videos

no code implementations 29 May 2018 Justus Thies, Michael Zollhöfer, Christian Theobalt, Marc Stamminger, Matthias Nießner

We propose HeadOn, the first real-time source-to-target reenactment approach for complete human portrait videos that enables transfer of torso and head motion, face expression, and eye gaze.

InverseFaceNet: Deep Monocular Inverse Face Rendering

no code implementations CVPR 2018 Hyeongwoo Kim, Michael Zollhöfer, Ayush Tewari, Justus Thies, Christian Richardt, Christian Theobalt

In contrast, we propose to recover high-quality facial pose, shape, expression, reflectance and illumination using a deep neural network that is trained using a large, synthetically created training corpus.

Face Reconstruction · Inverse Rendering

FaceVR: Real-Time Facial Reenactment and Eye Gaze Control in Virtual Reality

no code implementations 11 Oct 2016 Justus Thies, Michael Zollhöfer, Marc Stamminger, Christian Theobalt, Matthias Nießner

Based on reenactment of a prerecorded stereo video of the person without the HMD, FaceVR incorporates photo-realistic re-rendering in real time, thus allowing artificial modifications of face and eye appearances.
