Search Results for author: Ayush Tewari

Found 37 papers, 11 papers with code

MoFA: Model-based Deep Convolutional Face Autoencoder for Unsupervised Monocular Reconstruction

no code implementations ICCV 2017 Ayush Tewari, Michael Zollhöfer, Hyeongwoo Kim, Pablo Garrido, Florian Bernard, Patrick Pérez, Christian Theobalt

In this work we propose a novel model-based deep convolutional autoencoder that addresses the highly challenging problem of reconstructing a 3D human face from a single in-the-wild color image.

Face Reconstruction Monocular Reconstruction
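The core idea of MoFA — fitting parametric face model parameters by minimizing a photometric loss through a differentiable image-formation step — can be sketched with a toy linear model. This is a minimal illustration, not the paper's actual 3DMM, renderer, or encoder; all names, dimensions, and values below are made up:

```python
import numpy as np

# Toy linear "face model": image = mean + basis @ params. This stands in for
# the differentiable model-based decoder; the real decoder renders a 3D
# morphable face model with pose, reflectance, and illumination.
rng = np.random.default_rng(0)
n_pix, n_params = 64, 5
mean = rng.normal(size=n_pix)
basis = rng.normal(size=(n_pix, n_params))

def render(params):
    return mean + basis @ params

def photometric_loss(params, target):
    # Mean squared color difference between rendering and input image.
    r = render(params) - target
    return float(r @ r) / n_pix

def grad(params, target):
    # Analytic gradient of the photometric loss w.r.t. the parameters.
    return 2.0 * basis.T @ (render(params) - target) / n_pix

# "In-the-wild" target image, generated here from hidden parameters.
true_params = rng.normal(size=n_params)
target = render(true_params)

# Self-supervised fitting: no 3D ground truth, only the image itself.
params = np.zeros(n_params)
for _ in range(300):
    params -= 0.05 * grad(params, target)
```

In the paper this gradient flows one step further back, through the decoder into a convolutional encoder that regresses the parameters directly, so the whole autoencoder trains end to end without 3D labels.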

InverseFaceNet: Deep Monocular Inverse Face Rendering

no code implementations CVPR 2018 Hyeongwoo Kim, Michael Zollhöfer, Ayush Tewari, Justus Thies, Christian Richardt, Christian Theobalt

In contrast, we propose to recover high-quality facial pose, shape, expression, reflectance and illumination using a deep neural network that is trained using a large, synthetically created training corpus.

Face Reconstruction Inverse Rendering

Self-supervised Multi-level Face Model Learning for Monocular Reconstruction at over 250 Hz

no code implementations CVPR 2018 Ayush Tewari, Michael Zollhöfer, Pablo Garrido, Florian Bernard, Hyeongwoo Kim, Patrick Pérez, Christian Theobalt

To alleviate this problem, we present the first approach that jointly learns 1) a regressor for face shape, expression, reflectance and illumination on the basis of 2) a concurrently learned parametric face model.

Face Model Monocular Reconstruction

A Hybrid Model for Identity Obfuscation by Face Replacement

no code implementations ECCV 2018 Qianru Sun, Ayush Tewari, Weipeng Xu, Mario Fritz, Christian Theobalt, Bernt Schiele

As more and more personal photos are shared and tagged in social media, avoiding privacy risks such as unintended recognition becomes increasingly challenging.

Face Generation

Deep Video Portraits

no code implementations 29 May 2018 Hyeongwoo Kim, Pablo Garrido, Ayush Tewari, Weipeng Xu, Justus Thies, Matthias Nießner, Patrick Pérez, Christian Richardt, Michael Zollhöfer, Christian Theobalt

In order to enable source-to-target video re-animation, we render a synthetic target video with the reconstructed head animation parameters from a source video, and feed it into the trained network -- thus taking full control of the target.

Face Model

FML: Face Model Learning from Videos

no code implementations CVPR 2019 Ayush Tewari, Florian Bernard, Pablo Garrido, Gaurav Bharaj, Mohamed Elgharib, Hans-Peter Seidel, Patrick Pérez, Michael Zollhöfer, Christian Theobalt

In contrast, we propose multi-frame video-based self-supervised training of a deep network that (i) learns a face identity model both in shape and appearance while (ii) jointly learning to reconstruct 3D faces.

3D Reconstruction Face Model

EgoFace: Egocentric Face Performance Capture and Videorealistic Reenactment

no code implementations 26 May 2019 Mohamed Elgharib, Mallikarjun BR, Ayush Tewari, Hyeongwoo Kim, Wentao Liu, Hans-Peter Seidel, Christian Theobalt

Our lightweight setup allows operations in uncontrolled environments, and lends itself to telepresence applications such as video-conferencing from dynamic environments.

Text-based Editing of Talking-head Video

1 code implementation 4 Jun 2019 Ohad Fried, Ayush Tewari, Michael Zollhöfer, Adam Finkelstein, Eli Shechtman, Dan B. Goldman, Kyle Genova, Zeyu Jin, Christian Theobalt, Maneesh Agrawala

To edit a video, the user has to only edit the transcript, and an optimization strategy then chooses segments of the input corpus as base material.

Face Model Sentence +3

Neural Voice Puppetry: Audio-driven Facial Reenactment

1 code implementation ECCV 2020 Justus Thies, Mohamed Elgharib, Ayush Tewari, Christian Theobalt, Matthias Nießner

Neural Voice Puppetry has a variety of use-cases, including audio-driven video avatars, video dubbing, and text-driven video synthesis of a talking head.

Face Model Neural Rendering +2

StyleRig: Rigging StyleGAN for 3D Control over Portrait Images

no code implementations CVPR 2020 Ayush Tewari, Mohamed Elgharib, Gaurav Bharaj, Florian Bernard, Hans-Peter Seidel, Patrick Pérez, Michael Zollhöfer, Christian Theobalt

StyleGAN generates photorealistic portrait images of faces with eyes, teeth, hair and context (neck, shoulders, background), but lacks a rig-like control over semantic face parameters that are interpretable in 3D, such as face pose, expressions, and scene illumination.

State of the Art on Neural Rendering

no code implementations 8 Apr 2020 Ayush Tewari, Ohad Fried, Justus Thies, Vincent Sitzmann, Stephen Lombardi, Kalyan Sunkavalli, Ricardo Martin-Brualla, Tomas Simon, Jason Saragih, Matthias Nießner, Rohit Pandey, Sean Fanello, Gordon Wetzstein, Jun-Yan Zhu, Christian Theobalt, Maneesh Agrawala, Eli Shechtman, Dan B. Goldman, Michael Zollhöfer

Neural rendering is a new and rapidly emerging field that combines generative machine learning techniques with physical knowledge from computer graphics, e.g., by the integration of differentiable rendering into network training.

BIG-bench Machine Learning Image Generation +2

Monocular Reconstruction of Neural Face Reflectance Fields

no code implementations CVPR 2021 Mallikarjun B R., Ayush Tewari, Tae-Hyun Oh, Tim Weyrich, Bernd Bickel, Hans-Peter Seidel, Hanspeter Pfister, Wojciech Matusik, Mohamed Elgharib, Christian Theobalt

The reflectance field of a face describes the reflectance properties responsible for complex lighting effects, including diffuse, specular, inter-reflection, and self-shadowing.

Monocular Reconstruction

PIE: Portrait Image Embedding for Semantic Control

no code implementations 20 Sep 2020 Ayush Tewari, Mohamed Elgharib, Mallikarjun B R., Florian Bernard, Hans-Peter Seidel, Patrick Pérez, Michael Zollhöfer, Christian Theobalt

We present the first approach for embedding real portrait images in the latent space of StyleGAN, which allows for intuitive editing of the head pose, facial expression, and scene illumination in the image.

Face Model

Learning Complete 3D Morphable Face Models from Images and Videos

no code implementations CVPR 2021 Mallikarjun B R, Ayush Tewari, Hans-Peter Seidel, Mohamed Elgharib, Christian Theobalt

Our network design and loss functions ensure a disentangled parameterization of not only identity and albedo, but also, for the first time, an expression basis.

3D Face Reconstruction Monocular Reconstruction +1

i3DMM: Deep Implicit 3D Morphable Model of Human Heads

1 code implementation CVPR 2021 Tarun Yenamandra, Ayush Tewari, Florian Bernard, Hans-Peter Seidel, Mohamed Elgharib, Daniel Cremers, Christian Theobalt

Our approach has the following favorable properties: (i) It is the first full head morphable model that includes hair.

Monocular Real-time Full Body Capture with Inter-part Correlations

no code implementations CVPR 2021 Yuxiao Zhou, Marc Habermann, Ikhsanul Habibie, Ayush Tewari, Christian Theobalt, Feng Xu

We present the first method for real-time full body capture that estimates shape and motion of body and hands together with a dynamic 3D face model from a single color image.

3D Hand Pose Estimation Computational Efficiency +1

Non-Rigid Neural Radiance Fields: Reconstruction and Novel View Synthesis of a Dynamic Scene From Monocular Video

2 code implementations ICCV 2021 Edgar Tretschk, Ayush Tewari, Vladislav Golyanik, Michael Zollhöfer, Christoph Lassner, Christian Theobalt

We show that a single handheld consumer-grade camera is sufficient to synthesize sophisticated renderings of a dynamic scene from novel virtual camera views, e.g., a 'bullet-time' video effect.

Novel View Synthesis Video Editing
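Methods in this family render images by compositing density and color samples along each camera ray. A minimal NumPy sketch of that standard volume-rendering quadrature follows; the densities, colors, and step sizes are made-up test values, not outputs of a trained network:

```python
import numpy as np

def composite(sigmas, colors, deltas):
    """Discrete volume-rendering quadrature along one ray.

    sigmas: (N,) volume densities at the samples
    colors: (N, 3) RGB values at the samples
    deltas: (N,) distances between adjacent samples
    """
    # Opacity contributed by each segment of the ray.
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance: probability the ray reaches each sample unoccluded.
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    weights = trans * alphas
    return weights @ colors, weights

# One nearly opaque red "surface" halfway along the ray.
sigmas = np.array([0.0, 0.0, 50.0, 0.0])
colors = np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
deltas = np.full(4, 0.25)
rgb, weights = composite(sigmas, colors, deltas)
```

Because every operation is differentiable, gradients of an image-space loss can flow back into whatever network predicts `sigmas` and `colors`, which is what makes training from posed 2D images alone possible.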

StyleVideoGAN: A Temporal Generative Model using a Pretrained StyleGAN

no code implementations 15 Jul 2021 Gereon Fox, Ayush Tewari, Mohamed Elgharib, Christian Theobalt

We demonstrate that it suffices to train our temporal architecture for about 6 hours on only 10 minutes of footage of a single subject.

Disentangled3D: Learning a 3D Generative Model with Disentangled Geometry and Appearance from Monocular Images

no code implementations CVPR 2022 Ayush Tewari, Mallikarjun B R, Xingang Pan, Ohad Fried, Maneesh Agrawala, Christian Theobalt

Our model can disentangle the geometry and appearance variations in the scene, i.e., we can independently sample from the geometry and appearance spaces of the generative model.

Disentanglement

GAN2X: Non-Lambertian Inverse Rendering of Image GANs

no code implementations 18 Jun 2022 Xingang Pan, Ayush Tewari, Lingjie Liu, Christian Theobalt

2D images are observations of the 3D physical world, formed by its geometry, material, and illumination components.

3D Face Reconstruction Inverse Rendering

Neural Groundplans: Persistent Neural Scene Representations from a Single Image

no code implementations 22 Jul 2022 Prafull Sharma, Ayush Tewari, Yilun Du, Sergey Zakharov, Rares Ambrus, Adrien Gaidon, William T. Freeman, Fredo Durand, Joshua B. Tenenbaum, Vincent Sitzmann

We present a method to map 2D image observations of a scene to a persistent 3D scene representation, enabling novel view synthesis and disentangled representation of the movable and immovable components of the scene.

Disentanglement Instance Segmentation +4

Neural Radiance Transfer Fields for Relightable Novel-view Synthesis with Global Illumination

no code implementations 27 Jul 2022 Linjie Lyu, Ayush Tewari, Thomas Leimkuehler, Marc Habermann, Christian Theobalt

Given a set of images of a scene, the re-rendering of this scene from novel views and lighting conditions is an important and challenging problem in Computer Vision and Graphics.

Disentanglement Novel View Synthesis

gCoRF: Generative Compositional Radiance Fields

no code implementations 31 Oct 2022 Mallikarjun BR, Ayush Tewari, Xingang Pan, Mohamed Elgharib, Christian Theobalt

We start with a global generative model (GAN) and learn to decompose it into different semantic parts using supervision from 2D segmentation masks.

Image Generation

ConceptFusion: Open-set Multimodal 3D Mapping

1 code implementation 14 Feb 2023 Krishna Murthy Jatavallabhula, Alihusein Kuwajerwala, Qiao Gu, Mohd Omama, Tao Chen, Alaa Maalouf, Shuang Li, Ganesh Iyer, Soroush Saryazdi, Nikhil Keetha, Ayush Tewari, Joshua B. Tenenbaum, Celso Miguel de Melo, Madhava Krishna, Liam Paull, Florian Shkurti, Antonio Torralba

ConceptFusion leverages the open-set capabilities of today's foundation models pre-trained on internet-scale data to reason about concepts across modalities such as natural language, images, and audio.

Autonomous Driving Robot Navigation

Learning to Render Novel Views from Wide-Baseline Stereo Pairs

1 code implementation CVPR 2023 Yilun Du, Cameron Smith, Ayush Tewari, Vincent Sitzmann

We conduct extensive comparisons on held-out test scenes across two real-world datasets, significantly outperforming prior work on novel view synthesis from sparse image observations and achieving multi-view-consistent novel view synthesis.

Novel View Synthesis

Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold

5 code implementations 18 May 2023 Xingang Pan, Ayush Tewari, Thomas Leimkühler, Lingjie Liu, Abhimitra Meka, Christian Theobalt

Synthesizing visual content that meets users' needs often requires flexible and precise controllability of the pose, shape, expression, and layout of the generated objects.

Image Manipulation Point Tracking +1
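DragGAN alternates a motion-supervision step with a point-tracking step that relocates each handle point by nearest-neighbour search in the generator's feature maps. A toy version of that tracking step is sketched below; the random feature map, positions, and search radius are purely illustrative:

```python
import numpy as np

def track_point(features, template, center, radius=2):
    """Relocate a handle by nearest-neighbour feature search around `center`.

    features: (H, W, C) feature map of the current image
    template: (C,) feature vector recorded under the original handle point
    center:   (row, col) current handle estimate
    """
    h, w, _ = features.shape
    r0, c0 = center
    best, best_d = center, np.inf
    # Exhaustive search in a small window around the current estimate.
    for r in range(max(0, r0 - radius), min(h, r0 + radius + 1)):
        for c in range(max(0, c0 - radius), min(w, c0 + radius + 1)):
            d = np.linalg.norm(features[r, c] - template)
            if d < best_d:
                best, best_d = (r, c), d
    return best

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 8, 16))
template = feats[5, 6].copy()          # feature under the true handle location
found = track_point(feats, template, (4, 5))
```

Searching in feature space rather than pixel space is what lets the handle stay attached to the same semantic part as the image is deformed.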

AvatarStudio: Text-driven Editing of 3D Dynamic Human Head Avatars

no code implementations 1 Jun 2023 Mohit Mendiratta, Xingang Pan, Mohamed Elgharib, Kartik Teotia, Mallikarjun B R, Ayush Tewari, Vladislav Golyanik, Adam Kortylewski, Christian Theobalt

Our method edits the full head in a canonical space, and then propagates these edits to remaining time steps via a pretrained deformation network.

Approaching human 3D shape perception with neurally mappable models

no code implementations 22 Aug 2023 Thomas P. O'Connell, Tyler Bonnen, Yoni Friedman, Ayush Tewari, Josh B. Tenenbaum, Vincent Sitzmann, Nancy Kanwisher

Finally, we find that while the models trained with multi-view learning objectives are able to partially generalize to new object categories, they fall short of human alignment.

Multi-View Learning

Diffusion Posterior Illumination for Ambiguity-aware Inverse Rendering

1 code implementation 30 Sep 2023 Linjie Lyu, Ayush Tewari, Marc Habermann, Shunsuke Saito, Michael Zollhöfer, Thomas Leimkühler, Christian Theobalt

We further conduct an extensive comparative study of different priors on illumination used in previous work on inverse rendering.

Denoising Inverse Rendering
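In its simplest form, illumination estimation is a linear inverse problem under a Lambertian model with no clamping or shadows; the ambiguity this paper targets (and resolves with diffusion-based illumination priors) only appears once occlusion, clamping, and unknown reflectance enter. A toy sketch of the unambiguous linear case, with all quantities synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)

# Known surface normals of the observed scene points.
normals = rng.normal(size=(100, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)

# Hidden directional light; shading is n . l (Lambertian, albedo 1,
# deliberately without the max(0, .) clamp so the problem stays linear).
true_light = np.array([0.3, 0.5, 0.8])
intensities = normals @ true_light

# Inverse rendering of the illumination as a least-squares problem.
light, *_ = np.linalg.lstsq(normals, intensities, rcond=None)
```

Once the clamp, shadows, or spatially varying reflectance are restored, many different illuminations explain the same image, which is why the paper samples from a posterior over environment maps instead of solving for a single one.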
