no code implementations • CVPR 2023 • Ziyu Wan, Christian Richardt, Aljaž Božič, Chao Li, Vijay Rengarajan, Seonghyeon Nam, Xiaoyu Xiang, Tuotuo Li, Bo Zhu, Rakesh Ranjan, Jing Liao
Neural radiance fields (NeRFs) enable novel view synthesis with unprecedented visual quality.
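The view synthesis these NeRF-based works build on boils down to a volume-rendering quadrature along each camera ray. The sketch below is a minimal, illustrative version of that compositing step (function and variable names are mine, not from the paper):

```python
import numpy as np

def render_ray(densities, colors, deltas):
    """Composite samples along one ray with the standard NeRF quadrature.

    densities: (N,) non-negative volume densities sigma_i
    colors:    (N, 3) RGB colour at each sample
    deltas:    (N,) distances between adjacent samples
    """
    alphas = 1.0 - np.exp(-densities * deltas)                       # per-sample opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas))[:-1])   # transmittance T_i
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)

# A single, effectively opaque red sample renders as (nearly) pure red.
rgb = render_ray(np.array([100.0]), np.array([[1.0, 0.0, 0.0]]), np.array([1.0]))
```

The `weights` sum to at most one, so the returned colour is a convex combination of the sample colours, which is what makes the quadrature differentiable and trainable end-to-end.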
1 code implementation • CVPR 2023 • Benjamin Attal, Jia-Bin Huang, Christian Richardt, Michael Zollhoefer, Johannes Kopf, Matthew O'Toole, Changil Kim
Volumetric scene representations enable photorealistic view synthesis for static scenes and form the basis of several existing 6-DoF video techniques.
Ranked #1 on Novel View Synthesis on DONeRF: Evaluation Dataset
no code implementations • ICCV 2023 • Aarrushi Shandilya, Benjamin Attal, Christian Richardt, James Tompkin, Matthew O'Toole
We present an image formation model and optimization procedure that combines the advantages of neural radiance fields and structured light imaging.
1 code implementation • CVPR 2022 • Manuel Rey-Area, Mingze Yuan, Christian Richardt
360° cameras can capture complete environments in a single shot, which makes 360° imagery alluring in many computer vision tasks.
1 code implementation • 28 Dec 2021 • Mingze Yuan, Christian Richardt
Our method leverages gnomonic projection to locally convert ERP images to perspective images, and uniformly samples the ERP image by projection to a cubemap and regular icosahedron vertices, to incrementally refine the estimated 360° flow fields even in the presence of large rotations.
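The gnomonic projection mentioned above maps a point on the sphere to the tangent plane at a chosen centre, which is how equirectangular (ERP) content is locally converted into perspective views. A minimal sketch of the standard forward mapping (longitude/latitude in radians; this is the textbook formula, not the paper's code):

```python
import math

def gnomonic(lon, lat, lon0=0.0, lat0=0.0):
    """Gnomonic (rectilinear) projection of a sphere point (lon, lat)
    onto the plane tangent at (lon0, lat0)."""
    c = (math.sin(lat0) * math.sin(lat)
         + math.cos(lat0) * math.cos(lat) * math.cos(lon - lon0))
    x = math.cos(lat) * math.sin(lon - lon0) / c
    y = (math.cos(lat0) * math.sin(lat)
         - math.sin(lat0) * math.cos(lat) * math.cos(lon - lon0)) / c
    return x, y

# The tangent point itself projects to the plane origin.
x, y = gnomonic(0.0, 0.0)
```

The divisor `c` goes to zero 90° away from the tangent point, which is why such pipelines use many local tangent planes (cubemap faces, icosahedron vertices) rather than one global projection.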
1 code implementation • NeurIPS 2021 • Benjamin Attal, Eliot Laidlaw, Aaron Gokaslan, Changil Kim, Christian Richardt, James Tompkin, Matthew O'Toole
Neural networks can represent and accurately reconstruct radiance fields for static 3D scenes (e.g., NeRF).
1 code implementation • ECCV 2020 • Benjamin Attal, Selena Ling, Aaron Gokaslan, Christian Richardt, James Tompkin
Our approach is to simultaneously learn depth and disocclusions via a multi-sphere image representation, which can be rendered with correct 6DoF disparity and motion parallax in VR.
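A multi-sphere image represents the scene as concentric RGBA sphere layers around the viewer; rendering a ray reduces to back-to-front "over" compositing of the layers it crosses. A minimal, illustrative version for one ray (names are mine, assuming layers are ordered far to near):

```python
import numpy as np

def composite_msi(layer_rgb, layer_alpha):
    """Back-to-front over-compositing of multi-sphere image (MSI) layers
    along one viewing ray.

    layer_rgb:   (L, 3) layer colours, ordered far to near
    layer_alpha: (L,)   layer opacities in [0, 1]
    """
    out = np.zeros(3)
    for rgb, a in zip(layer_rgb, layer_alpha):
        out = rgb * a + out * (1.0 - a)   # "over" operator
    return out

# A fully opaque near layer (red) hides the far layer (green) entirely.
color = composite_msi(np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 0.0]]),
                      np.array([1.0, 1.0]))
```

Because each layer sits at a fixed radius, moving the virtual head shifts which texels each ray intersects, which is what yields correct 6DoF disparity and motion parallax in VR.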
no code implementations • ECCV 2020 • Kwang In Kim, Christian Richardt, Hyung Jin Chang
Predictor combination aims to improve a (target) predictor of a learning task based on the (reference) predictors of potentially relevant tasks, without having access to the internals of individual predictors.
1 code implementation • NeurIPS 2020 • Thu Nguyen-Phuoc, Christian Richardt, Long Mai, Yong-Liang Yang, Niloy Mitra
Our experiments show that using explicit 3D features to represent objects allows BlockGAN to learn disentangled representations both in terms of objects (foreground and background) and their properties (pose and identity).
no code implementations • 5 Sep 2019 • Hyeongwoo Kim, Mohamed Elgharib, Michael Zollhöfer, Hans-Peter Seidel, Thabo Beeler, Christian Richardt, Christian Theobalt
We present a style-preserving visual dubbing approach from single video inputs, which maintains the signature style of target actors when modifying facial expressions, including mouth motions, to match foreign languages.
no code implementations • 6 Aug 2019 • Abhimitra Meka, Mohammad Shafiei, Michael Zollhoefer, Christian Richardt, Christian Theobalt
We propose the first approach for the decomposition of a monocular color video into direct and indirect illumination components in real time.
3 code implementations • ICCV 2019 • Thu Nguyen-Phuoc, Chuan Li, Lucas Theis, Christian Richardt, Yong-Liang Yang
This shows that HoloGAN is the first generative model that learns 3D representations from natural images in an entirely unsupervised manner.
1 code implementation • NeurIPS 2018 • Youssef Alami Mejjati, Christian Richardt, James Tompkin, Darren Cosker, Kwang In Kim
Current unsupervised image-to-image translation techniques struggle to focus their attention on individual objects without altering the background or the way multiple objects interact within a scene.
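Attention-guided translation addresses this by predicting a soft foreground mask and translating only the attended region, copying the background through unchanged. A minimal sketch of that compositing step (the mask and translated image would come from learned networks; this shows only the final blend):

```python
import numpy as np

def attention_composite(input_img, translated_img, attention):
    """Blend a translated foreground into the untouched input background.

    input_img:      (H, W, 3) original image
    translated_img: (H, W, 3) output of the translation network
    attention:      (H, W, 1) soft foreground mask in [0, 1]
    """
    return attention * translated_img + (1.0 - attention) * input_img

# With a zero attention map, the input passes through untouched.
out = attention_composite(np.zeros((2, 2, 3)), np.ones((2, 2, 3)),
                          np.zeros((2, 2, 1)))
```

Restricting the translator's influence to the masked region is what keeps the background and object interactions intact.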
no code implementations • 29 May 2018 • Hyeongwoo Kim, Pablo Garrido, Ayush Tewari, Weipeng Xu, Justus Thies, Matthias Nießner, Patrick Pérez, Christian Richardt, Michael Zollhöfer, Christian Theobalt
In order to enable source-to-target video re-animation, we render a synthetic target video with the reconstructed head animation parameters from a source video, and feed it into the trained network -- thus taking full control of the target.
no code implementations • CVPR 2018 • Abhimitra Meka, Maxim Maximov, Michael Zollhoefer, Avishek Chatterjee, Hans-Peter Seidel, Christian Richardt, Christian Theobalt
We present the first end-to-end approach for real-time material estimation for general object shapes with uniform material that requires only a single color image as input.
no code implementations • ICCV 2017 • Kwang In Kim, James Tompkin, Christian Richardt
We present an algorithm for test-time combination of a set of reference predictors with unknown parametric forms.
no code implementations • CVPR 2018 • Hyeongwoo Kim, Michael Zollhöfer, Ayush Tewari, Justus Thies, Christian Richardt, Christian Theobalt
In contrast, we propose to recover high-quality facial pose, shape, expression, reflectance and illumination using a deep neural network that is trained using a large, synthetically created training corpus.
no code implementations • 31 Dec 2016 • Helge Rhodin, Christian Richardt, Dan Casas, Eldar Insafutdinov, Mohammad Shafiei, Hans-Peter Seidel, Bernt Schiele, Christian Theobalt
Marker-based and marker-less optical skeletal motion-capture methods use an outside-in arrangement of cameras placed around a scene, with viewpoints converging on the center.
no code implementations • 23 Oct 2016 • Lucas Thies, Michael Zollhöfer, Christian Richardt, Christian Theobalt, Günther Greiner
Our extensive experiments and evaluations show that our approach produces high-quality dense reconstructions of 3D geometry and scene flow at real-time frame rates, and compares favorably to the state of the art.
no code implementations • 12 Oct 2016 • Hyeongwoo Kim, Christian Richardt, Christian Theobalt
Many compelling video post-processing effects, in particular aesthetic focus editing and refocusing effects, are feasible if per-frame depth information is available.
no code implementations • 23 Sep 2016 • Helge Rhodin, Christian Richardt, Dan Casas, Eldar Insafutdinov, Mohammad Shafiei, Hans-Peter Seidel, Bernt Schiele, Christian Theobalt
We therefore propose a new method for real-time, marker-less and egocentric motion capture which estimates the full-body skeleton pose from a lightweight stereo pair of fisheye cameras that are attached to a helmet or virtual reality headset.
no code implementations • 16 Sep 2016 • Christian Richardt, Hyeongwoo Kim, Levi Valgaerts, Christian Theobalt
We finally refine the computed correspondence fields in a variational scene flow formulation.
no code implementations • 28 Jul 2016 • Helge Rhodin, Nadia Robertini, Dan Casas, Christian Richardt, Hans-Peter Seidel, Christian Theobalt
Our method uses a new image formation model with analytic visibility and analytically differentiable alignment energy.
no code implementations • ICCV 2015 • Helge Rhodin, Nadia Robertini, Christian Richardt, Hans-Peter Seidel, Christian Theobalt
Generative reconstruction methods compute the 3D configuration (such as pose and/or geometry) of a shape by optimizing the overlap of the projected 3D shape model with images.
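The overlap objective driving such analysis-by-synthesis methods can be illustrated with a soft intersection-over-union between the projected model silhouette and the observed one; the real energy in this line of work is differentiable through visibility, but the core idea is the same (masks and function names here are illustrative):

```python
import numpy as np

def overlap_energy(model_mask, image_mask):
    """Soft-IoU agreement between a projected model silhouette and an
    observed image silhouette; both are soft masks in [0, 1].
    Returns 1.0 for perfect overlap, 0.0 for none."""
    inter = (model_mask * image_mask).sum()
    union = (model_mask + image_mask - model_mask * image_mask).sum()
    return inter / max(union, 1e-8)

# Identical silhouettes give a perfect score.
score = overlap_energy(np.ones((4, 4)), np.ones((4, 4)))
```

Maximising such an energy over pose and shape parameters is what "optimizing the overlap of the projected 3D shape model with images" amounts to in practice.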
no code implementations • CVPR 2013 • Christian Richardt, Yael Pritch, Henning Zimmer, Alexander Sorkine-Hornung
As our first contribution, we describe the necessary correction steps and a compact representation for the input images in order to achieve a highly accurate approximation to the required ray space.