no code implementations • 22 Mar 2024 • Raza Yunus, Jan Eric Lenssen, Michael Niemeyer, Yiyi Liao, Christian Rupprecht, Christian Theobalt, Gerard Pons-Moll, Jia-Bin Huang, Vladislav Golyanik, Eddy Ilg
Reconstructing models of the real world, including 3D geometry, appearance, and motion of real scenes, is essential for computer graphics and computer vision.
no code implementations • 20 Mar 2024 • Michael Niemeyer, Fabian Manhardt, Marie-Julie Rakotosaona, Michael Oechsle, Daniel Duckworth, Rama Gosula, Keisuke Tateno, John Bates, Dominik Kaeser, Federico Tombari
First, we use radiance fields as a prior and supervision signal for optimizing point-based scene representations, leading to improved quality and more robust optimization.
no code implementations • 10 Jan 2024 • Mohamad Shahbazi, Liesbeth Claessens, Michael Niemeyer, Edo Collins, Alessio Tonioni, Luc van Gool, Federico Tombari
We introduce InseRF, a novel method for generative object insertion in the NeRF reconstructions of 3D scenes.
no code implementations • 20 Dec 2023 • Fangjinhua Wang, Marie-Julie Rakotosaona, Michael Niemeyer, Richard Szeliski, Marc Pollefeys, Federico Tombari
In this work, we propose UniSDF, a general purpose 3D reconstruction method that can reconstruct large complex scenes with reflections.
no code implementations • 30 Nov 2023 • Kunyi Li, Michael Niemeyer, Nassir Navab, Federico Tombari
In this work, we introduce DNS SLAM, a novel neural RGB-D semantic SLAM approach featuring a hybrid representation.
1 code implementation • 24 Apr 2023 • Christina Tsalicoglou, Fabian Manhardt, Alessio Tonioni, Michael Niemeyer, Federico Tombari
In addition, we propose a novel way to finetune the mesh texture, removing the effect of high saturation and improving the details of the output 3D mesh.
no code implementations • 23 Mar 2023 • Hidenobu Matsuki, Keisuke Tateno, Michael Niemeyer, Federico Tombari
However, in real-time and on-the-fly scene capture applications, this prior knowledge cannot be assumed to be fixed or static, since it changes dynamically and is subject to significant updates based on run-time observations.
no code implementations • ICCV 2023 • Amit Raj, Srinivas Kaza, Ben Poole, Michael Niemeyer, Nataniel Ruiz, Ben Mildenhall, Shiran Zada, Kfir Aberman, Michael Rubinstein, Jonathan Barron, Yuanzhen Li, Varun Jampani
We present DreamBooth3D, an approach to personalize text-to-3D generative models from as few as 3-6 casually captured images of a subject.
no code implementations • 16 Mar 2023 • Marie-Julie Rakotosaona, Fabian Manhardt, Diego Martin Arroyo, Michael Niemeyer, Abhijit Kundu, Federico Tombari
Obtaining 3D meshes from neural radiance fields remains an open challenge, since NeRFs are optimized for view synthesis and do not enforce an accurate underlying geometry on the radiance field.
1 code implementation • 15 Jun 2022 • Katja Schwarz, Axel Sauer, Michael Niemeyer, Yiyi Liao, Andreas Geiger
State-of-the-art 3D-aware generative models rely on coordinate-based MLPs to parameterize 3D radiance fields.
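For context, a coordinate-based MLP radiance field maps a 3D position, typically after a sinusoidal positional encoding, to a color and a volume density. The sketch below is a toy illustration of that idea with random, untrained weights; all function and variable names are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def positional_encoding(x, num_freqs=4):
    """Map coordinates to [sin(2^k * pi * x), cos(2^k * pi * x)] features."""
    freqs = 2.0 ** np.arange(num_freqs) * np.pi   # (num_freqs,)
    angles = x[..., None] * freqs                 # (..., 3, num_freqs)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*x.shape[:-1], -1)         # (..., 3 * 2 * num_freqs)

rng = np.random.default_rng(0)
d_in = 3 * 2 * 4                      # encoded input dimension for num_freqs=4
W1 = rng.normal(size=(d_in, 64)) * 0.1
W2 = rng.normal(size=(64, 4)) * 0.1   # outputs: RGB (3 channels) + density (1)

def radiance_field(points):
    """Query the toy untrained field: returns (rgb in [0,1], density >= 0)."""
    h = np.maximum(positional_encoding(points) @ W1, 0.0)  # ReLU hidden layer
    out = h @ W2
    rgb = 1.0 / (1.0 + np.exp(-out[..., :3]))   # sigmoid keeps colors in [0,1]
    sigma = np.maximum(out[..., 3], 0.0)        # density must be non-negative
    return rgb, sigma

rgb, sigma = radiance_field(np.array([[0.1, -0.2, 0.3]]))
```

In a real model the weights are optimized from posed images via volume rendering; here they only illustrate the input/output structure of such an MLP.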
1 code implementation • 1 Jun 2022 • Zehao Yu, Songyou Peng, Michael Niemeyer, Torsten Sattler, Andreas Geiger
Motivated by recent advances in the area of monocular geometry prediction, we systematically explore the utility these cues provide for improving neural implicit surface reconstruction.
no code implementations • CVPR 2022 • Michael Niemeyer, Jonathan T. Barron, Ben Mildenhall, Mehdi S. M. Sajjadi, Andreas Geiger, Noha Radwan
We observe that the majority of artifacts in sparse input scenarios are caused by errors in the estimated scene geometry, and by divergent behavior at the start of training.
1 code implementation • NeurIPS 2021 • Songyou Peng, Chiyu "Max" Jiang, Yiyi Liao, Michael Niemeyer, Marc Pollefeys, Andreas Geiger
However, the implicit nature of these neural representations results in slow inference times and requires careful initialization.
no code implementations • 31 Mar 2021 • Michael Niemeyer, Andreas Geiger
At test time, our model generates images with explicit control over the camera as well as the shape and appearance of the scene.
1 code implementation • CVPR 2021 • Michael Niemeyer, Andreas Geiger
While several recent works investigate how to disentangle underlying factors of variation in the data, most of them operate in 2D and hence ignore that our world is three-dimensional.
1 code implementation • NeurIPS 2020 • Katja Schwarz, Yiyi Liao, Michael Niemeyer, Andreas Geiger
In contrast to voxel-based representations, radiance fields are not confined to a coarse discretization of the 3D space, yet allow for disentangling camera and scene properties while degrading gracefully in the presence of reconstruction ambiguity.
Ranked #2 on Scene Generation on VizDoom
3 code implementations • 27 Mar 2020 • Michael Oechsle, Michael Niemeyer, Lars Mescheder, Thilo Strauss, Andreas Geiger
In this work, we propose a novel implicit representation for capturing the visual appearance of an object in terms of its surface light field.
6 code implementations • ECCV 2020 • Songyou Peng, Michael Niemeyer, Lars Mescheder, Marc Pollefeys, Andreas Geiger
Recently, implicit neural representations have gained popularity for learning-based 3D reconstruction.
1 code implementation • CVPR 2020 • Michael Niemeyer, Lars Mescheder, Michael Oechsle, Andreas Geiger
In this work, we propose a differentiable rendering formulation for implicit shape and texture representations.
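Conceptually, differentiable rendering of an implicit surface locates the ray-surface intersection by root-finding on the implicit function along each camera ray, so that gradients can flow through the intersection point. The following is a minimal sketch of that idea using an analytic sphere and secant refinement; it is an illustration of the general principle, not the paper's implementation, and all names are assumptions:

```python
import numpy as np

def occupancy(p):
    """Analytic implicit function: > 0 inside a unit sphere, < 0 outside."""
    return 1.0 - np.linalg.norm(p)

def find_surface(origin, direction, t0=0.0, t1=4.0, steps=32, iters=8):
    """Find the first sign change of the implicit function along the ray,
    then refine the crossing with the secant method."""
    ts = np.linspace(t0, t1, steps)
    vals = np.array([occupancy(origin + t * direction) for t in ts])
    idx = np.where(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]
    if len(idx) == 0:
        return None  # ray misses the surface
    a, b = ts[idx[0]], ts[idx[0] + 1]
    fa, fb = vals[idx[0]], vals[idx[0] + 1]
    for _ in range(iters):
        if fb == fa:  # converged (or flat segment); stop refining
            break
        t = b - fb * (b - a) / (fb - fa)  # secant step toward the root
        a, fa = b, fb
        b, fb = t, occupancy(origin + t * direction)
    return b

origin = np.array([0.0, 0.0, -3.0])
direction = np.array([0.0, 0.0, 1.0])
t_hit = find_surface(origin, direction)  # front of the unit sphere, t near 2
```

In a learned setting, the analytic `occupancy` would be replaced by a network query, and the intersection depth would be differentiated with respect to the network parameters via the implicit function theorem.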
no code implementations • ICCV 2019 • Michael Niemeyer, Lars Mescheder, Michael Oechsle, Andreas Geiger
In order to perform dense 4D reconstruction from images or sparse point clouds, we combine our method with a continuous 3D representation.
no code implementations • ICCV 2019 • Michael Oechsle, Lars Mescheder, Michael Niemeyer, Thilo Strauss, Andreas Geiger
A major reason for these limitations is that common representations of texture are inefficient or hard to interface with modern deep learning techniques.
7 code implementations • CVPR 2019 • Lars Mescheder, Michael Oechsle, Michael Niemeyer, Sebastian Nowozin, Andreas Geiger
With the advent of deep neural networks, learning-based approaches for 3D reconstruction have gained popularity.
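Implicit occupancy representations of this kind model a shape as a function f(p) in [0, 1] giving the probability that a point p lies inside the object; the surface is the 0.5 level set, typically extracted by querying the function on a dense grid. A toy illustration with an analytic sphere standing in for a learned network (all names here are illustrative assumptions):

```python
import numpy as np

def occupancy_probability(points, radius=0.5):
    """Toy stand-in for a learned occupancy network f: R^3 -> [0, 1].
    Uses a sigmoid of the signed distance to a sphere of the given radius."""
    signed_dist = radius - np.linalg.norm(points, axis=-1)
    return 1.0 / (1.0 + np.exp(-20.0 * signed_dist))

# Query the implicit function on a dense grid, as is done before surface
# extraction (e.g. with marching cubes) at the 0.5 decision boundary.
axis = np.linspace(-1.0, 1.0, 16)
grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
occ = occupancy_probability(grid.reshape(-1, 3))
inside = occ > 0.5                     # points classified as interior
volume_estimate = inside.mean() * 8.0  # fraction of the [-1, 1]^3 cube
```

Because the representation is a continuous function rather than a fixed voxel grid, it can be queried at arbitrary resolution at extraction time.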