Search Results for author: Michael Niemeyer

Found 22 papers, 10 papers with code

Recent Trends in 3D Reconstruction of General Non-Rigid Scenes

no code implementations22 Mar 2024 Raza Yunus, Jan Eric Lenssen, Michael Niemeyer, Yiyi Liao, Christian Rupprecht, Christian Theobalt, Gerard Pons-Moll, Jia-Bin Huang, Vladislav Golyanik, Eddy Ilg

Reconstructing models of the real world, including the 3D geometry, appearance, and motion of scenes, is essential for computer graphics and computer vision.

3D Reconstruction

RadSplat: Radiance Field-Informed Gaussian Splatting for Robust Real-Time Rendering with 900+ FPS

no code implementations20 Mar 2024 Michael Niemeyer, Fabian Manhardt, Marie-Julie Rakotosaona, Michael Oechsle, Daniel Duckworth, Rama Gosula, Keisuke Tateno, John Bates, Dominik Kaeser, Federico Tombari

First, we use radiance fields as a prior and supervision signal for optimizing point-based scene representations, leading to improved quality and more robust optimization.
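
A rough illustration of that first step: the trained radiance field is distilled into the point-based representation by treating its renderings as pseudo ground truth. Below is a minimal sketch under that reading, assuming hypothetical `nerf.render(pose)` and `splats.render(pose)` functions that each return an (H, W, 3) image tensor; it is not the paper's actual implementation.

```python
import torch

def distill_step(nerf, splats, optimizer, poses):
    """One optimization step that supervises a point-based scene representation
    with renderings from a pre-trained radiance field. `nerf.render` and
    `splats.render` are hypothetical stand-ins returning (H, W, 3) images."""
    optimizer.zero_grad()
    loss = 0.0
    for pose in poses:
        with torch.no_grad():              # the radiance field acts as a fixed prior
            target = nerf.render(pose)     # pseudo ground-truth image
        pred = splats.render(pose)         # differentiable point-based rendering
        loss = loss + torch.mean((pred - target) ** 2)
    loss.backward()
    optimizer.step()
    return float(loss)
```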

DNS SLAM: Dense Neural Semantic-Informed SLAM

no code implementations30 Nov 2023 Kunyi Li, Michael Niemeyer, Nassir Navab, Federico Tombari

In this work, we introduce DNS SLAM, a novel neural RGB-D semantic SLAM approach featuring a hybrid representation.

Semantic SLAM

TextMesh: Generation of Realistic 3D Meshes From Text Prompts

1 code implementation24 Apr 2023 Christina Tsalicoglou, Fabian Manhardt, Alessio Tonioni, Michael Niemeyer, Federico Tombari

In addition, we propose a novel way to finetune the mesh texture, removing the effect of high saturation and improving the details of the output 3D mesh.

NEWTON: Neural View-Centric Mapping for On-the-Fly Large-Scale SLAM

no code implementations23 Mar 2023 Hidenobu Matsuki, Keisuke Tateno, Michael Niemeyer, Federico Tombari

However, in real-time, on-the-fly scene capture applications, this prior knowledge cannot be assumed to be fixed or static, since it changes dynamically and is subject to significant updates based on run-time observations.

NeRFMeshing: Distilling Neural Radiance Fields into Geometrically-Accurate 3D Meshes

no code implementations16 Mar 2023 Marie-Julie Rakotosaona, Fabian Manhardt, Diego Martin Arroyo, Michael Niemeyer, Abhijit Kundu, Federico Tombari

Obtaining 3D meshes from neural radiance fields remains an open challenge, since NeRFs are optimized for view synthesis and do not enforce accurate underlying geometry on the radiance field (a standard marching-cubes baseline is sketched below).

Novel View Synthesis · Surface Reconstruction
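
For context, the standard baseline referenced above is to sample the NeRF's density on a regular grid and run marching cubes at a hand-picked threshold, which often yields noisy or inaccurate surfaces. A minimal sketch of that baseline (not the paper's method), assuming a hypothetical `query_density` callable mapping (N, 3) points to per-point densities:

```python
import numpy as np
from skimage import measure

def density_grid_to_mesh(query_density, resolution=128, bound=1.0, level=50.0):
    """Baseline mesh extraction from a radiance field: sample density on a grid
    and run marching cubes. `query_density` is a hypothetical callable mapping
    (N, 3) points to (N,) densities; `level` is a scene-dependent threshold."""
    lin = np.linspace(-bound, bound, resolution)
    xs, ys, zs = np.meshgrid(lin, lin, lin, indexing="ij")
    pts = np.stack([xs, ys, zs], axis=-1).reshape(-1, 3)
    sigma = query_density(pts).reshape(resolution, resolution, resolution)
    verts, faces, _, _ = measure.marching_cubes(sigma, level=level)
    verts = verts / (resolution - 1) * 2 * bound - bound  # grid indices -> world coords
    return verts, faces
```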

VoxGRAF: Fast 3D-Aware Image Synthesis with Sparse Voxel Grids

1 code implementation15 Jun 2022 Katja Schwarz, Axel Sauer, Michael Niemeyer, Yiyi Liao, Andreas Geiger

State-of-the-art 3D-aware generative models rely on coordinate-based MLPs to parameterize 3D radiance fields (a minimal example of such an MLP is sketched below).

3D-Aware Image Synthesis · Neural Rendering +1
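
For readers unfamiliar with the term, a coordinate-based MLP radiance field in its most stripped-down form maps a 3D point and viewing direction to color and density; generative variants additionally condition on latent codes, and practical models add positional encoding. A minimal, illustrative sketch with those parts omitted:

```python
import torch
import torch.nn as nn

class TinyRadianceMLP(nn.Module):
    """Minimal coordinate-based MLP: (3D point, view direction) -> (RGB, density).
    Positional encoding and latent conditioning are omitted for brevity."""
    def __init__(self, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                   nn.Linear(hidden, hidden), nn.ReLU())
        self.sigma = nn.Linear(hidden, 1)                    # view-independent density
        self.rgb = nn.Sequential(nn.Linear(hidden + 3, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 3), nn.Sigmoid())

    def forward(self, x, d):
        h = self.trunk(x)                                    # (N, hidden) features
        density = torch.relu(self.sigma(h))                  # (N, 1)
        color = self.rgb(torch.cat([h, d], dim=-1))          # (N, 3), view-dependent
        return color, density
```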

MonoSDF: Exploring Monocular Geometric Cues for Neural Implicit Surface Reconstruction

1 code implementation1 Jun 2022 Zehao Yu, Songyou Peng, Michael Niemeyer, Torsten Sattler, Andreas Geiger

Motivated by recent advances in the area of monocular geometry prediction, we systematically explore the utility that these cues provide for improving neural implicit surface reconstruction (a sketch of the scale- and shift-invariant depth supervision appears below).

3D Reconstruction · Multi-View 3D Reconstruction +1
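
One of those cues is a monocular depth prediction, which is only defined up to an affine transform; it is therefore aligned to the rendered depth with a closed-form least-squares scale and shift before being penalized. A minimal sketch of such a scale- and shift-invariant depth term (the exact weighting and formulation follow the paper):

```python
import torch

def monocular_depth_loss(rendered_depth, mono_depth):
    """Scale- and shift-invariant depth term: the monocular depth cue is defined
    only up to an affine transform, so a per-batch scale w and shift q are solved
    in closed form and the aligned cue supervises the rendered depth.
    Both inputs are flat (N,) tensors."""
    with torch.no_grad():  # treat the alignment as constant when backpropagating
        A = torch.stack([mono_depth, torch.ones_like(mono_depth)], dim=-1)  # (N, 2)
        sol = torch.linalg.lstsq(A, rendered_depth.unsqueeze(-1)).solution  # (2, 1)
        w, q = sol[0, 0], sol[1, 0]
    aligned = w * mono_depth + q          # monocular cue brought to the rendered scale
    return torch.mean((aligned - rendered_depth) ** 2)
```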

RegNeRF: Regularizing Neural Radiance Fields for View Synthesis from Sparse Inputs

no code implementations CVPR 2022 Michael Niemeyer, Jonathan T. Barron, Ben Mildenhall, Mehdi S. M. Sajjadi, Andreas Geiger, Noha Radwan

We observe that the majority of artifacts in sparse input scenarios are caused by errors in the estimated scene geometry, and by divergent behavior at the start of training (a sketch of the patch-based depth smoothness regularizer appears below).

Novel View Synthesis
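
The geometry errors are addressed by rendering small patches from unobserved viewpoints and encouraging their expected depth to be locally smooth. A minimal sketch of such a patch-based depth smoothness term (sample-space annealing and the appearance regularizer are omitted):

```python
import torch

def depth_smoothness(depth_patches):
    """Patch-based depth smoothness regularizer: penalize squared differences of
    neighboring expected-depth values in patches rendered from unobserved views.
    `depth_patches` has shape (B, H, W)."""
    dx = depth_patches[:, :, 1:] - depth_patches[:, :, :-1]  # horizontal neighbors
    dy = depth_patches[:, 1:, :] - depth_patches[:, :-1, :]  # vertical neighbors
    return (dx ** 2).mean() + (dy ** 2).mean()
```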

Shape As Points: A Differentiable Poisson Solver

1 code implementation NeurIPS 2021 Songyou Peng, Chiyu "Max" Jiang, Yiyi Liao, Michael Niemeyer, Marc Pollefeys, Andreas Geiger

However, the implicit nature of these neural representations results in slow inference and requires careful initialization.

3D Reconstruction · Surface Reconstruction

CAMPARI: Camera-Aware Decomposed Generative Neural Radiance Fields

no code implementations31 Mar 2021 Michael Niemeyer, Andreas Geiger

At test time, our model generates images with explicit control over the camera as well as the shape and appearance of the scene.

3D-Aware Image Synthesis

GIRAFFE: Representing Scenes as Compositional Generative Neural Feature Fields

1 code implementation CVPR 2021 Michael Niemeyer, Andreas Geiger

While several recent works investigate how to disentangle underlying factors of variation in the data, most of them operate in 2D and hence ignore that our world is three-dimensional.

Image Generation · Neural Rendering

GRAF: Generative Radiance Fields for 3D-Aware Image Synthesis

1 code implementation NeurIPS 2020 Katja Schwarz, Yiyi Liao, Michael Niemeyer, Andreas Geiger

In contrast to voxel-based representations, radiance fields are not confined to a coarse discretization of the 3D space, yet allow for disentangling camera and scene properties while degrading gracefully in the presence of reconstruction ambiguity.

3D-Aware Image Synthesis · Novel View Synthesis +1

Learning Implicit Surface Light Fields

3 code implementations27 Mar 2020 Michael Oechsle, Michael Niemeyer, Lars Mescheder, Thilo Strauss, Andreas Geiger

In this work, we propose a novel implicit representation for capturing the visual appearance of an object in terms of its surface light field.

3D Reconstruction · Image Generation +1

Convolutional Occupancy Networks

6 code implementations ECCV 2020 Songyou Peng, Michael Niemeyer, Lars Mescheder, Marc Pollefeys, Andreas Geiger

Recently, implicit neural representations have gained popularity for learning-based 3D reconstruction.

3D Reconstruction

Occupancy Flow: 4D Reconstruction by Learning Particle Dynamics

no code implementations ICCV 2019 Michael Niemeyer, Lars Mescheder, Michael Oechsle, Andreas Geiger

In order to perform dense 4D reconstruction from images or sparse point clouds, we combine our method with a continuous 3D representation (a minimal sketch of the underlying point advection appears below).

3D Reconstruction · 4D Reconstruction
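
At its core, the method represents motion as a continuous velocity field over space and time, so points of a shape reconstructed at the first time step can be advected to any later time. A minimal sketch of that advection with a simple explicit Euler integrator, assuming a hypothetical `velocity_fn(points, t)` that returns per-point velocities (the paper uses a learned network together with an ODE solver):

```python
import torch

def advect_points(points, velocity_fn, t0=0.0, t1=1.0, steps=20):
    """Propagate (N, 3) points through time by Euler-integrating a velocity field.
    `velocity_fn` is a hypothetical callable mapping ((N, 3) points, scalar t) to
    (N, 3) velocities; a shape reconstructed at t0 is deformed to time t1."""
    dt = (t1 - t0) / steps
    p, t = points, t0
    for _ in range(steps):
        p = p + dt * velocity_fn(p, t)   # explicit Euler step
        t = t + dt
    return p
```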

Texture Fields: Learning Texture Representations in Function Space

no code implementations ICCV 2019 Michael Oechsle, Lars Mescheder, Michael Niemeyer, Thilo Strauss, Andreas Geiger

A major reason for these limitations is that common representations of texture are inefficient or difficult to interface with modern deep learning techniques.
