Search Results for author: Norman Müller

Found 8 papers, 2 papers with code

ViewDiff: 3D-Consistent Image Generation with Text-to-Image Models

1 code implementation • 4 Mar 2024 • Lukas Höllein, Aljaž Božič, Norman Müller, David Novotny, Hung-Yu Tseng, Christian Richardt, Michael Zollhöfer, Matthias Nießner

In this paper, we present a method that leverages pretrained text-to-image models as a prior and learns to generate multi-view images in a single denoising process from real-world data.

Denoising · Image Generation · +1
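As a rough illustration of the idea named in the abstract above (one denoising process that produces several views jointly, driven by a text-conditioned image prior), here is a hypothetical sketch. The `SharedDenoiser` module and all shapes are placeholders, not the authors' implementation; the point is only that a single set of denoiser weights and a single reverse-diffusion trajectory are shared across all views.

```python
# Hypothetical sketch, not the ViewDiff implementation: jointly denoising
# several views of one scene with a shared, text-conditioned denoiser.
import torch
import torch.nn as nn

class SharedDenoiser(nn.Module):
    """Stand-in for a pretrained text-to-image denoiser applied to every view."""
    def __init__(self, channels=3, text_dim=64):
        super().__init__()
        self.net = nn.Conv2d(channels, channels, 3, padding=1)
        self.text_proj = nn.Linear(text_dim, channels)

    def forward(self, x, t, text_emb):
        # x: [B, V, C, H, W] noisy views; the same weights denoise all V views.
        b, v, c, h, w = x.shape
        cond = self.text_proj(text_emb).view(b, 1, c, 1, 1)
        eps = self.net((x + cond).flatten(0, 1)).view(b, v, c, h, w)
        return eps  # predicted noise per view

@torch.no_grad()
def sample_views(denoiser, text_emb, num_views=4, steps=50, size=64):
    """Deterministic (DDIM-style, eta=0) reverse process over all views at once."""
    betas = torch.linspace(1e-4, 0.02, steps)
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)
    x = torch.randn(text_emb.shape[0], num_views, 3, size, size)
    for t in reversed(range(steps)):
        eps = denoiser(x, t, text_emb)
        # Estimate the clean views, then step to the previous noise level.
        x0 = (x - torch.sqrt(1 - alpha_bar[t]) * eps) / torch.sqrt(alpha_bar[t])
        if t > 0:
            x = torch.sqrt(alpha_bar[t - 1]) * x0 + torch.sqrt(1 - alpha_bar[t - 1]) * eps
        else:
            x = x0
    return x  # [B, num_views, 3, H, W]

# Usage with a random stand-in text embedding.
views = sample_views(SharedDenoiser(), torch.randn(1, 64))
```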

GANeRF: Leveraging Discriminators to Optimize Neural Radiance Fields

no code implementations • 9 Jun 2023 • Barbara Roessle, Norman Müller, Lorenzo Porzi, Samuel Rota Bulò, Peter Kontschieder, Matthias Nießner

Neural Radiance Fields (NeRF) have shown impressive novel view synthesis results; nonetheless, even thorough recordings yield imperfections in reconstructions, for instance due to poorly observed areas or minor lighting changes.

3D Scene Reconstruction · Novel View Synthesis

DiffRF: Rendering-Guided 3D Radiance Field Diffusion

no code implementations • CVPR 2023 • Norman Müller, Yawar Siddiqui, Lorenzo Porzi, Samuel Rota Bulò, Peter Kontschieder, Matthias Nießner

We introduce DiffRF, a novel approach for 3D radiance field synthesis based on denoising diffusion probabilistic models.

Denoising
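The abstract above names denoising diffusion probabilistic models applied to 3D radiance fields. A minimal, hedged sketch of that ingredient follows, assuming an explicit voxel-grid radiance field; the toy 3D denoiser is a placeholder, and the rendering-based guidance mentioned in the title is omitted here.

```python
# Hypothetical sketch: the standard DDPM epsilon-prediction objective,
# applied to a voxelized radiance field instead of a 2D image.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VoxelDenoiser(nn.Module):
    """Toy 3D network predicting the noise added to a radiance-field grid."""
    def __init__(self, channels=4):  # e.g. RGB + density per voxel
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(channels, 32, 3, padding=1), nn.SiLU(),
            nn.Conv3d(32, channels, 3, padding=1),
        )

    def forward(self, x, t):
        return self.net(x)  # timestep conditioning omitted for brevity

def ddpm_loss(denoiser, x0, alpha_bar):
    """Epsilon-prediction loss on voxel grids x0 of shape [B, C, D, H, W]."""
    t = torch.randint(0, alpha_bar.shape[0], (x0.shape[0],))
    a = alpha_bar[t].view(-1, 1, 1, 1, 1)
    noise = torch.randn_like(x0)
    xt = torch.sqrt(a) * x0 + torch.sqrt(1 - a) * noise  # forward (noising) process
    return F.mse_loss(denoiser(xt, t), noise)

# Usage: one training step on random stand-in data.
betas = torch.linspace(1e-4, 0.02, 1000)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)
model = VoxelDenoiser()
loss = ddpm_loss(model, torch.randn(2, 4, 32, 32, 32), alpha_bar)
loss.backward()
```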

3D Multi-Object Tracking with Differentiable Pose Estimation

no code implementations • 28 Jun 2022 • Dominik Schmauser, Zeju Qiu, Norman Müller, Matthias Nießner

We propose a novel approach for joint 3D multi-object tracking and reconstruction from RGB-D sequences in indoor environments.

3D Multi-Object Tracking · Object · +1

AutoRF: Learning 3D Object Radiance Fields from Single View Observations

no code implementations • CVPR 2022 • Norman Müller, Andrea Simonelli, Lorenzo Porzi, Samuel Rota Bulò, Matthias Nießner, Peter Kontschieder

We introduce AutoRF, a new approach for learning neural 3D object representations where each object in the training set is observed from only a single view.

Novel View Synthesis · Object

Seeing Behind Objects for 3D Multi-Object Tracking in RGB-D Sequences

no code implementations • CVPR 2021 • Norman Müller, Yu-Shiang Wong, Niloy J. Mitra, Angela Dai, Matthias Nießner

From a sequence of RGB-D frames, we detect objects in each frame and learn to predict their complete object geometry as well as a dense correspondence mapping into a canonical space.

3D Multi-Object Tracking · Object
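The abstract above describes predicting, per detected object, complete geometry and a dense mapping into a canonical space, which can then be used to associate detections over time. The hedged sketch below shows only that last association step with generic per-object descriptors and Hungarian matching; the descriptors are stand-ins, not the paper's learned representation.

```python
# Hypothetical sketch: frame-to-frame object association via Hungarian matching
# on per-object descriptors (e.g. canonical-space codes or centroids).
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(prev_desc: np.ndarray, curr_desc: np.ndarray, max_dist: float = 0.5):
    """Match objects between two frames.

    prev_desc, curr_desc: [N, D] and [M, D] per-object descriptors.
    Returns (prev_idx, curr_idx) pairs whose distance is below max_dist.
    """
    cost = np.linalg.norm(prev_desc[:, None, :] - curr_desc[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < max_dist]

# Usage with random stand-in descriptors for two consecutive frames.
matches = associate(np.random.rand(3, 8), np.random.rand(4, 8))
print(matches)
```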
