Search Results for author: Norman Müller

Found 16 papers, 2 papers with code

Easy3D: A Simple Yet Effective Method for 3D Interactive Segmentation

no code implementations · 15 Apr 2025 · Andrea Simonelli, Norman Müller, Peter Kontschieder

The increasing availability of digital 3D environments, whether through image-based 3D reconstruction, generation, or scans obtained by robots, is driving innovation across various applications.

3D Reconstruction · Decoder · +1

Fillerbuster: Multi-View Scene Completion for Casual Captures

no code implementations · 7 Feb 2025 · Ethan Weber, Norman Müller, Yash Kant, Vasu Agrawal, Michael Zollhöfer, Angjoo Kanazawa, Christian Richardt

Our solution is to train a generative model that can consume a large context of input frames while generating unknown target views and recovering image poses when desired.

Coherent 3D Scene Diffusion From a Single RGB Image

no code implementations · 13 Dec 2024 · Manuel Dahnert, Angela Dai, Norman Müller, Matthias Nießner

Motivated by the ill-posed nature of the task, and to obtain consistent scene reconstruction results, we learn a generative scene prior by conditioning on all scene objects simultaneously, capturing the scene context and allowing the model to learn inter-object relationships throughout the diffusion process.

3D Scene Reconstruction

Multi-view Image Diffusion via Coordinate Noise and Fourier Attention

no code implementations · 4 Dec 2024 · Justin Theiss, Norman Müller, Daeil Kim, Aayush Prakash

Recently, text-to-image generation with diffusion models has made significant advances in both fidelity and generalization capability compared to previous baselines.

Text-to-Image Generation

L3DG: Latent 3D Gaussian Diffusion

no code implementations · 17 Oct 2024 · Barbara Roessle, Norman Müller, Lorenzo Porzi, Samuel Rota Bulò, Peter Kontschieder, Angela Dai, Matthias Nießner

We propose L3DG, the first approach to generative modeling of 3D Gaussians through a latent 3D Gaussian diffusion formulation.

Scene Generation

ConsistDreamer: 3D-Consistent 2D Diffusion for High-Fidelity Scene Editing

no code implementations · CVPR 2024 · Jun-Kun Chen, Samuel Rota Bulò, Norman Müller, Lorenzo Porzi, Peter Kontschieder, Yu-Xiong Wang

This paper proposes ConsistDreamer, a novel framework that lifts 2D diffusion models to 3D awareness and 3D consistency, enabling high-fidelity, instruction-guided scene editing.

ViewDiff: 3D-Consistent Image Generation with Text-to-Image Models

1 code implementation · CVPR 2024 · Lukas Höllein, Aljaž Božič, Norman Müller, David Novotny, Hung-Yu Tseng, Christian Richardt, Michael Zollhöfer, Matthias Nießner

In this paper, we present a method that leverages pretrained text-to-image models as a prior and learns to generate multi-view images in a single denoising process from real-world data.

Denoising · Image Generation · +1

GANeRF: Leveraging Discriminators to Optimize Neural Radiance Fields

no code implementations · 9 Jun 2023 · Barbara Roessle, Norman Müller, Lorenzo Porzi, Samuel Rota Bulò, Peter Kontschieder, Matthias Nießner

Neural Radiance Fields (NeRF) have shown impressive novel view synthesis results; nonetheless, even thorough recordings yield imperfect reconstructions, for instance due to poorly observed areas or minor lighting changes.

3D Scene Reconstruction · NeRF · +1

DiffRF: Rendering-Guided 3D Radiance Field Diffusion

no code implementations · CVPR 2023 · Norman Müller, Yawar Siddiqui, Lorenzo Porzi, Samuel Rota Bulò, Peter Kontschieder, Matthias Nießner

We introduce DiffRF, a novel approach for 3D radiance field synthesis based on denoising diffusion probabilistic models.

Denoising

3D Multi-Object Tracking with Differentiable Pose Estimation

no code implementations · 28 Jun 2022 · Dominik Schmauser, Zeju Qiu, Norman Müller, Matthias Nießner

We propose a novel approach for joint 3D multi-object tracking and reconstruction from RGB-D sequences in indoor environments.

3D Multi-Object Tracking · Graph Neural Network · +2

AutoRF: Learning 3D Object Radiance Fields from Single View Observations

no code implementations · CVPR 2022 · Norman Müller, Andrea Simonelli, Lorenzo Porzi, Samuel Rota Bulò, Matthias Nießner, Peter Kontschieder

We introduce AutoRF, a new approach for learning neural 3D object representations in which each object in the training set is observed from only a single view.

Novel View Synthesis · Object

Seeing Behind Objects for 3D Multi-Object Tracking in RGB-D Sequences

no code implementations · CVPR 2021 · Norman Müller, Yu-Shiang Wong, Niloy J. Mitra, Angela Dai, Matthias Nießner

From a sequence of RGB-D frames, we detect objects in each frame and learn to predict their complete object geometry, as well as a dense correspondence mapping into a canonical space.

3D Multi-Object Tracking · Object
