no code implementations • 15 Apr 2025 • Andrea Simonelli, Norman Müller, Peter Kontschieder
The increasing availability of digital 3D environments, whether through image-based 3D reconstruction, generation, or scans obtained by robots, is driving innovation across various applications.
no code implementations • 2 Apr 2025 • Tobias Fischer, Samuel Rota Bulò, Yung-Hsu Yang, Nikhil Varma Keetha, Lorenzo Porzi, Norman Müller, Katja Schwarz, Jonathon Luiten, Marc Pollefeys, Peter Kontschieder
3D Gaussian splatting enables high-quality novel view synthesis (NVS) at real-time frame rates.
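The real-time rendering speed of Gaussian splatting comes from rasterizing depth-sorted 2D splats with front-to-back alpha compositing. As a generic illustration (not the paper's implementation — the function name, array shapes, and the assumption of pre-sorted, pre-projected splats are all ours), the per-pixel compositing step can be sketched as:

```python
import numpy as np

def composite_pixel(colors, alphas):
    """Front-to-back alpha compositing of depth-sorted splats at one pixel.

    colors: (N, 3) RGB of splats, sorted near-to-far.
    alphas: (N,) per-splat opacity evaluated at this pixel.
    """
    out = np.zeros(3)
    transmittance = 1.0  # fraction of light not yet absorbed
    for c, a in zip(colors, alphas):
        out += transmittance * a * np.asarray(c)
        transmittance *= (1.0 - a)
        if transmittance < 1e-4:  # early termination, as in tile-based rasterizers
            break
    return out
```

For example, a half-opaque red splat in front of a half-opaque green one yields `[0.5, 0.25, 0.0]`: the second splat is attenuated by the first splat's remaining transmittance of 0.5.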
no code implementations • 7 Feb 2025 • Ethan Weber, Norman Müller, Yash Kant, Vasu Agrawal, Michael Zollhöfer, Angjoo Kanazawa, Christian Richardt
Our solution is to train a generative model that can consume a large context of input frames while generating unknown target views and recovering image poses when desired.
no code implementations • 13 Dec 2024 • Manuel Dahnert, Angela Dai, Norman Müller, Matthias Nießner
Motivated by the ill-posed nature of the task, and to obtain consistent scene reconstructions, we learn a generative scene prior: conditioning on all scene objects simultaneously captures the scene context and allows the model to learn inter-object relationships throughout the diffusion process.
no code implementations • 4 Dec 2024 • Justin Theiss, Norman Müller, Daeil Kim, Aayush Prakash
Recently, text-to-image generation with diffusion models has achieved significantly higher fidelity and better generalization than previous baselines.
no code implementations • 17 Oct 2024 • Barbara Roessle, Norman Müller, Lorenzo Porzi, Samuel Rota Bulò, Peter Kontschieder, Angela Dai, Matthias Nießner
We propose L3DG, the first approach for generative 3D modeling of 3D Gaussians through a latent 3D Gaussian diffusion formulation.
no code implementations • CVPR 2024 • Norman Müller, Katja Schwarz, Barbara Roessle, Lorenzo Porzi, Samuel Rota Bulò, Matthias Nießner, Peter Kontschieder
We introduce MultiDiff, a novel approach for consistent novel view synthesis of scenes from a single RGB image.
no code implementations • CVPR 2024 • Jun-Kun Chen, Samuel Rota Bulò, Norman Müller, Lorenzo Porzi, Peter Kontschieder, Yu-Xiong Wang
This paper proposes ConsistDreamer - a novel framework that lifts 2D diffusion models with 3D awareness and 3D consistency, thus enabling high-fidelity instruction-guided scene editing.
1 code implementation • CVPR 2024 • Lukas Höllein, Aljaž Božič, Norman Müller, David Novotny, Hung-Yu Tseng, Christian Richardt, Michael Zollhöfer, Matthias Nießner
In this paper, we present a method that leverages pretrained text-to-image models as a prior and learns to generate multi-view images in a single denoising process from real-world data.
no code implementations • 28 Nov 2023 • Zhengming Yu, Zhiyang Dou, Xiaoxiao Long, Cheng Lin, Zekun Li, Yuan Liu, Norman Müller, Taku Komura, Marc Habermann, Christian Theobalt, Xin Li, Wenping Wang
The experiments demonstrate the superior performance of Surf-D in shape generation across multiple modalities as conditions.
no code implementations • 9 Jun 2023 • Barbara Roessle, Norman Müller, Lorenzo Porzi, Samuel Rota Bulò, Peter Kontschieder, Matthias Nießner
Neural Radiance Fields (NeRF) have shown impressive novel view synthesis results; nonetheless, even thorough recordings yield imperfections in reconstructions, for instance due to poorly observed areas or minor lighting changes.
1 code implementation • CVPR 2023 • Yawar Siddiqui, Lorenzo Porzi, Samuel Rota Bulò, Norman Müller, Matthias Nießner, Angela Dai, Peter Kontschieder
We propose Panoptic Lifting, a novel approach for learning panoptic 3D volumetric representations from images of in-the-wild scenes.
no code implementations • CVPR 2023 • Norman Müller, Yawar Siddiqui, Lorenzo Porzi, Samuel Rota Bulò, Peter Kontschieder, Matthias Nießner
We introduce DiffRF, a novel approach for 3D radiance field synthesis based on denoising diffusion probabilistic models.
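Denoising diffusion probabilistic models, which DiffRF builds on, are trained by noising a clean sample and learning to reverse the corruption. As a minimal, generic sketch of the standard DDPM forward (noising) step — the function name, the two-timestep schedule, and the use of NumPy instead of a deep-learning framework are illustrative assumptions, not DiffRF's actual code — the closed-form corruption at timestep t is:

```python
import numpy as np

def ddpm_forward(x0, t, alphas_cumprod, rng):
    """Standard DDPM forward process: x_t = sqrt(a_bar_t) * x_0 + sqrt(1 - a_bar_t) * eps.

    x0: clean sample (any array shape, e.g. a flattened radiance-field grid).
    t: integer timestep indexing into the cumulative noise schedule.
    alphas_cumprod: precomputed cumulative products a_bar_t of the schedule.
    rng: a numpy Generator; returns the noised sample and the noise used.
    """
    a_bar = alphas_cumprod[t]
    noise = rng.standard_normal(x0.shape)
    xt = np.sqrt(a_bar) * x0 + np.sqrt(1.0 - a_bar) * noise
    return xt, noise
```

A denoiser network is then trained to predict `noise` from `xt` and `t`; at `a_bar = 1` the sample is untouched, and as `a_bar` approaches 0 it becomes pure Gaussian noise.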
no code implementations • 28 Jun 2022 • Dominik Schmauser, Zeju Qiu, Norman Müller, Matthias Nießner
We propose a novel approach for joint 3D multi-object tracking and reconstruction from RGB-D sequences in indoor environments.
no code implementations • CVPR 2022 • Norman Müller, Andrea Simonelli, Lorenzo Porzi, Samuel Rota Bulò, Matthias Nießner, Peter Kontschieder
We introduce AutoRF - a new approach for learning neural 3D object representations where each object in the training set is observed by only a single view.
no code implementations • CVPR 2021 • Norman Müller, Yu-Shiang Wong, Niloy J. Mitra, Angela Dai, Matthias Nießner
From a sequence of RGB-D frames, we detect objects in each frame and learn to predict their complete object geometry as well as a dense correspondence mapping into a canonical space.