2 code implementations • 13 Oct 2021 • Darius Rückert, Linus Franke, Marc Stamminger
Like other neural renderers, our system takes as input calibrated camera images and a proxy geometry of the scene, in our case a point cloud.
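The calibrated-camera-plus-point-cloud input described above can be illustrated with a minimal pinhole projection of world points into image space. This is a generic sketch, not the paper's renderer; the function and argument names are hypothetical.

```python
import numpy as np

def project_points(points, K, R, t):
    """Project 3D world points into a calibrated pinhole camera.

    points: (N, 3) world-space point cloud
    K:      (3, 3) camera intrinsics
    R, t:   world-to-camera rotation (3, 3) and translation (3,)
    Returns (N, 2) pixel coordinates and (N,) depths.
    """
    cam = points @ R.T + t              # world -> camera space
    uvw = cam @ K.T                     # camera -> homogeneous image space
    depth = uvw[:, 2]
    pix = uvw[:, :2] / depth[:, None]   # perspective divide
    return pix, depth

# Example: a point 2 m straight ahead of a camera at the origin
# projects onto the principal point (50, 50).
K = np.array([[100.0, 0, 50], [0, 100.0, 50], [0, 0, 1]])
pix, depth = project_points(np.array([[0.0, 0.0, 2.0]]), K, np.eye(3), np.zeros(3))
```

A real pipeline would obtain `K`, `R`, `t` from the calibration and the point cloud from structure from motion or a scanner.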
1 code implementation • 8 Nov 2023 • Linus Franke, Darius Rückert, Laura Fink, Matthias Innmann, Marc Stamminger
Our results show that our approach can refine point clouds obtained via structure from motion and thereby significantly improve novel view synthesis quality.
1 code implementation • 28 Nov 2023 • Laura Fink, Darius Rückert, Linus Franke, Joachim Keinert, Marc Stamminger
From the RGB-D input stream, novel views are rendered by projecting neural features into the target view via a densely fused depth map and aggregating them in image space into a target feature map.
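The project-then-aggregate step described above can be sketched as a forward warp: back-project each source pixel with its fused depth, transform it into the target view, and accumulate the attached feature there. This toy version assumes a shared pinhole intrinsic matrix and plain averaging of overlapping features (the actual method aggregates learned neural features); all names are hypothetical.

```python
import numpy as np

def warp_features(feat, depth, K, R, t, out_shape):
    """Forward-warp per-pixel features from a source view into a target view.

    feat:  (H, W, C) source feature map
    depth: (H, W) fused depth map for the source view
    K:     (3, 3) shared intrinsics; R, t: source-to-target rigid transform
    Accumulates features per target pixel and averages overlaps.
    """
    H, W, C = feat.shape
    Ho, Wo = out_shape
    ys, xs = np.mgrid[0:H, 0:W]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).astype(float)
    # back-project to 3D using the depth map, then move into the target frame
    pts = (pix @ np.linalg.inv(K).T) * depth.reshape(-1, 1)
    pts = pts @ R.T + t
    uvw = pts @ K.T
    u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
    v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
    valid = (uvw[:, 2] > 0) & (u >= 0) & (u < Wo) & (v >= 0) & (v < Ho)
    acc = np.zeros((Ho, Wo, C))
    cnt = np.zeros((Ho, Wo, 1))
    np.add.at(acc, (v[valid], u[valid]), feat.reshape(-1, C)[valid])
    np.add.at(cnt, (v[valid], u[valid]), 1.0)
    return acc / np.maximum(cnt, 1)     # averaged target feature map

# Sanity check: an identity pose with unit depth reproduces the source map.
K = np.array([[50.0, 0, 16], [0, 50.0, 16], [0, 0, 1]])
feat = np.arange(8 * 8 * 4, dtype=float).reshape(8, 8, 4)
out = warp_features(feat, np.ones((8, 8)), K, np.eye(3), np.zeros(3), (8, 8))
```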
1 code implementation • 11 Jan 2024 • Linus Franke, Darius Rückert, Laura Fink, Marc Stamminger
In this paper, we present TRIPS (Trilinear Point Splatting), an approach that combines ideas from both Gaussian Splatting and ADOP.
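The trilinear splatting idea can be sketched as writing each projected point bilinearly into the two image-pyramid levels that bracket its screen-space size, with a linear blend between the levels (bilinear in x, y plus linear across levels = trilinear). This is an illustrative toy under those assumptions, not the TRIPS implementation; all names are hypothetical.

```python
import numpy as np

def trilinear_splat(pyramid, x, y, size, value):
    """Splat one projected point into an image pyramid, trilinearly.

    pyramid: list of (H_l, W_l) float images; level l halves resolution.
    x, y:    screen position at full resolution; size: radius in pixels.
    log2(size) selects the two bracketing pyramid levels to blend between.
    """
    lvl = np.clip(np.log2(max(size, 1e-6)), 0, len(pyramid) - 2)
    l0 = int(np.floor(lvl))
    wl = lvl - l0                        # blend weight between levels l0, l0+1
    for l, w_level in ((l0, 1 - wl), (l0 + 1, wl)):
        if w_level == 0:
            continue
        img = pyramid[l]
        xs, ys = x / 2**l, y / 2**l      # point position at this level
        x0, y0 = int(np.floor(xs)), int(np.floor(ys))
        fx, fy = xs - x0, ys - y0
        for dy in (0, 1):                # bilinear 2x2 write at this level
            for dx in (0, 1):
                w = ((1 - fx) if dx == 0 else fx) * ((1 - fy) if dy == 0 else fy)
                if 0 <= y0 + dy < img.shape[0] and 0 <= x0 + dx < img.shape[1]:
                    img[y0 + dy, x0 + dx] += w_level * w * value

# Example: a point with a 2-pixel footprint lands entirely on pyramid level 1.
pyr = [np.zeros((8, 8)), np.zeros((4, 4)), np.zeros((2, 2))]
trilinear_splat(pyr, x=2.0, y=2.0, size=2.0, value=5.0)
```

In the actual pipeline such splats carry neural features rather than scalars and are followed by a lightweight neural network that decodes the feature pyramid into an image.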
no code implementations • 25 Mar 2024 • Florian Hahlbohm, Linus Franke, Moritz Kappel, Susana Castillo, Marc Stamminger, Marcus Magnor
We introduce a new approach for reconstruction and novel-view synthesis of unbounded real-world scenes.