Search Results for author: Michael Oechsle

Found 9 papers, 6 papers with code

Gaussians-to-Life: Text-Driven Animation of 3D Gaussian Splatting Scenes

1 code implementation • 28 Nov 2024 • Thomas Wimmer, Michael Oechsle, Michael Niemeyer, Federico Tombari

Our key idea is to leverage powerful video diffusion models as the generative component of our model and to combine these with a robust technique to lift 2D videos into meaningful 3D motion.

Novel View Synthesis
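
To make the Gaussians-to-Life idea of "lifting 2D videos into meaningful 3D motion" concrete, here is a minimal sketch of one common way to lift 2D pixel displacements into 3D: unproject the start and end pixels of a 2D flow vector with per-frame depth and take the difference. The function names, the use of per-frame depth, and the exact role of this step in the authors' pipeline are assumptions for illustration only.

```python
import torch

def unproject(uv, depth, K_inv):
    """Lift pixel coordinates (N, 2) with depth (N,) to camera-space 3D points (N, 3)."""
    ones = torch.ones_like(uv[:, :1])
    rays = (K_inv @ torch.cat([uv, ones], dim=1).T).T  # homogeneous pixels -> viewing rays
    return rays * depth[:, None]

def lift_2d_motion(uv, flow, depth_t, depth_t1, K):
    """Turn a 2D flow field sampled at pixels `uv` into 3D displacement vectors.

    uv:       (N, 2) pixel locations at frame t
    flow:     (N, 2) 2D displacement from frame t to frame t+1
    depth_t, depth_t1: (N,) depth at the pixel in frames t and t+1
    K:        (3, 3) camera intrinsics
    """
    K_inv = torch.linalg.inv(K)
    p_t = unproject(uv, depth_t, K_inv)
    p_t1 = unproject(uv + flow, depth_t1, K_inv)
    return p_t1 - p_t  # per-point 3D motion that could drive Gaussian centers
```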

Evolutive Rendering Models

no code implementations • 27 May 2024 • Fangneng Zhan, Hanxue Liang, Yifan Wang, Michael Niemeyer, Michael Oechsle, Adam Kortylewski, Cengiz Oztireli, Gordon Wetzstein, Christian Theobalt

Central to this framework is the development of differentiable versions of these rendering elements, allowing for effective gradient backpropagation from the final rendering objectives.
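
The following toy example illustrates what "differentiable rendering elements" enable in practice: scene parameters receive gradients directly through a rendering loss. The soft point-splatting function below is a hypothetical stand-in chosen for brevity, not the paper's formulation.

```python
import torch

# Toy scene: 2D point positions and colors, both optimizable.
points = torch.randn(100, 2, requires_grad=True)
colors = torch.rand(100, 3, requires_grad=True)

def soft_render(points, colors, grid, sigma=0.05):
    """Differentiable 'splat' of colored points onto a pixel grid via Gaussian weights."""
    d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(-1)   # (P, N) squared distances
    w = torch.softmax(-d2 / (2 * sigma ** 2), dim=1)
    return w @ colors                                             # (P, 3) rendered colors

ys, xs = torch.meshgrid(torch.linspace(-1, 1, 32), torch.linspace(-1, 1, 32), indexing="ij")
grid = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
target = torch.rand(grid.shape[0], 3)                             # stand-in target image

opt = torch.optim.Adam([points, colors], lr=1e-2)
for _ in range(100):
    loss = ((soft_render(points, colors, grid) - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()                  # gradients flow through the renderer
```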

Splat-SLAM: Globally Optimized RGB-only SLAM with 3D Gaussians

1 code implementation • 26 May 2024 • Erik Sandström, Keisuke Tateno, Michael Oechsle, Michael Niemeyer, Luc van Gool, Martin R. Oswald, Federico Tombari

In response, we propose the first RGB-only SLAM system with a dense 3D Gaussian map representation that retains the full benefits of globally optimized tracking: the 3D Gaussian map is actively deformed to adapt to keyframe pose and depth updates.

3D Reconstruction • Simultaneous Localization and Mapping
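
A rough sketch of the deformation idea: Gaussians anchored to a keyframe are rigidly re-mapped whenever global optimization (e.g. bundle adjustment or loop closure) updates that keyframe's pose. The data layout and function names below are assumptions, not Splat-SLAM's actual interfaces.

```python
import torch

def deform_gaussians(means, T_old, T_new):
    """Rigidly re-map Gaussian centers anchored to a keyframe whose world pose
    changed from T_old to T_new (both 4x4 camera-to-world matrices).

    means: (N, 3) Gaussian centers in world coordinates.
    """
    T_rel = T_new @ torch.linalg.inv(T_old)                            # pose correction in world frame
    homo = torch.cat([means, torch.ones_like(means[:, :1])], dim=1)    # (N, 4) homogeneous points
    return (homo @ T_rel.T)[:, :3]

# After a global optimization step updates keyframe poses, each keyframe's
# Gaussians are deformed to stay consistent with the new trajectory, e.g.:
# for kf in keyframes:
#     kf.means = deform_gaussians(kf.means, kf.pose_before, kf.pose_after)
```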

RadSplat: Radiance Field-Informed Gaussian Splatting for Robust Real-Time Rendering with 900+ FPS

no code implementations • 20 Mar 2024 • Michael Niemeyer, Fabian Manhardt, Marie-Julie Rakotosaona, Michael Oechsle, Daniel Duckworth, Rama Gosula, Keisuke Tateno, John Bates, Dominik Kaeser, Federico Tombari

First, we use radiance fields as a prior and supervision signal for optimizing point-based scene representations, leading to improved quality and more robust optimization.

Novel View Synthesis
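
The supervision idea can be summarized in a short training-loop sketch: a pretrained radiance field renders target views that supervise the point-based model. `nerf.render(pose)` and `gaussians.render(pose)` are hypothetical interfaces assumed for this illustration, and the loss and schedule are placeholders rather than RadSplat's actual recipe.

```python
import torch

def train_point_representation(gaussians, nerf, sample_pose, num_steps=30000):
    """Optimize a point-based (Gaussian) scene using a trained radiance field as
    prior and supervision signal, instead of relying only on the raw input images."""
    opt = torch.optim.Adam(gaussians.parameters(), lr=1e-3)
    for _ in range(num_steps):
        pose = sample_pose()                       # any camera, not only captured views
        with torch.no_grad():
            target = nerf.render(pose)             # radiance field provides a robust target
        pred = gaussians.render(pose)
        loss = torch.nn.functional.l1_loss(pred, target)
        opt.zero_grad(); loss.backward(); opt.step()
```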

Learning Implicit Surface Light Fields

3 code implementations • 27 Mar 2020 • Michael Oechsle, Michael Niemeyer, Lars Mescheder, Thilo Strauss, Andreas Geiger

In this work, we propose a novel implicit representation for capturing the visual appearance of an object in terms of its surface light field.

3D Reconstruction • Image Generation +1
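
A schematic of what an implicit surface light field can look like: a network mapping a surface location, a viewing direction, and a conditioning code (e.g. lighting or object identity) to an RGB value. The layer sizes and conditioning scheme here are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class SurfaceLightField(nn.Module):
    """Implicit appearance model: color = f(surface point, viewing direction, condition code)."""
    def __init__(self, cond_dim=128, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + 3 + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),    # RGB in [0, 1]
        )

    def forward(self, points, view_dirs, cond):
        # points: (N, 3) surface locations, view_dirs: (N, 3), cond: (N, cond_dim)
        return self.net(torch.cat([points, view_dirs, cond], dim=-1))
```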

Texture Fields: Learning Texture Representations in Function Space

no code implementations • ICCV 2019 • Michael Oechsle, Lars Mescheder, Michael Niemeyer, Thilo Strauss, Andreas Geiger

A major reason for these limitations is that common representations of texture are inefficient or difficult to interface with modern deep learning techniques.
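
A minimal sketch of the function-space idea the title refers to: texture modeled as a continuous mapping from 3D coordinates (plus shape and image embeddings) to color, rather than a discrete UV map or voxel grid. The dimensions and conditioning scheme below are illustrative assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class TextureField(nn.Module):
    """Texture as a continuous function: RGB = f(3D point, shape embedding, image embedding)."""
    def __init__(self, shape_dim=256, img_dim=256, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + shape_dim + img_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, xyz, z_shape, z_img):
        # xyz: (N, 3) query points on or near the surface; embeddings are broadcast per point
        return self.net(torch.cat([xyz, z_shape, z_img], dim=-1))
```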
