Search Results for author: Erik Sandström

Found 6 papers, 4 papers with code

GlORIE-SLAM: Globally Optimized RGB-only Implicit Encoding Point Cloud SLAM

no code implementations • 28 Mar 2024 • Ganlin Zhang, Erik Sandström, Youmin Zhang, Manthan Patel, Luc van Gool, Martin R. Oswald

To alleviate this issue, we leverage a monocular depth estimator and introduce a novel DSPO layer for bundle adjustment, which jointly optimizes the poses and depths of keyframes along with the scale of the monocular depth.

Simultaneous Localization and Mapping
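
The snippet above compresses the method, but the scale term is easy to picture: monocular depth predictions are only correct up to an unknown scale (and shift), which must be reconciled with the geometry the SLAM system reconstructs. As a rough, illustrative sketch of that sub-problem only (not the authors' DSPO layer, which optimizes poses and keyframe depths jointly in bundle adjustment), a closed-form least-squares alignment could look like this; all names are hypothetical:

```python
import numpy as np

def align_mono_depth(mono_depth, sparse_depth, mask):
    """Least-squares scale/shift alignment of a monocular depth map to
    sparse metric depth. Illustrative stand-in for the scale variable
    that DSPO optimizes alongside pose and keyframe depth."""
    x = mono_depth[mask]           # monocular predictions at valid pixels
    y = sparse_depth[mask]         # sparse depth from multi-view geometry
    A = np.stack([x, np.ones_like(x)], axis=1)
    (scale, shift), *_ = np.linalg.lstsq(A, y, rcond=None)
    return scale * mono_depth + shift

# toy usage: pretend the metric depth is a scaled/shifted version
mono = np.random.rand(64, 64) + 0.5
sparse = 2.0 * mono + 0.1
aligned = align_mono_depth(mono, sparse, sparse > 0)
```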

How NeRFs and 3D Gaussian Splatting are Reshaping SLAM: a Survey

1 code implementation • 20 Feb 2024 • Fabio Tosi, Youmin Zhang, Ziren Gong, Erik Sandström, Stefano Mattoccia, Martin R. Oswald, Matteo Poggi

Over the past two decades, research in the field of Simultaneous Localization and Mapping (SLAM) has undergone a significant evolution, highlighting its critical role in enabling autonomous exploration of unknown environments.

Simultaneous Localization and Mapping

Loopy-SLAM: Dense Neural SLAM with Loop Closures

no code implementations • 14 Feb 2024 • Lorenzo Liso, Erik Sandström, Vladimir Yugay, Luc van Gool, Martin R. Oswald

Neural RGBD SLAM techniques have shown promise in dense Simultaneous Localization and Mapping (SLAM), yet they face challenges such as error accumulation during camera tracking, resulting in distorted maps.

Simultaneous Localization and Mapping
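
For intuition on why loop closures matter here: tracking error accumulates along the trajectory, and detecting a revisited place yields a constraint that lets the system redistribute that accumulated drift. Loop-closure methods typically solve a pose graph to do this properly; the toy sketch below (purely illustrative, not the Loopy-SLAM algorithm) only shows the redistribution idea with a hypothetical linear correction:

```python
import numpy as np

def distribute_loop_closure(traj, loop_error):
    """Toy drift correction: linearly spread the translational error
    observed at a loop closure back over the trajectory. Real systems
    solve a pose graph instead; this only illustrates why closing a
    loop removes accumulated drift from the map."""
    n = len(traj)
    weights = np.linspace(0.0, 1.0, n)[:, None]   # later poses drifted more
    return traj - weights * loop_error

# toy usage: a square path whose endpoint drifted away from the start
traj = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.2, 0.1]], dtype=float)
error = traj[-1] - traj[0]   # closure says the last pose should equal the first
corrected = distribute_loop_closure(traj, error)
```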

UncLe-SLAM: Uncertainty Learning for Dense Neural SLAM

1 code implementation • 19 Jun 2023 • Erik Sandström, Kevin Ta, Luc van Gool, Martin R. Oswald

We present an uncertainty learning framework for dense neural simultaneous localization and mapping (SLAM).

Simultaneous Localization and Mapping
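
The listing does not show how the uncertainty is supervised. A common, generic recipe for learning per-pixel depth uncertainty without ground-truth error labels is an aleatoric negative log-likelihood, sketched below; this is a standard formulation, not necessarily the exact loss used in UncLe-SLAM:

```python
import torch

def laplace_nll(pred_depth, gt_depth, log_b):
    """Generic aleatoric-uncertainty loss (Laplace NLL):
    |d - d_hat| / b + log b, with b = exp(log_b) predicted per pixel.
    Pixels the network marks as uncertain (large b) contribute less to
    the depth residual but pay a log penalty, so uncertainty cannot
    grow without bound."""
    b = torch.exp(log_b)
    return (torch.abs(pred_depth - gt_depth) / b + log_b).mean()

# toy usage
pred = torch.rand(1, 1, 32, 32, requires_grad=True)
gt = torch.rand(1, 1, 32, 32)
log_b = torch.zeros_like(pred, requires_grad=True)  # uncertainty head output
loss = laplace_nll(pred, gt, log_b)
loss.backward()
```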

Point-SLAM: Dense Neural Point Cloud-based SLAM

2 code implementations • ICCV 2023 • Erik Sandström, Yue Li, Luc van Gool, Martin R. Oswald

We propose a dense neural simultaneous localization and mapping (SLAM) approach for monocular RGBD input, which anchors the features of a neural scene representation in a point cloud that is generated iteratively in an input-dependent, data-driven manner.

Simultaneous Localization and Mapping
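
The key idea in the abstract, anchoring neural features at 3D points rather than on a fixed grid, can be illustrated with a minimal query routine: features live on the point cloud and are interpolated for arbitrary 3D samples. Everything below (class name, k-NN interpolation scheme, dimensions) is an illustrative assumption, not the official Point-SLAM code:

```python
import numpy as np
from scipy.spatial import cKDTree

class NeuralPointCloud:
    """Minimal sketch of anchoring learnable features at 3D points and
    querying them by inverse-distance-weighted k-NN interpolation.
    Point-SLAM's actual representation and densification rules live in
    the official implementation."""
    def __init__(self, points, feat_dim=32):
        self.points = points
        self.feats = np.random.randn(len(points), feat_dim) * 0.01
        self.tree = cKDTree(points)

    def query(self, x, k=4, eps=1e-8):
        dist, idx = self.tree.query(x, k=k)        # (N, k) neighbors
        w = 1.0 / (dist + eps)                     # inverse-distance weights
        w /= w.sum(axis=1, keepdims=True)
        return (w[..., None] * self.feats[idx]).sum(axis=1)

# toy usage: query features at sample points along camera rays
cloud = NeuralPointCloud(np.random.rand(1000, 3))
samples = np.random.rand(5, 3)
features = cloud.query(samples)                    # (5, 32), fed to a decoder
```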

Learning Online Multi-Sensor Depth Fusion

1 code implementation • 7 Apr 2022 • Erik Sandström, Martin R. Oswald, Suryansh Kumar, Silvan Weder, Fisher Yu, Cristian Sminchisescu, Luc van Gool

Multi-sensor depth fusion can substantially improve the robustness and accuracy of 3D reconstruction methods, but existing techniques are not robust enough to handle sensors that operate with diverse value ranges as well as differing noise and outlier statistics.

3D Reconstruction • Mixed Reality • +1
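
For context on what a depth-fusion baseline looks like: the classical hand-crafted approach weights each sensor's per-pixel depth by its inverse noise variance, which breaks down exactly when the noise statistics are unknown or outlier-heavy, the regime this paper targets with learned online fusion. A minimal sketch of the hand-crafted baseline (illustrative, not from the paper):

```python
import numpy as np

def inverse_variance_fuse(depths, variances):
    """Classical baseline for multi-sensor depth fusion: average per-pixel
    depth estimates weighted by inverse noise variance. Requires the
    variances to be known and outliers to be absent, which is exactly
    where learned online fusion is meant to be more robust."""
    w = 1.0 / np.stack(variances)          # (num_sensors, H, W)
    d = np.stack(depths)
    return (w * d).sum(axis=0) / w.sum(axis=0)

# toy usage: a precise ToF sensor and a noisier stereo estimate
tof = 2.0 + np.random.normal(0, 0.01, (48, 64))
stereo = 2.0 + np.random.normal(0, 0.10, (48, 64))
fused = inverse_variance_fuse(
    [tof, stereo],
    [np.full((48, 64), 0.01 ** 2), np.full((48, 64), 0.10 ** 2)],
)
```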
