Search Results for author: Matteo Vaghi

Found 5 papers, 3 papers with code

Uncertainty-Aware DNN for Multi-Modal Camera Localization

no code implementations • 2 Nov 2022 • Matteo Vaghi, Augusto Luis Ballardini, Simone Fontana, Domenico Giorgio Sorrenti

In the literature, uncertainty estimation in Deep Neural Networks (DNNs) is often performed through sampling methods, such as Monte Carlo Dropout (MCD) and Deep Ensemble (DE), at the cost of longer execution times or increased hardware resources.

Autonomous Driving, Camera Localization, +1
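As a rough illustration of the sampling-based baselines the abstract refers to, the sketch below shows Monte Carlo Dropout: dropout is left active at inference and the network is run several times to obtain a predictive mean and variance. The toy model, dimensions, and function names are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn

# Toy regression head; the architecture and sizes are illustrative only.
model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Dropout(p=0.2),   # kept stochastic at inference for MC Dropout
    nn.Linear(64, 6),    # e.g. a 6-DoF pose-like output
)

def mc_dropout_predict(model, x, n_samples=20):
    """Run n_samples stochastic forward passes with dropout enabled
    and return the predictive mean and per-output variance."""
    model.train()  # train mode keeps Dropout layers active
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.var(dim=0)

x = torch.randn(1, 128)
mean, variance = mc_dropout_predict(model, x)  # cost grows with n_samples
```

The repeated forward passes are exactly where the extra execution time comes from, which is the overhead the paper's abstract identifies as the drawback of sampling methods such as MCD and DE.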

LCDNet: Deep Loop Closure Detection and Point Cloud Registration for LiDAR SLAM

1 code implementation • 8 Mar 2021 • Daniele Cattaneo, Matteo Vaghi, Abhinav Valada

Loop closure detection is an essential component of Simultaneous Localization and Mapping (SLAM) systems, which reduces the drift accumulated over time.

Autonomous Driving, Loop Closure Detection, +2

A Benchmark for Point Clouds Registration Algorithms

1 code implementation • 28 Mar 2020 • Simone Fontana, Daniele Cattaneo, Augusto Luis Ballardini, Matteo Vaghi, Domenico Giorgio Sorrenti

In this way, we are able to cover many kinds of environments and many kinds of sensors that can produce point clouds.

Global visual localization in LiDAR-maps through shared 2D-3D embedding space

no code implementations • 2 Oct 2019 • Daniele Cattaneo, Matteo Vaghi, Simone Fontana, Augusto Luis Ballardini, Domenico Giorgio Sorrenti

In this work, we leverage Deep Neural Network (DNN) approaches to create a shared embedding space between images and LiDAR-maps, allowing for image-to-3D-LiDAR place recognition.

Autonomous Driving, Image to 3D, +1
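As a hedged sketch of how a shared 2D-3D embedding space can be used for place recognition, the snippet below treats retrieval as nearest-neighbour search over L2-normalised embeddings; the encoders are replaced by random placeholders, and all names and dimensions are assumptions rather than the paper's implementation.

```python
import torch
import torch.nn.functional as F

D = 256  # assumed embedding dimensionality

# Placeholders for the outputs of an image encoder and a LiDAR-map encoder
# that have been trained to map into the same D-dimensional space.
image_embedding = F.normalize(torch.randn(1, D), dim=1)    # query camera image
map_embeddings = F.normalize(torch.randn(1000, D), dim=1)  # database of LiDAR-map places

# Place recognition as cosine-similarity retrieval in the shared space.
similarity = image_embedding @ map_embeddings.t()           # shape (1, 1000)
best_match = similarity.argmax(dim=1)                       # index of the retrieved place
```

Because both modalities live in one embedding space, a single dot-product search suffices for image to 3D-LiDAR retrieval; how the two encoders are trained to align is the contribution of the paper itself.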

CMRNet: Camera to LiDAR-Map Registration

2 code implementations • 24 Jun 2019 • Daniele Cattaneo, Matteo Vaghi, Augusto Luis Ballardini, Simone Fontana, Domenico Giorgio Sorrenti, Wolfram Burgard

In this paper, we present CMRNet, a real-time approach based on a Convolutional Neural Network to localize an RGB image of a scene in a map built from LiDAR data.

Camera Localization
