no code implementations • 26 Feb 2019 • Georgi Tinchev, Adrian Penate-Sanchez, Maurice Fallon
Localization in challenging, natural environments such as forests or woodlands is an important capability for many applications from guiding a robot navigating along a forest trail to monitoring vegetation growth with handheld sensors.
no code implementations • 10 Dec 2019 • Georgi Tinchev, Adrian Penate-Sanchez, Maurice Fallon
We present SKD, a novel keypoint detector that uses saliency to determine the best candidates from a point cloud for tasks such as registration and reconstruction.
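The core idea, picking registration candidates by saliency, can be sketched in a few lines. This is a minimal illustration assuming per-point saliency scores are already available (SKD learns them with a network); the function name and top-k rule are illustrative, not the paper's exact pipeline.

```python
import numpy as np

def select_salient_keypoints(points, saliency, k=64):
    """Return the k highest-saliency points from an (N, 3) point cloud.

    Hedged sketch: `saliency` is assumed to be a precomputed (N,) score
    vector; SKD itself predicts these scores with a learned detector.
    """
    order = np.argsort(saliency)[::-1]  # indices sorted by descending saliency
    return points[order[:k]]
```

The selected subset can then feed a registration method such as ICP in place of the full cloud.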
no code implementations • 28 Jan 2020 • Milad Ramezani, Georgi Tinchev, Egor Iuganov, Maurice Fallon
The efficiency of our method comes from a network architecture carefully designed to minimize the parameter count, so that this deep learning method can run in real time using only the CPU of a legged robot, which is a major contribution of this work.

no code implementations • 13 Nov 2020 • David Wisth, Marco Camurri, Sandipan Das, Maurice Fallon
True integration of lidar features with standard visual features and IMU is made possible using a subtle passive synchronization of lidar and camera frames.
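Passive synchronization of this kind can be illustrated by pairing each lidar sweep with the camera frame whose timestamp is closest, within a tolerance. This is a hypothetical sketch (the function name, tolerance, and nearest-neighbour rule are assumptions, not the paper's exact mechanism):

```python
import numpy as np

def pair_nearest_frames(lidar_stamps, camera_stamps, max_offset=0.05):
    """Match each lidar timestamp to the nearest camera timestamp.

    Sketch of passive (software-only) synchronization: no hardware
    trigger, just nearest-timestamp pairing within `max_offset` seconds.
    Returns a list of (lidar_index, camera_index) pairs.
    """
    camera_stamps = np.asarray(camera_stamps, dtype=float)
    pairs = []
    for i, t in enumerate(lidar_stamps):
        j = int(np.argmin(np.abs(camera_stamps - t)))  # closest camera frame
        if abs(camera_stamps[j] - t) <= max_offset:
            pairs.append((i, j))
    return pairs
```

Sweeps with no camera frame inside the tolerance are simply dropped rather than force-matched.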
1 code implementation • 5 Dec 2020 • Siddhant Gangapurwala, Mathieu Geisert, Romeo Orsolino, Maurice Fallon, Ioannis Havoutis
We evaluate the robustness of our method over a wide variety of complex terrains.
no code implementations • 15 Jul 2021 • David Wisth, Marco Camurri, Maurice Fallon
This bias is observable because of the tight fusion of this preintegrated velocity factor with vision, lidar, and IMU factors.
no code implementations • 3 Aug 2021 • Alexander Proudman, Milad Ramezani, Maurice Fallon
While mobile LiDAR sensors are increasingly used to scan in ecology and forestry applications, reconstruction and characterisation are, to the best of our knowledge, typically carried out offline.
no code implementations • 13 Sep 2021 • Lintong Zhang, David Wisth, Marco Camurri, Maurice Fallon
We present a multi-camera visual-inertial odometry system based on factor graph optimization which estimates motion by using all cameras simultaneously while retaining a fixed overall feature budget.
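One way to keep an overall feature budget fixed across cameras is to split it in proportion to how many features each camera currently tracks. The sketch below is illustrative only; the proportional rule and function name are assumptions, not the paper's exact allocation policy.

```python
def allocate_feature_budget(tracked_per_camera, total_budget=300):
    """Split a fixed feature budget across cameras proportionally to
    the number of features each camera currently tracks.

    Hedged sketch: remainders from rounding go to the cameras with the
    largest fractional share, so the allocations always sum to the budget.
    """
    n = len(tracked_per_camera)
    total = sum(tracked_per_camera)
    if total == 0:
        base, rem = divmod(total_budget, n)  # nothing tracked yet: split evenly
        return [base + (1 if i < rem else 0) for i in range(n)]
    shares = [t * total_budget / total for t in tracked_per_camera]
    alloc = [int(s) for s in shares]
    rem = total_budget - sum(alloc)
    # hand out leftover slots to the largest fractional remainders
    for i in sorted(range(n), key=lambda i: shares[i] - alloc[i], reverse=True)[:rem]:
        alloc[i] += 1
    return alloc
```

Cameras looking at feature-poor scenes then automatically cede budget to cameras with richer views.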
no code implementations • 16 Dec 2021 • Lintong Zhang, Marco Camurri, David Wisth, Maurice Fallon
We present a multi-camera LiDAR inertial dataset of 4.5 km walking distance as an expansion of the Newer College Dataset.
no code implementations • 14 May 2022 • Sandipan Das, Navid Mahabadi, Saikat Chatterjee, Maurice Fallon
We propose a robust curb detection and filtering technique based on the fusion of camera semantics and dense lidar point clouds.
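The fusion step can be sketched as projecting lidar points into the camera image and keeping only those that land on pixels labelled as curb. Everything below is an assumption for illustration: `points_uv` is taken to be precomputed pixel coordinates, `semantic_mask` a per-pixel class-label image, and `curb_id` the curb class index.

```python
import numpy as np

def filter_curb_points(points_uv, semantic_mask, curb_id=1):
    """Keep lidar points whose image projection falls on curb pixels.

    Hedged sketch: `points_uv` is an (M, 2) array of (u, v) pixel
    coordinates for the lidar points, `semantic_mask` an (H, W) label
    image from the camera's semantic segmentation. Returns a boolean
    keep-mask over the M points.
    """
    h, w = semantic_mask.shape
    u = points_uv[:, 0].astype(int)  # column index
    v = points_uv[:, 1].astype(int)  # row index
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)  # reject off-image points
    keep = np.zeros(len(points_uv), dtype=bool)
    keep[inside] = semantic_mask[v[inside], u[inside]] == curb_id
    return keep
```

Points projecting outside the image are rejected rather than guessed at, which keeps the filter conservative.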
no code implementations • 15 May 2023 • Jonas Frey, Matias Mattamala, Nived Chebrolu, Cesar Cadena, Maurice Fallon, Marco Hutter
We demonstrate the advantages of our approach with experiments and ablation studies in challenging environments in forests, parks, and grasslands.
no code implementations • 26 Sep 2023 • Christina Kassab, Matias Mattamala, Lintong Zhang, Maurice Fallon
We use this representation for flexible room classification and segmentation, serving as a basis for room-centric place recognition.
no code implementations • 11 Mar 2024 • Yifu Tao, Yash Bhalgat, Lanke Frank Tarimo Fu, Matias Mattamala, Nived Chebrolu, Maurice Fallon
We present a neural-field-based large-scale reconstruction system that fuses lidar and vision data to generate high-quality reconstructions that are geometrically accurate and capture photo-realistic textures.
no code implementations • 21 Mar 2024 • Jianeng Wang, Matias Mattamala, Christina Kassab, Lintong Zhang, Maurice Fallon
We demonstrate the system's robustness to the challenges of typical periodic walking gaits, and its ability to construct accurate semantically-rich maps in indoor settings.
no code implementations • 10 Apr 2024 • Matías Mattamala, Jonas Frey, Piotr Libera, Nived Chebrolu, Georg Martius, Cesar Cadena, Marco Hutter, Maurice Fallon
Natural environments such as forests and grasslands are challenging for robotic navigation because high grass, twigs, or bushes can be falsely perceived as rigid obstacles.
no code implementations • 24 Apr 2024 • Russell Buchanan, S. Jack Tu, Marco Camurri, Stephen J. Mellon, Maurice Fallon
We propose Visual-Inertial Odometry (VIO) and deep-learning-based inertial-only odometry as alternatives to motion capture for tracking a handheld ultrasound scanner.