Search Results for author: Tristan Laidlow

Found 10 papers, 2 papers with code

Towards the Probabilistic Fusion of Learned Priors into Standard Pipelines for 3D Reconstruction

no code implementations • 27 Jul 2022 • Tristan Laidlow, Jan Czarnowski, Andrea Nicastro, Ronald Clark, Stefan Leutenegger

While systems that pass the output of traditional multi-view stereo approaches to a network for regularisation or refinement currently seem to get the best results, it may be preferable to treat deep neural networks as separate components whose results can be probabilistically fused into geometry-based systems.

3D Reconstruction
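The probabilistic fusion this abstract describes can be illustrated with the standard inverse-variance (precision) weighting of two independent depth estimates; the function and variable names below are illustrative, not taken from the paper:

```python
def fuse_depth(d_mvs, var_mvs, d_net, var_net):
    """Fuse a multi-view-stereo depth estimate with a network prediction
    by inverse-variance weighting (assuming independent Gaussian errors)."""
    w_mvs = 1.0 / var_mvs           # precision of the geometric estimate
    w_net = 1.0 / var_net           # precision of the learned prior
    d_fused = (w_mvs * d_mvs + w_net * d_net) / (w_mvs + w_net)
    var_fused = 1.0 / (w_mvs + w_net)
    return d_fused, var_fused

# The fused estimate is pulled toward the more confident source and is
# never less certain than either input.
d, var = fuse_depth(d_mvs=2.0, var_mvs=0.04, d_net=2.2, var_net=0.16)
```

Treating the network as a separate probabilistic sensor, as the abstract suggests, is what makes this kind of closed-form fusion possible inside a geometry-based pipeline.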

DeepFusion: Real-Time Dense 3D Reconstruction for Monocular SLAM using Single-View Depth and Gradient Predictions

no code implementations • 25 Jul 2022 • Tristan Laidlow, Jan Czarnowski, Stefan Leutenegger

While the keypoint-based maps created by sparse monocular simultaneous localisation and mapping (SLAM) systems are useful for camera tracking, dense 3D reconstructions may be desired for many robotic tasks.

3D Reconstruction

Dense RGB-D-Inertial SLAM with Map Deformations

no code implementations • 22 Jul 2022 • Tristan Laidlow, Michael Bloesch, Wenbin Li, Stefan Leutenegger

While dense visual SLAM methods are capable of estimating dense reconstructions of the environment, they suffer from a lack of robustness in their tracking step, especially when the optimisation is poorly initialised.

3D Reconstruction

BodySLAM: Joint Camera Localisation, Mapping, and Human Motion Tracking

no code implementations • 4 May 2022 • Dorian F. Henning, Tristan Laidlow, Stefan Leutenegger

Through a series of experiments on video sequences of human motion captured by a moving monocular camera, we demonstrate that BodySLAM improves estimates of all human body parameters and camera poses when compared to estimating these separately.

Simultaneous Localisation and Mapping with Quadric Surfaces

no code implementations • 15 Mar 2022 • Tristan Laidlow, Andrew J. Davison

Human-made environments contain a lot of structure, and we seek to take advantage of this by enabling the use of quadric surfaces as features in SLAM systems.
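A quadric surface can be represented by a symmetric 4×4 matrix Q, with a homogeneous 3D point x̃ lying on the surface exactly when x̃ᵀQx̃ = 0. A minimal sketch of that algebraic residual, with an illustrative sphere example (the matrix form is standard projective geometry, not code from the paper):

```python
def quadric_residual(Q, p):
    """Algebraic residual of a 3D point against a quadric surface:
    x̃ᵀ Q x̃, where x̃ = [x, y, z, 1] is the homogeneous point."""
    x = list(p) + [1.0]
    return sum(x[i] * Q[i][j] * x[j] for i in range(4) for j in range(4))

# Unit sphere centred at the origin: x² + y² + z² − 1 = 0
Q_sphere = [[1, 0, 0, 0],
            [0, 1, 0, 0],
            [0, 0, 1, 0],
            [0, 0, 0, -1]]

quadric_residual(Q_sphere, (1.0, 0.0, 0.0))  # point on the surface → 0.0
```

Because planes, spheres, and cylinders are all special cases of this one parameterisation, a SLAM system can treat them as a single feature type, which is presumably the structural advantage the abstract refers to.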

ILabel: Interactive Neural Scene Labelling

no code implementations • 29 Nov 2021 • Shuaifeng Zhi, Edgar Sucar, Andre Mouton, Iain Haughton, Tristan Laidlow, Andrew J. Davison

ILabel's underlying model is a multilayer perceptron (MLP) trained from scratch in real-time to learn a joint neural scene representation.

Semantic Segmentation
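The abstract describes the underlying model as an MLP trained from scratch in real time. A minimal sketch of that model's shape, mapping a 3D point to per-class logits with one ReLU hidden layer; the sizes, weights, and function names here are illustrative placeholders, not ILabel's actual architecture or training loop:

```python
import random

def mlp_forward(p, W1, b1, W2, b2):
    """One-hidden-layer MLP: 3D point -> per-class semantic logits."""
    # hidden layer with ReLU activation
    h = [max(0.0, sum(w * x for w, x in zip(row, p)) + b)
         for row, b in zip(W1, b1)]
    # linear output layer producing one logit per class
    return [sum(w * x for w, x in zip(row, h)) + b
            for row, b in zip(W2, b2)]

# Toy random weights: 3 inputs -> 8 hidden units -> 4 classes
random.seed(0)
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(8)]
b1 = [0.0] * 8
W2 = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(4)]
b2 = [0.0] * 4

logits = mlp_forward([0.1, 0.2, 0.3], W1, b1, W2, b2)
```

In the interactive setting the abstract describes, user clicks would supply the sparse labels used to fit such a network on the fly, jointly with the scene geometry.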

Coarse-to-Fine Q-attention: Efficient Learning for Visual Robotic Manipulation via Discretisation

1 code implementation • CVPR 2022 • Stephen James, Kentaro Wada, Tristan Laidlow, Andrew J. Davison

We present a coarse-to-fine discretisation method that enables the use of discrete reinforcement learning approaches in place of unstable and data-inefficient actor-critic methods in continuous robotics domains.

Continuous Control • Q-Learning • +1
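The coarse-to-fine discretisation idea can be sketched in one dimension: evaluate a Q-function over a coarse grid of cells, zoom into the best cell, and repeat, so a discrete argmax yields a progressively refined continuous action. This is a simplified toy, not the paper's 3D voxelised implementation:

```python
def coarse_to_fine_argmax(q_fn, lo, hi, bins=3, levels=3):
    """Coarse-to-fine action selection over [lo, hi]: at each level,
    evaluate q_fn at the centres of `bins` cells, then recurse into
    the best cell. Returns a refined continuous action."""
    for _ in range(levels):
        width = (hi - lo) / bins
        centres = [lo + (i + 0.5) * width for i in range(bins)]
        best = max(centres, key=q_fn)          # discrete argmax at this level
        lo, hi = best - width / 2, best + width / 2
    return (lo + hi) / 2

# Toy Q-function peaked at action 0.7
a = coarse_to_fine_argmax(lambda x: -(x - 0.7) ** 2, 0.0, 1.0)
```

With `bins` cells and `levels` refinements, the effective resolution is `bins ** levels` while each level only ever compares `bins` discrete choices, which is the data-efficiency argument the abstract makes against continuous actor-critic methods.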

SIMstack: A Generative Shape and Instance Model for Unordered Object Stacks

no code implementations • ICCV 2021 • Zoe Landgraf, Raluca Scona, Tristan Laidlow, Stephen James, Stefan Leutenegger, Andrew J. Davison

At test time, our model can generate 3D shape and instance segmentation from a single depth view, probabilistically sampling proposals for the occluded region from the learned latent space.

Instance Segmentation • Semantic Segmentation

In-Place Scene Labelling and Understanding with Implicit Scene Representation

no code implementations • ICCV 2021 • Shuaifeng Zhi, Tristan Laidlow, Stefan Leutenegger, Andrew J. Davison

Semantic labelling is highly correlated with geometry and radiance reconstruction, as scene entities with similar shape and appearance are more likely to come from similar classes.

Denoising • Super-Resolution

DeepFactors: Real-Time Probabilistic Dense Monocular SLAM

1 code implementation • 14 Jan 2020 • Jan Czarnowski, Tristan Laidlow, Ronald Clark, Andrew J. Davison

The ability to estimate rich geometry and camera motion from monocular imagery is fundamental to future interactive robotics and augmented reality applications.
