Search Results for author: John McCormac

Found 6 papers, 2 papers with code

InteriorNet: Mega-scale Multi-sensor Photo-realistic Indoor Scenes Dataset

no code implementations • 3 Sep 2018 • Wenbin Li, Sajad Saeedi, John McCormac, Ronald Clark, Dimos Tzoumanikas, Qing Ye, Yuzhong Huang, Rui Tang, Stefan Leutenegger

Datasets have gained an enormous amount of popularity in the computer vision community, from training and evaluation of Deep Learning-based methods to benchmarking Simultaneous Localization and Mapping (SLAM).

Simultaneous Localization and Mapping

Fusion++: Volumetric Object-Level SLAM

no code implementations • 25 Aug 2018 • John McCormac, Ronald Clark, Michael Bloesch, Andrew J. Davison, Stefan Leutenegger

Reconstructed objects are stored in an optimisable 6DoF pose graph which is our only persistent map representation.

Loop Closure Detection
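The abstract above describes storing reconstructed objects as nodes in an optimisable 6DoF pose graph. A minimal illustrative sketch of that data structure is below; the class and method names are hypothetical and this is not the Fusion++ implementation, just the general pattern of nodes holding 4x4 object poses and edges holding relative-pose measurements.

```python
import numpy as np

class PoseGraph:
    """Minimal 6DoF pose graph sketch (hypothetical, not the Fusion++ code):
    nodes hold 4x4 world-from-object poses, edges hold measured relative poses."""

    def __init__(self):
        self.nodes = {}   # object_id -> 4x4 pose matrix
        self.edges = []   # (id_a, id_b, 4x4 measured pose of b in a's frame)

    def add_object(self, object_id, pose):
        self.nodes[object_id] = np.asarray(pose, dtype=float)

    def add_measurement(self, id_a, id_b, relative_pose):
        self.edges.append((id_a, id_b, np.asarray(relative_pose, dtype=float)))

    def residual(self, id_a, id_b, relative_pose):
        # Discrepancy between the measured and the predicted relative pose;
        # an optimiser would adjust node poses to drive this toward identity.
        predicted = np.linalg.inv(self.nodes[id_a]) @ self.nodes[id_b]
        return np.linalg.inv(relative_pose) @ predicted

graph = PoseGraph()
graph.add_object("chair", np.eye(4))
table_pose = np.eye(4)
table_pose[0, 3] = 1.0                      # table sits 1 m along x from origin
graph.add_object("table", table_pose)
measurement = np.eye(4)
measurement[0, 3] = 1.0                     # measured chair-to-table transform
graph.add_measurement("chair", "table", measurement)
# A measurement consistent with the node poses yields an identity residual
print(np.allclose(graph.residual("chair", "table", measurement), np.eye(4)))  # True
```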

SceneNet RGB-D: Can 5M Synthetic Images Beat Generic ImageNet Pre-Training on Indoor Segmentation?

no code implementations • ICCV 2017 • John McCormac, Ankur Handa, Stefan Leutenegger, Andrew J. Davison

We compare the semantic segmentation performance of network weights produced from pre-training on RGB images from our dataset against generic VGG-16 ImageNet weights.

Instance Segmentation • Object Detection • +4

SceneNet RGB-D: 5M Photorealistic Images of Synthetic Indoor Trajectories with Ground Truth

1 code implementation • 15 Dec 2016 • John McCormac, Ankur Handa, Stefan Leutenegger, Andrew J. Davison

We introduce SceneNet RGB-D, expanding the previous work of SceneNet to enable large-scale photorealistic rendering of indoor scene trajectories.

3D Reconstruction • Depth Estimation • +6

SemanticFusion: Dense 3D Semantic Mapping with Convolutional Neural Networks

no code implementations • 16 Sep 2016 • John McCormac, Ankur Handa, Andrew Davison, Stefan Leutenegger

This not only produces a useful semantic 3D map, but we also show on the NYUv2 dataset that fusing multiple predictions leads to an improvement even in the 2D semantic labelling over baseline single frame predictions.
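The improvement described above comes from fusing many per-frame semantic predictions rather than trusting any single frame. A simplified sketch of such a fusion step is below; this is a generic Bayesian-style update over class probabilities, not SemanticFusion's own code (the paper performs the equivalent update per surfel on a dense 3D map).

```python
import numpy as np

def fuse_predictions(prob_maps):
    """Fuse a list of per-frame class-probability arrays (shape [..., C])
    by elementwise multiplication and renormalisation over classes.
    A simplified illustration of recursive Bayesian label fusion."""
    fused = np.ones_like(prob_maps[0])
    for p in prob_maps:
        fused = fused * p
        fused /= fused.sum(axis=-1, keepdims=True)  # renormalise over classes
    return fused

# Two noisy single-frame predictions over 3 classes for one pixel
frame1 = np.array([[0.5, 0.3, 0.2]])
frame2 = np.array([[0.6, 0.2, 0.2]])
fused = fuse_predictions([frame1, frame2])
print(fused)  # class 0 ends up with probability 0.75, above either frame alone
```

Multiplying probabilities sharpens agreement: two frames that each weakly favour class 0 combine into a fused estimate that favours it more strongly than either frame did.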


gvnn: Neural Network Library for Geometric Computer Vision

1 code implementation • 25 Jul 2016 • Ankur Handa, Michael Bloesch, Viorica Patraucean, Simon Stent, John McCormac, Andrew Davison

We introduce gvnn, a neural network library in Torch aimed towards bridging the gap between classic geometric computer vision and deep learning.

Image Reconstruction • Visual Odometry
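Bridging classic geometric vision and deep learning, as the gvnn abstract puts it, means making operations such as rigid-body transforms differentiable layers. One such operation is the SE(3) exponential map from a 6-vector twist to a 4x4 transform; the numpy sketch below is an illustration of the underlying maths, not gvnn's Torch API.

```python
import numpy as np

def se3_exp(xi):
    """SE(3) exponential map: twist xi = (v, w) -> 4x4 rigid transform.
    Illustrative numpy version of the kind of differentiable geometric
    layer a library like gvnn provides (names here are not gvnn's API)."""
    v, w = xi[:3], xi[3:]
    theta = np.linalg.norm(w)
    W = np.array([[0.0, -w[2], w[1]],
                  [w[2], 0.0, -w[0]],
                  [-w[1], w[0], 0.0]])  # skew-symmetric matrix of w
    if theta < 1e-8:
        R, V = np.eye(3), np.eye(3)     # small-angle limit
    else:
        # Rodrigues' formula for the rotation, plus the left Jacobian V
        R = (np.eye(3) + np.sin(theta) / theta * W
             + (1 - np.cos(theta)) / theta**2 * W @ W)
        V = (np.eye(3) + (1 - np.cos(theta)) / theta**2 * W
             + (theta - np.sin(theta)) / theta**3 * W @ W)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, V @ v
    return T

# Zero twist maps to the identity transform
print(np.allclose(se3_exp(np.zeros(6)), np.eye(4)))  # True
```

Because every step is a smooth function of the twist, the same computation can be expressed with autodiff tensors and trained end to end, which is the point of packaging such operations as network layers.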
