Visual Navigation

11 papers with code · Robots

State-of-the-art leaderboards

No evaluation results yet. Help compare methods by submitting evaluation metrics.

Latest papers with code

SplitNet: Sim2Sim and Task2Task Transfer for Embodied Visual Navigation

18 May 2019 · facebookresearch/splitnet

We propose SplitNet, a method for decoupling visual perception and policy learning (a minimal sketch of the idea follows this entry).

VISUAL NAVIGATION

12
18 May 2019
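
A minimal sketch of the decoupling idea above, assuming a PyTorch setup: a shared visual encoder produces features that a separate, swappable policy head consumes, so perception can be pretrained or transferred independently of the policy. All module names and sizes are illustrative, not the authors' architecture.

```python
import torch
import torch.nn as nn

class VisualEncoder(nn.Module):
    """Perception module: maps an RGB observation to a feature vector."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )

    def forward(self, rgb):
        return self.net(rgb)

class PolicyHead(nn.Module):
    """Policy module: maps encoder features to action logits."""
    def __init__(self, feat_dim=128, num_actions=4):
        super().__init__()
        self.fc = nn.Linear(feat_dim, num_actions)

    def forward(self, features):
        return self.fc(features)

# Because the two modules are decoupled, the encoder can be trained on
# visual objectives (or in one simulator) and reused while only a new
# policy head is learned for a new task.
encoder, policy = VisualEncoder(), PolicyHead()
obs = torch.randn(1, 3, 64, 64)              # dummy RGB observation
action_logits = policy(encoder(obs))
```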

An Open Source and Open Hardware Deep Learning-powered Visual Navigation Engine for Autonomous Nano-UAVs

10 May 2019 · pulp-platform/pulp-dronet

Nano-size unmanned aerial vehicles (UAVs), a few centimeters in diameter and with a sub-10 W total power budget, have so far been considered incapable of running sophisticated vision-based autonomous navigation software without external aid from base stations, ad-hoc local positioning infrastructure, and powerful external computation servers.

AUTONOMOUS NAVIGATION VISUAL NAVIGATION

220
10 May 2019

Scaling and Benchmarking Self-Supervised Visual Representation Learning

3 May 2019 · facebookresearch/fair_self_supervision_benchmark

Self-supervised learning aims to learn representations from the data itself without explicit manual supervision (an illustrative pretext-task snippet follows this entry).

OBJECT DETECTION REPRESENTATION LEARNING VISUAL NAVIGATION

170
03 May 2019
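
As a concrete illustration of learning "from the data itself", the snippet below builds a rotation-prediction pretext batch, a generic self-supervised task in which the only labels are transformations applied to the images; it is not the specific pretext task benchmarked in the paper.

```python
import torch

def rotation_pretext_batch(images):
    """Turn unlabeled images into a supervised batch for free: rotate each
    image by 0/90/180/270 degrees and use the rotation index as the label."""
    rotated, labels = [], []
    for img in images:                        # img: (C, H, W)
        for k in range(4):                    # k quarter-turns
            rotated.append(torch.rot90(img, k, dims=(1, 2)))
            labels.append(k)
    return torch.stack(rotated), torch.tensor(labels)

# Any image classifier trained on (x, y) learns features without manual
# labels; those features are the self-supervised representation.
images = torch.randn(8, 3, 32, 32)            # dummy unlabeled images
x, y = rotation_pretext_batch(images)
print(x.shape, y.shape)                       # (32, 3, 32, 32), (32,)
```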

The Regretful Agent: Heuristic-Aided Navigation through Progress Estimation

CVPR 2019 · chihyaoma/regretful-agent

As deep learning continues to make progress for challenging perception tasks, there is increased interest in combining vision, language, and decision-making.

DECISION MAKING VISION-LANGUAGE NAVIGATION VISUAL NAVIGATION

65
05 Mar 2019

Self-Monitoring Navigation Agent via Auxiliary Progress Estimation

ICLR 2019 · chihyaoma/selfmonitoring-agent

The Vision-and-Language Navigation (VLN) task entails an agent following navigational instructions in photo-realistic unknown environments (a minimal progress-monitoring sketch follows this entry).

NATURAL LANGUAGE VISUAL GROUNDING VISION-LANGUAGE NAVIGATION VISUAL NAVIGATION

66
10 Jan 2019
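
The entry above pairs the action policy with an auxiliary progress estimate. A minimal version of that pattern is sketched below, assuming a PyTorch model with a shared trunk and two heads; the observation dimensions, loss weight, and training target are illustrative only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfMonitoringPolicy(nn.Module):
    """Shared trunk with two outputs: next-action logits and an estimate
    of how much of the instruction/path has been completed (0..1)."""
    def __init__(self, obs_dim=256, num_actions=6):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU())
        self.action_head = nn.Linear(128, num_actions)
        self.progress_head = nn.Linear(128, 1)

    def forward(self, obs):
        h = self.trunk(obs)
        return self.action_head(h), torch.sigmoid(self.progress_head(h))

policy = SelfMonitoringPolicy()
obs = torch.randn(4, 256)                     # dummy fused vision+language features
logits, progress = policy(obs)

# Auxiliary regression loss on the progress output (the target could be,
# e.g., normalized distance covered); the 0.5 weight is an arbitrary choice.
target_progress = torch.rand(4, 1)
aux_loss = 0.5 * F.mse_loss(progress, target_progress)
```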

Learning to Learn How to Learn: Self-Adaptive Visual Navigation Using Meta-Learning

CVPR 2019 · allenai/savn

In this paper we study the problem of learning to learn at both training and test time in the context of visual navigation (a meta-learning skeleton is sketched after this entry).

META-LEARNING VISUAL NAVIGATION

48
03 Dec 2018
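
A rough sketch of the "learning to learn at training and test time" idea, assuming a MAML-style setup: an inner loop adapts a copy of the policy with a reward-free loss that can also be run at test time in a new scene. The loss below is a trivial stand-in, and the code is a first-order skeleton rather than the SAVN implementation.

```python
import copy
import torch
import torch.nn as nn

def reward_free_loss(policy, trajectory):
    """Stand-in for an objective that needs no reward signal, so it can be
    evaluated during inference; here it only penalizes inconsistent outputs
    on consecutive observations."""
    obs, next_obs = trajectory
    return ((policy(obs) - policy(next_obs)) ** 2).mean()

def adapt(policy, trajectory, inner_lr=0.01, steps=1):
    """Inner loop: clone the policy and take a few gradient steps on the
    reward-free loss. The same routine runs inside meta-training and again,
    unchanged, when the agent is dropped into a new test environment."""
    adapted = copy.deepcopy(policy)
    opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
    for _ in range(steps):
        loss = reward_free_loss(adapted, trajectory)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return adapted

policy = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 4))
trajectory = (torch.randn(16, 128), torch.randn(16, 128))
adapted_policy = adapt(policy, trajectory)    # first-order adaptation only
```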

Visual Representations for Semantic Target Driven Navigation

15 May 2018 · tensorflow/models

We propose using high-level semantic and contextual features, including segmentation and detection masks obtained from off-the-shelf state-of-the-art vision models, as observations, and a deep network to learn the navigation policy (a minimal observation pipeline is sketched after this entry).

DOMAIN ADAPTATION VISUAL NAVIGATION

54,130
15 May 2018
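
A minimal version of the observation pipeline described above: off-the-shelf segmentation and detection outputs (stubbed here with fake models) are stacked into per-class masks that replace raw pixels as the policy input. Class counts and shapes are illustrative.

```python
import numpy as np

def semantic_observation(rgb, segmenter, detector, num_classes=20):
    """Build the policy's observation from off-the-shelf vision outputs
    instead of raw pixels: one binary channel per semantic class, with
    detection masks merged in on top."""
    seg = segmenter(rgb)                      # (H, W) integer class labels
    det = detector(rgb)                       # list of (class_id, binary mask)
    h, w = seg.shape
    channels = np.zeros((num_classes, h, w), dtype=np.float32)
    for c in range(num_classes):
        channels[c] = (seg == c)
    for class_id, mask in det:
        channels[class_id] = np.maximum(channels[class_id], mask)
    return channels                           # input to the navigation policy

# Stubs standing in for real segmentation/detection networks.
fake_segmenter = lambda rgb: np.random.randint(0, 20, rgb.shape[:2])
fake_detector = lambda rgb: [(3, np.ones(rgb.shape[:2], dtype=np.float32))]
obs = semantic_observation(np.zeros((64, 64, 3)), fake_segmenter, fake_detector)
print(obs.shape)                              # (20, 64, 64)
```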

A 64mW DNN-based Visual Navigation Engine for Autonomous Nano-Drones

4 May 2018 · pulp-platform/pulp-dronet

As part of our general methodology, we discuss the software mapping techniques that enable the state-of-the-art deep convolutional neural network presented in [1] to be fully executed on-board within a strict 6 fps real-time constraint with no compromise in flight results, while all processing is done with only 64 mW on average (the per-frame energy budget is worked out after this entry).

AUTONOMOUS NAVIGATION VISUAL NAVIGATION

220
04 May 2018
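
A quick sanity check of the quoted figures: at 64 mW average power and the 6 fps real-time constraint, the energy spent per processed frame comes out to roughly 10.7 mJ.

```python
power_w = 0.064                    # 64 mW average processing power
fps = 6                            # real-time constraint from the abstract
energy_per_frame_mj = power_w / fps * 1e3
print(f"{energy_per_frame_mj:.1f} mJ per processed frame")   # ~10.7 mJ
```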

Cognitive Mapping and Planning for Visual Navigation

CVPR 2017 · tensorflow/models

The accumulated belief of the world enables the agent to track visited regions of the environment (a toy belief-map update is sketched after this entry).

VISUAL NAVIGATION

54,125
13 Feb 2017
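
The description above hinges on an accumulated belief that tracks visited regions. The toy sketch below keeps a top-down grid belief fused by element-wise maximum plus a visited-cell mask; it only illustrates the bookkeeping, not the paper's differentiable mapper or planner.

```python
import numpy as np

class BeliefMap:
    """Toy accumulated belief over a top-down grid, plus a visited mask."""
    def __init__(self, size=100):
        self.belief = np.zeros((size, size), dtype=np.float32)   # e.g. free-space confidence
        self.visited = np.zeros((size, size), dtype=bool)

    def update(self, local_obs, top_left, agent_cell):
        """Fuse a local observation (already registered to map coordinates)
        by element-wise max, and mark the agent's current cell as visited."""
        r, c = top_left
        h, w = local_obs.shape
        self.belief[r:r + h, c:c + w] = np.maximum(
            self.belief[r:r + h, c:c + w], local_obs)
        self.visited[agent_cell] = True

m = BeliefMap()
m.update(np.random.rand(10, 10).astype(np.float32),
         top_left=(45, 45), agent_cell=(50, 50))
print(int(m.visited.sum()), float(m.belief.max()))
```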