Visual Navigation

72 papers with code • 5 benchmarks • 15 datasets

Visual Navigation is the problem of steering an agent, such as a mobile robot, through an environment using camera input only. The agent is given a target image (the view it would see from the goal position), and its goal is to move from its current position to the target by executing a sequence of actions, deciding each action from its camera observations alone.

Source: Vision-based Navigation Using Deep Reinforcement Learning
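The target-driven setup described above reduces to an observation–action loop: observe, compare against the target view, act, repeat. The sketch below is purely illustrative; the toy grid "environment", the position-as-image observation, and the greedy policy are all hypothetical stand-ins for a real simulator and a learned policy:

```python
# Toy stand-in for a visual navigation loop. The "image" here is just
# the agent's (x, y) cell; a real system would render camera frames and
# query a learned policy instead. (All names below are hypothetical.)
ACTIONS = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

def observe(pos):
    return pos  # a real agent would return a camera image here

def greedy_policy(obs, target_obs):
    # Pick the action that moves the current view toward the target view.
    (x, y), (tx, ty) = obs, target_obs
    if x < tx: return "right"
    if x > tx: return "left"
    if y < ty: return "down"
    return "up"

def navigate(start, target, max_steps=50):
    pos = start
    for step in range(max_steps):
        obs = observe(pos)
        if obs == observe(target):  # stop when current view matches target view
            return step
        dx, dy = ACTIONS[greedy_policy(obs, observe(target))]
        pos = (pos[0] + dx, pos[1] + dy)
    return None  # episode failed within the step budget

print(navigate((0, 0), (3, 2)))  # reaches the target in 5 steps
```

The stopping test (current observation matches the target image) is what distinguishes this target-driven formulation from instruction-following variants such as Vision-and-Language Navigation below.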

Most implemented papers

Vision-and-Language Navigation: Interpreting visually-grounded navigation instructions in real environments

peteanderson80/Matterport3DSimulator CVPR 2018

This is significant because a robot interpreting a natural-language navigation instruction on the basis of what it sees is carrying out a vision-and-language process similar to Visual Question Answering.

Cognitive Mapping and Planning for Visual Navigation

tensorflow/models CVPR 2017

The accumulated belief of the world enables the agent to track visited regions of the environment.

A 64mW DNN-based Visual Navigation Engine for Autonomous Nano-Drones

pulp-platform/pulp-dronet 4 May 2018

As part of our general methodology, we discuss the software mapping techniques that enable the state-of-the-art deep convolutional neural network presented in [1] to execute fully on-board under a strict 6 fps real-time constraint, with no compromise in flight results and an average processing power of only 64 mW.
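The two figures quoted in the excerpt fix the per-frame budget. A quick back-of-the-envelope check, using only the 6 fps and 64 mW numbers from the abstract:

```python
fps = 6              # real-time constraint from the paper
avg_power_w = 0.064  # 64 mW average processing power

frame_period_ms = 1000 / fps                     # time budget per inference
energy_per_frame_mj = avg_power_w / fps * 1000   # energy per inference, in mJ

print(f"{frame_period_ms:.1f} ms per frame")      # ~166.7 ms
print(f"{energy_per_frame_mj:.1f} mJ per frame")  # ~10.7 mJ
```

So the full perception pipeline must fit in roughly 167 ms and about 11 mJ per frame, which is what makes the nano-drone setting challenging.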

Visual Representations for Semantic Target Driven Navigation

tensorflow/models 15 May 2018

We propose using high-level semantic and contextual features, including segmentation and detection masks obtained from off-the-shelf state-of-the-art vision models, as observations, and train a deep network to learn the navigation policy.
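The idea of feeding per-class masks to the policy instead of (or alongside) raw pixels can be sketched as channel stacking. The class list, mask generator, and shapes below are illustrative only and do not come from the paper:

```python
# Hypothetical sketch: build a policy observation by stacking one binary
# H x W mask per semantic class, as produced by an off-the-shelf
# detector/segmenter. Class names and the mask pattern are made up.
H, W = 4, 4
classes = ["door", "chair", "table"]

def fake_mask(cls):
    # Stand-in for a segmentation model's per-class binary mask.
    k = classes.index(cls)
    return [[1 if (r + c) % len(classes) == k else 0 for c in range(W)]
            for r in range(H)]

# Observation = one channel per semantic class; a real system would
# concatenate these channels with the RGB frame before the policy network.
observation = [fake_mask(cls) for cls in classes]

print(len(observation), len(observation[0]), len(observation[0][0]))  # 3 4 4
```

The point of the design is that such masks abstract away texture and lighting, so a policy trained on them can transfer across visually different scenes more easily than one trained on raw pixels.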

The Regretful Agent: Heuristic-Aided Navigation through Progress Estimation

chihyaoma/regretful-agent CVPR 2019

As deep learning continues to make progress for challenging perception tasks, there is increased interest in combining vision, language, and decision-making.

Sim2Real Predictivity: Does Evaluation in Simulation Predict Real-World Performance?

facebookresearch/habitat-api 13 Dec 2019

Second, we investigate the sim2real predictivity of Habitat-Sim for PointGoal navigation.

Target-driven Visual Navigation in Indoor Scenes using Deep Reinforcement Learning

shamanez/Target-Driven-Visual-Navigation-with-Distributed-PPO 16 Sep 2016

To address the second issue, we propose the AI2-THOR framework, which provides an environment with high-quality 3D scenes and a physics engine.

Learning to Learn How to Learn: Self-Adaptive Visual Navigation Using Meta-Learning

allenai/savn CVPR 2019

In this paper we study the problem of learning to learn at both training and test time in the context of visual navigation.

Self-Monitoring Navigation Agent via Auxiliary Progress Estimation

chihyaoma/selfmonitoring-agent ICLR 2019

The Vision-and-Language Navigation (VLN) task entails an agent following navigation instructions in photo-realistic, unknown environments.

Learning Exploration Policies for Navigation

taochenshh/exp4nav ICLR 2019

Numerous past works have tackled the problem of task-driven navigation.