
Visual Navigation

11 papers with code · Robots

State-of-the-art leaderboards

No evaluation results yet. Help compare methods by submitting evaluation metrics.

Latest papers without code

Bayesian Relational Memory for Semantic Visual Navigation

10 Sep 2019

We introduce a new memory architecture, Bayesian Relational Memory (BRM), to improve the generalization ability for semantic visual navigation agents in unseen environments, where an agent is given a semantic target to navigate towards.

VISUAL NAVIGATION

Help, Anna! Visual Navigation with Natural Multimodal Assistance via Retrospective Curiosity-Encouraging Imitation Learning

4 Sep 2019

An agent solving tasks in a HANNA environment can leverage simulated human assistants, called ANNA (Automatic Natural Navigation Assistants), which, upon request, provide natural language and visual instructions to direct the agent towards the goals.

DECISION MAKING IMITATION LEARNING VISUAL NAVIGATION

Improving Visual Feature Extraction in Glacial Environments

27 Aug 2019

We took a custom camera rig to Igloo Cave at Mt.

VISUAL NAVIGATION

Situational Fusion of Visual Representation for Visual Navigation

24 Aug 2019

A complex visual navigation task puts an agent in different situations which call for a diverse range of visual perception abilities.

VISUAL NAVIGATION

VUSFA: Variational Universal Successor Features Approximator to Improve Transfer DRL for Target Driven Visual Navigation

18 Aug 2019

In this paper, we show how novel transfer reinforcement learning techniques can be applied to the complex task of target-driven navigation using the photorealistic AI2THOR simulator.

TRANSFER REINFORCEMENT LEARNING VISUAL NAVIGATION

Vision-based Navigation Using Deep Reinforcement Learning

8 Aug 2019

However, the application of deep RL to visual navigation with realistic environments is a challenging task.

VISUAL NAVIGATION

Visual Navigation by Generating Next Expected Observations

17 Jun 2019

Second, the latent space is modeled with a Mixture of Gaussians conditioned on the current observation and next best action.
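As a rough illustration of that idea, a latent distribution conditioned on an observation and an action can be sketched as a Gaussian mixture whose parameters are produced from the conditioning input. This is a minimal NumPy sketch with unit-variance components; the dimensions, random linear "heads", and sampling scheme are illustrative assumptions, not the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

OBS_DIM, ACT_DIM, LATENT_DIM, K = 8, 4, 16, 5  # K mixture components (all dims assumed)

# Illustrative linear heads mapping (observation, action) to mixture parameters.
W_pi = rng.normal(size=(OBS_DIM + ACT_DIM, K))
W_mu = rng.normal(size=(OBS_DIM + ACT_DIM, K * LATENT_DIM))

def sample_latent(obs, action):
    """Sample a latent code from a Gaussian mixture conditioned on (obs, action)."""
    x = np.concatenate([obs, action])
    logits = x @ W_pi
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()                     # mixture weights via softmax
    mus = (x @ W_mu).reshape(K, LATENT_DIM)      # per-component means
    k = rng.choice(K, p=weights)                 # pick a component
    return mus[k] + rng.normal(size=LATENT_DIM)  # unit-variance Gaussian sample

z = sample_latent(rng.normal(size=OBS_DIM), rng.normal(size=ACT_DIM))
print(z.shape)
```

In a trained model the linear heads would be replaced by a learned network and the component variances would also be predicted, but the conditioning structure is the same.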

VISUAL NAVIGATION

Scene Memory Transformer for Embodied Agents in Long-Horizon Tasks

CVPR 2019

Many robotic applications require the agent to perform long-horizon tasks in partially observable environments.

DECISION MAKING VISUAL NAVIGATION

Learning to Learn How to Learn: Self-Adaptive Visual Navigation Using Meta-Learning

CVPR 2019

In this paper we study the problem of learning to learn at both training and test time in the context of visual navigation.

META-LEARNING VISUAL NAVIGATION