VNLA

1 paper with code · Computer Vision
Subtask of Robot Navigation

Find objects in photorealistic environments by requesting and executing language subgoals.

State-of-the-art leaderboards

No evaluation results yet. Help compare methods by submitting evaluation metrics.

Greatest papers with code

Vision-based Navigation with Language-based Assistance via Imitation Learning with Indirect Intervention

CVPR 2019 debadeepta/vnla

We present Vision-based Navigation with Language-based Assistance (VNLA), a grounded vision-language task where an agent with visual perception is guided via language to find objects in photorealistic indoor environments.
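The task's core loop pairs navigation with on-demand language assistance: the agent moves toward the target and, when uncertain, requests a language subgoal from an advisor and executes it. The following is a minimal toy sketch of that request-and-execute loop, not the paper's implementation; `ToyEnv`, `request_subgoal`, and all thresholds are hypothetical stand-ins.

```python
import random


class ToyEnv:
    """Hypothetical 1-D stand-in for a photorealistic indoor environment."""

    def __init__(self, goal=5):
        self.pos = 0
        self.goal = goal

    def found_target(self):
        return self.pos == self.goal

    def step(self, action):
        # Only "forward" moves the agent in this toy setting.
        if action == "forward":
            self.pos += 1


def request_subgoal(env):
    """Hypothetical advisor: returns a short action sequence standing in
    for a language subgoal such as 'go down the hall, then turn left'."""
    return ["forward"] * (env.goal - env.pos)


def navigate(env, uncertainty_threshold=0.5, help_budget=3, seed=0):
    """Request-and-execute loop: act greedily, but when 'uncertain'
    (simulated by a coin flip here) and within the help budget,
    request a subgoal from the advisor and execute it."""
    rng = random.Random(seed)
    requests = 0
    for _ in range(20):  # episode step limit
        if env.found_target():
            return True, requests
        if rng.random() < uncertainty_threshold and requests < help_budget:
            requests += 1
            for action in request_subgoal(env):
                env.step(action)
        else:
            env.step("forward")
    return env.found_target(), requests
```

The budget on help requests mirrors the task's constraint that assistance is a limited resource the agent must learn to spend wisely.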
