VNLA

1 paper with code • 0 benchmarks • 0 datasets

Find objects in photorealistic environments by requesting and executing language subgoals.

Most implemented papers

Vision-based Navigation with Language-based Assistance via Imitation Learning with Indirect Intervention

debadeepta/vnla • CVPR 2019

We present Vision-based Navigation with Language-based Assistance (VNLA), a grounded vision-language task where an agent with visual perception is guided via language to find objects in photorealistic indoor environments.
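To make the task setup concrete, below is a minimal sketch of the VNLA interaction loop: an agent navigates toward a goal object and, within a limited help budget, requests language subgoals from an advisor when it is uncertain. This is an illustrative toy, not the debadeepta/vnla API; all class and method names (`ToyEnvironment`, `Advisor`, `Agent`, etc.) are hypothetical placeholders.

```python
import random

class ToyEnvironment:
    """Stand-in for a photorealistic indoor simulator (hypothetical)."""
    def __init__(self, steps_to_goal=10):
        self.remaining = steps_to_goal

    def reset(self):
        return "initial-view"

    def step(self, action):
        # Each step moves the agent; done when the goal is reached.
        self.remaining -= 1
        return "view", self.remaining <= 0

class Advisor:
    """Simulated advisor that answers help requests with a subgoal."""
    def assist(self, goal, view):
        # VNLA-style advisors issue short language subgoals,
        # e.g. "turn left and go to the end of the hallway".
        return f"head toward the {goal}"

class Agent:
    """Agent with a fixed budget of help requests per episode."""
    def __init__(self, help_budget=3):
        self.help_budget = help_budget

    def is_uncertain(self, view):
        # Placeholder: real agents learn when to ask for help.
        return random.random() < 0.3

    def act(self, view, subgoal):
        # Placeholder policy conditioned on the current subgoal.
        return "forward"

def run_episode(env, agent, advisor, goal, max_steps=50):
    view = env.reset()
    subgoal = None
    for _ in range(max_steps):
        if agent.help_budget > 0 and agent.is_uncertain(view):
            subgoal = advisor.assist(goal, view)  # request language help
            agent.help_budget -= 1
        view, done = env.step(agent.act(view, subgoal))
        if done:
            return True  # goal object found
    return False

if __name__ == "__main__":
    found = run_episode(ToyEnvironment(), Agent(), Advisor(), goal="mug")
    print("object found:", found)
```

The key design point the sketch illustrates is that help is a scarce resource: the agent must decide *when* to ask, and the advisor replies with language subgoals rather than low-level actions, which the agent must still ground and execute itself.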