Room-Across-Room: Multilingual Vision-and-Language Navigation with Dense Spatiotemporal Grounding

We introduce Room-Across-Room (RxR), a new Vision-and-Language Navigation (VLN) dataset. RxR is multilingual (English, Hindi, and Telugu) and larger (more paths and instructions) than other VLN datasets. It emphasizes the role of language in VLN by addressing known biases in paths and eliciting more references to visible entities. Furthermore, each word in an instruction is time-aligned to the virtual poses of instruction creators and validators. We establish baseline scores for monolingual and multilingual settings and multitask learning when including Room-to-Room annotations. We also provide results for a model that learns from synchronized pose traces by focusing only on portions of the panorama attended to in human demonstrations. The size, scope and detail of RxR dramatically expand the frontier for research on embodied language agents in simulated, photo-realistic environments.
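Because each instruction word is time-aligned to the annotator's virtual pose, a consumer of these traces needs per-word pose records and a way to restrict a panorama to the region the annotator was looking at. The sketch below illustrates both ideas; the field names (`pano_id`, `heading`, etc.) and the fixed field-of-view assumption are hypothetical and may differ from RxR's released format.

```python
import math
from dataclasses import dataclass
from typing import List

@dataclass
class TimedWord:
    """One instruction word, time-aligned to the annotator's virtual pose.
    Field names are illustrative, not RxR's released schema."""
    word: str        # instruction token
    start_s: float   # seconds into the recording when the word begins
    end_s: float     # seconds when it ends
    pano_id: str     # Matterport3D panorama the annotator occupied
    heading: float   # camera heading in radians
    pitch: float     # camera pitch in radians

def words_at_pano(trace: List[TimedWord], pano_id: str) -> List[str]:
    """Words spoken while the annotator stood in a given panorama,
    in temporal order."""
    return [w.word for w in sorted(trace, key=lambda w: w.start_s)
            if w.pano_id == pano_id]

def attended_mask(view_headings: List[float], annotator_heading: float,
                  fov: float = math.radians(60.0)) -> List[bool]:
    """Mask over discretized panorama view angles: True where a view lies
    within an assumed field of view centered on the annotator's heading."""
    def wrap(a: float) -> float:
        # Wrap an angle difference into [-pi, pi).
        return (a + math.pi) % (2.0 * math.pi) - math.pi
    return [abs(wrap(h - annotator_heading)) <= fov / 2.0
            for h in view_headings]
```

A mask like this is one way a model could "focus only on portions of the panorama attended to in human demonstrations": panorama features falling outside the mask are zeroed or down-weighted during training.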

EMNLP 2020

Datasets


Introduced in the Paper:

RxR

Used in the Paper:

Matterport3D, Localized Narratives
Task                            Dataset   Model                   Metric   Value   Global Rank
Vision and Language Navigation  RxR       Monolingual Baseline    nDTW     41.05   #5
Vision and Language Navigation  RxR       Multilingual Baseline   nDTW     36.81   #6
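The leaderboard metric is nDTW (normalized Dynamic Time Warping; Ilharco et al., 2019), which rewards agent paths that closely track the reference path: nDTW = exp(-DTW(R, Q) / (|R| · d_th)), where d_th is the success threshold distance (commonly 3 m in Matterport3D environments). The sketch below is a minimal illustration of that definition, not the official RxR evaluation code.

```python
import numpy as np

def ndtw(reference, query, success_threshold=3.0):
    """Normalized Dynamic Time Warping between two paths.

    reference, query: arrays of shape (n, 2) or (n, 3), positions in metres.
    Returns a score in (0, 1]; higher means the query path follows
    the reference path more closely.
    """
    reference = np.asarray(reference, dtype=float)
    query = np.asarray(query, dtype=float)
    n, m = len(reference), len(query)

    # dtw[i, j] = minimal cumulative cost aligning reference[:i] to query[:j]
    dtw = np.full((n + 1, m + 1), np.inf)
    dtw[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(reference[i - 1] - query[j - 1])
            dtw[i, j] = cost + min(dtw[i - 1, j],
                                   dtw[i, j - 1],
                                   dtw[i - 1, j - 1])

    # Normalize by reference length and threshold, then squash into (0, 1].
    return float(np.exp(-dtw[n, m] / (n * success_threshold)))
```

For identical paths the score is 1.0; it decays exponentially as the agent's path drifts from the reference, so the 41.05 and 36.81 entries above correspond to scores of 0.4105 and 0.3681 on this scale, reported as percentages.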

Methods


No methods listed for this paper.