Reinforced Cross-Modal Matching and Self-Supervised Imitation Learning for Vision-Language Navigation

CVPR 2019 · Xin Wang, Qiuyuan Huang, Asli Celikyilmaz, Jianfeng Gao, Dinghan Shen, Yuan-Fang Wang, William Yang Wang, Lei Zhang

Vision-language navigation (VLN) is the task of navigating an embodied agent to carry out natural language instructions inside real 3D environments. In this paper, we study how to address three critical challenges for this task: cross-modal grounding, ill-posed feedback, and generalization.


Evaluation results from the paper


| Task | Dataset | Model | Metric | Value | Global rank |
|---|---|---|---|---|---|
| Vision-Language Navigation | Room2Room | RCM + SIL | SPL | 0.59 | #1 |
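The reported metric, SPL (Success weighted by Path Length, Anderson et al., 2018), is the standard VLN evaluation measure: it credits an episode only on success, discounted by how much longer the agent's path was than the shortest path to the goal. A minimal sketch of the computation (the `episodes` tuple layout here is illustrative, not from the paper):

```python
def spl(episodes):
    """Success weighted by Path Length.

    episodes: list of (success, shortest_path_len, taken_path_len) tuples.
    Each successful episode contributes shortest / max(taken, shortest);
    failed episodes contribute 0. The result is averaged over all episodes.
    """
    total = 0.0
    for success, shortest, taken in episodes:
        if success:
            total += shortest / max(taken, shortest)
    return total / len(episodes)

# Two episodes: one success via a slightly longer path, one failure.
print(spl([(True, 10.0, 12.5), (False, 8.0, 20.0)]))  # 0.4
```

An SPL of 0.59 on Room2Room therefore means the agent's paths are both successful and reasonably efficient, since detours reduce the score even on successful episodes.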