Analyzing Generalization of Vision and Language Navigation to Unseen Outdoor Areas

ACL 2022 · Raphael Schumann, Stefan Riezler

Vision and language navigation (VLN) is a challenging visually-grounded language understanding task. Given a natural language navigation instruction, a visual agent interacts with a graph-based environment equipped with panorama images and tries to follow the described route. Most prior work has been conducted in indoor scenarios, where the best results were obtained on routes similar to the training routes, with sharp drops in performance when testing on unseen environments. We focus on VLN in outdoor scenarios and find that, in contrast to indoor VLN, most of the gain in outdoor VLN on unseen data is due to features such as the junction type embedding or the heading delta that are specific to the respective environment graph, while image information plays a very minor role in generalizing VLN to unseen outdoor areas. These findings reveal a bias towards the specifics of graph representations of urban environments, and call for VLN tasks to grow in scale and diversity of geographical environments.
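To make the two graph-specific features concrete, here is a minimal sketch of how a junction type embedding and a heading delta could be computed per navigation step. All names, dimensions, and the exact normalization are illustrative assumptions, not the authors' implementation; the actual feature definitions are in the paper.

```python
import torch
import torch.nn as nn


class GraphFeatures(nn.Module):
    """Sketch of the two environment-graph features discussed above:
    a junction type embedding (indexed by node out-degree) and a
    heading delta (change in agent heading between consecutive steps).
    Hypothetical names and dimensions, not the ORAR source code."""

    def __init__(self, max_degree: int = 8, embed_dim: int = 16):
        super().__init__()
        # one embedding vector per junction type (= number of outgoing edges)
        self.junction_embedding = nn.Embedding(max_degree + 1, embed_dim)

    def forward(self, out_degree: torch.Tensor,
                prev_heading_deg: torch.Tensor,
                curr_heading_deg: torch.Tensor) -> torch.Tensor:
        # junction type feature: embed the current node's out-degree
        junction_feat = self.junction_embedding(out_degree)
        # heading delta: signed angle change, wrapped to (-180, 180],
        # then scaled to [-1, 1] (the scaling is an assumption)
        delta = (curr_heading_deg - prev_heading_deg + 180.0) % 360.0 - 180.0
        delta_feat = (delta / 180.0).unsqueeze(-1)
        # concatenate into one per-timestep feature vector
        return torch.cat([junction_feat, delta_feat], dim=-1)


if __name__ == "__main__":
    feats = GraphFeatures()
    # agent turns from heading 90° to 135° at a node with 3 outgoing edges
    x = feats(torch.tensor([3]), torch.tensor([90.0]), torch.tensor([135.0]))
    print(x.shape)  # torch.Size([1, 17])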

Results

All entries are for the Vision and Language Navigation task; the metric is Task Completion (TC).

| Dataset   | Model                                | Task Completion (TC) | Rank |
|-----------|--------------------------------------|----------------------|------|
| map2seq   | ORAR + junction type + heading delta | 46.7                 | #1   |
| map2seq   | ORAR                                 | 45.1                 | #2   |
| map2seq   | Gated Attention                      | 17                   | #3   |
| map2seq   | RCONCAT                              | 14.7                 | #4   |
| Touchdown | ORAR + junction type + heading delta | 29.1                 | #1   |
| Touchdown | ORAR                                 | 24.2                 | #2   |
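Task Completion values are the percentage of evaluation episodes in which navigation succeeds. Below is a minimal sketch of the success criterion, assuming the standard Touchdown-style definition (the agent stops at the goal node or one of its immediate neighbors); the function and argument names are hypothetical.

```python
def task_completion(final_node: int, goal_node: int,
                    neighbors: dict[int, set[int]]) -> bool:
    """Episode-level success: the agent's final node is the goal
    node itself or directly adjacent to it in the environment graph."""
    return final_node == goal_node or final_node in neighbors[goal_node]

# The reported TC score is the success rate over all test episodes:
# tc = 100 * sum(task_completion(f, g, nbrs) for f, g in episodes) / len(episodes)
```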
