Generating Landmark Navigation Instructions from Maps as a Graph-to-Text Problem

ACL 2021 · Raphael Schumann, Stefan Riezler

Car-focused navigation services are based on turns and distances of named streets, whereas navigation instructions naturally used by humans are centered around physical objects called landmarks. We present a neural model that takes OpenStreetMap representations as input and learns, from human natural language instructions, to generate navigation instructions that contain visible and salient landmarks. Routes on the map are encoded in a location- and rotation-invariant graph representation that is decoded into natural language instructions. Our work is based on a novel dataset of 7,672 crowd-sourced instances that have been verified by human navigation in Street View. Our evaluation shows that the navigation instructions generated by our system have properties similar to those of human-generated instructions, and lead to successful human navigation in Street View.
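
To make the location- and rotation-invariant route encoding concrete, here is a minimal sketch in Python. It is a hypothetical illustration, not the authors' code: it represents a route by segment lengths and relative turn angles instead of absolute map coordinates, so translating or rotating the whole route leaves the representation unchanged. The function name `relative_turns` and the example route are assumptions made for this sketch.

```python
import math

def relative_turns(nodes):
    """nodes: list of (x, y) map coordinates along a route.
    Returns a list of (segment_length, turn_angle_deg) pairs that is
    invariant to translating or rotating the whole route."""
    steps = []
    prev_heading = None
    for (x0, y0), (x1, y1) in zip(nodes, nodes[1:]):
        dx, dy = x1 - x0, y1 - y0
        length = math.hypot(dx, dy)
        heading = math.atan2(dy, dx)
        if prev_heading is None:
            turn = 0.0  # the first segment defines the reference direction
        else:
            # wrap the heading difference into (-180, 180] degrees
            turn = math.degrees(heading - prev_heading)
            turn = (turn + 180.0) % 360.0 - 180.0
        steps.append((length, turn))
        prev_heading = heading
    return steps

# Example: an L-shaped route; shifting or rotating the input coordinates
# produces the same output.
route = [(0.0, 0.0), (0.0, 10.0), (5.0, 10.0)]
print(relative_turns(route))  # [(10.0, 0.0), (5.0, -90.0)]
```

In the paper the invariant route representation is consumed by a graph-to-text model that decodes it into natural language; this sketch only shows why such an encoding is independent of where on the map the route lies and how it is oriented.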


Datasets


Introduced in the Paper:

map2seq

Used in the Paper:

Talk the Walk

Results

Task: Natural Language Landmark Navigation Instructions Generation
Dataset: map2seq
Model: graph2text+pretrain
Metric: SNT = 66.4
Global Rank: #1

Methods


No methods listed for this paper.