An Unsupervised Joint System for Text Generation from Knowledge Graphs and Semantic Parsing

Knowledge graphs (KGs) can vary greatly from one domain to another. Supervised approaches to both graph-to-text generation and text-to-graph knowledge extraction (semantic parsing) will therefore always suffer from a shortage of domain-specific parallel graph-text data; at the same time, adapting a model trained on a different domain is often impossible because entities and relations overlap little or not at all. This situation calls for an approach that (1) does not need large amounts of annotated data and thus (2) does not need to rely on domain adaptation techniques to work well in different domains. To this end, we present the first approach to unsupervised text generation from KGs and show simultaneously how it can be used for unsupervised semantic parsing. We evaluate our approach on WebNLG v2.1 and a new benchmark leveraging scene graphs from Visual Genome. Our system outperforms strong baselines on both text↔graph conversion tasks without any manual adaptation from one dataset to the other. In additional experiments, we investigate the impact of using different unsupervised objectives.
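The result tables below distinguish "sampled noise" and "composed noise" variants of the model. In unsupervised sequence-to-sequence training, such noise functions corrupt the input for denoising and back-translation objectives. The sketch below is an illustrative Python implementation of three common corruptions (word drop, local swap, blanking) and the two ways of combining them; the paper's exact noise functions, parameters, and names are not given here, so everything in this block is an assumption.

```python
import random

def drop_tokens(tokens, p=0.1, rng=random):
    """Word dropout: delete each token independently with probability p."""
    kept = [t for t in tokens if rng.random() >= p]
    return kept or tokens[:1]  # never return an empty sequence

def swap_tokens(tokens, k=3, rng=random):
    """Local shuffle: each token moves by less than k positions."""
    keys = [i + rng.uniform(0, k) for i in range(len(tokens))]
    return [t for _, t in sorted(zip(keys, tokens), key=lambda x: x[0])]

def blank_tokens(tokens, p=0.1, blank="<blank>", rng=random):
    """Replace each token with a placeholder with probability p."""
    return [blank if rng.random() < p else t for t in tokens]

def composed_noise(tokens, rng=random):
    """Apply all corruptions in sequence ("composed" noise, assumed)."""
    return blank_tokens(swap_tokens(drop_tokens(tokens, rng=rng), rng=rng), rng=rng)

def sampled_noise(tokens, rng=random):
    """Sample one corruption per example ("sampled" noise, assumed)."""
    fn = rng.choice([drop_tokens, swap_tokens, blank_tokens])
    return fn(tokens, rng=rng)
```

During training, a denoising autoencoder would reconstruct the clean sequence from its corrupted version, and back-translation would pair model-generated graphs with their source texts (and vice versa).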

Published at EMNLP 2020.
Task                                Dataset        Model                   Metric  Value  Rank
Unsupervised KG-to-Text Generation  VG graph-text  GT-BT (composed noise)  BLEU    23.2   #1
Unsupervised Semantic Parsing       VG graph-text  GT-BT (composed noise)  F1      21.7   #1
Unsupervised KG-to-Text Generation  WebNLG v2.1    GT-BT (sampled noise)   BLEU    37.7   #1
Unsupervised Semantic Parsing       WebNLG v2.1    GT-BT (sampled noise)   F1      39.1   #1
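The semantic-parsing rows report F1, which for graph extraction is typically computed over predicted versus gold (subject, relation, object) triples. A minimal sketch of such a triple-level F1 with exact set matching follows; the paper's actual matching criteria (e.g. partial or normalized matches) may differ, so this is illustrative only.

```python
def triple_f1(pred_triples, gold_triples):
    """F1 between predicted and gold (subject, relation, object) triples,
    using exact set matching (an assumption about the evaluation)."""
    pred, gold = set(pred_triples), set(gold_triples)
    tp = len(pred & gold)  # triples that match exactly
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```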

