16 papers with code • 11 benchmarks • 9 datasets
Knowledge-graph-to-text (KG-to-text) generation aims to generate high-quality texts that are consistent with the input graphs.
Generating texts that express complex ideas spanning multiple sentences requires a structured representation of their content (a document plan), but such representations are prohibitively expensive to produce manually.
We show that the PLMs BART and T5 achieve new state-of-the-art results and that task-adaptive pretraining strategies improve their performance even further.
Most previous work on neural text generation from graph-structured data relies on standard sequence-to-sequence methods.
Recent graph-to-text models generate text from graph-based data using either global or local aggregation to learn node representations.
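The distinction between local and global aggregation can be made concrete with a toy sketch. The following is illustrative only, not any specific paper's model: node features are plain lists and the "aggregation" is a uniform mean, standing in for the learned, attention-weighted layers real models use.

```python
# Contrast local vs. global aggregation for node representations on a toy graph.
# Assumption: a simple element-wise mean stands in for a learned aggregator.

def mean(vectors):
    """Element-wise mean of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def local_aggregate(features, adjacency, node):
    """Local: combine a node's feature with its immediate neighbors only."""
    neighborhood = [features[node]] + [features[m] for m in adjacency[node]]
    return mean(neighborhood)

def global_aggregate(features, node):
    """Global: every node contributes to every representation (uniform weights here)."""
    return mean(list(features.values()))

# Toy graph: edges 0-1 and 1-2, so node 2 is not adjacent to node 0.
features = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 1.0]}
adjacency = {0: [1], 1: [0, 2], 2: [1]}

print(local_aggregate(features, adjacency, 0))  # uses nodes 0 and 1 only
print(global_aggregate(features, 0))            # uses all three nodes
```

The trade-off the snippet hints at: local aggregation keeps the graph's connectivity explicit (node 2 never influences node 0 in one step), while global aggregation captures long-range interactions at the cost of blurring structure.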
In this work, we focus on a more realistic setting where we aim to generate questions from a KG subgraph and target answers.
Previous work on knowledge-to-text generation takes as input a few RDF triples or key-value pairs that convey knowledge about some entities, and generates a natural language description from them.
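Before a sequence-to-sequence model can consume such triples, they are typically linearized into a flat token sequence. A minimal sketch of that step follows; the special tokens `<H>`, `<R>`, `<T>` are one common convention, assumed here for illustration rather than prescribed by any single paper.

```python
# Minimal sketch: linearize (head, relation, tail) triples into one input
# string for a seq2seq model. The <H>/<R>/<T> marker tokens are an assumption.

def linearize(triples):
    """Flatten a list of (head, relation, tail) triples into a single string."""
    parts = []
    for head, relation, tail in triples:
        parts.append(f"<H> {head} <R> {relation} <T> {tail}")
    return " ".join(parts)

triples = [
    ("Alan Turing", "birthPlace", "London"),
    ("Alan Turing", "field", "computer science"),
]
print(linearize(triples))
# <H> Alan Turing <R> birthPlace <T> London <H> Alan Turing <R> field <T> computer science
```

In practice the marker tokens are added to the model's vocabulary as special tokens so they are never split by subword tokenization.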
We propose knowledge-grounded pre-training (KGPT), which consists of two parts: 1) a general knowledge-grounded generation model to generate knowledge-enriched text, and 2) a pre-training paradigm on a massive knowledge-grounded text corpus crawled from the web.
This paper studies how to automatically generate natural language text that describes the facts in a knowledge graph (KG).