12 papers with code • 10 benchmarks • 7 datasets
Knowledge-graph-to-text (KG-to-text) generation aims to produce high-quality text that is consistent with the input graph.
Generating text that expresses complex ideas spanning multiple sentences requires a structured representation of its content (a document plan), but such representations are prohibitively expensive to produce manually.
We show that the pre-trained language models (PLMs) BART and T5 achieve new state-of-the-art results, and that task-adaptive pretraining strategies improve their performance even further.
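As a rough illustration of this recipe, the sketch below fine-tunes T5 on a single linearized graph-text pair using Hugging Face transformers. The `<H>/<R>/<T>` linearization markers, the toy triples, and the hyperparameters are illustrative assumptions, not the exact setup of any particular paper; real pipelines train on full datasets such as WebNLG and usually register the markers as special tokens.

```python
# Minimal sketch: fine-tune T5 on one linearized (graph, text) pair.
# All data and the linearization format below are illustrative.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Toy KG triples; real datasets (e.g. WebNLG) provide thousands of pairs.
triples = [("Alan_Bean", "occupation", "astronaut"),
           ("Alan_Bean", "mission", "Apollo_12")]
source = "translate Graph to Text: " + " ".join(
    f"<H> {h} <R> {r} <T> {t}" for h, r, t in triples)
target = "Alan Bean was an astronaut who flew on Apollo 12."

inputs = tokenizer(source, return_tensors="pt")
labels = tokenizer(target, return_tensors="pt").input_ids

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
model.train()
loss = model(**inputs, labels=labels).loss  # standard seq2seq cross-entropy
loss.backward()
optimizer.step()
```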
Recent graph-to-text models generate text from graph-based data using either global or local aggregation to learn node representations.
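To make the global/local distinction concrete, here is a minimal PyTorch sketch: local aggregation restricts each node to its adjacency-defined neighbours (GNN-style), while global aggregation lets every node attend to all others regardless of edges (Transformer-style). The names, shapes, and data are invented for the example and do not reproduce any specific model.

```python
# Toy contrast of local vs global aggregation for node representations.
import torch

def local_aggregate(node_feats, adj):
    # Each node averages the features of its graph neighbours only.
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
    return adj @ node_feats / deg

def global_aggregate(node_feats):
    # Each node attends to every node, ignoring the edge structure.
    scores = node_feats @ node_feats.T / node_feats.size(-1) ** 0.5
    return torch.softmax(scores, dim=-1) @ node_feats

x = torch.randn(4, 8)  # 4 nodes with 8-dim features
adj = torch.tensor([[0., 1., 0., 0.],
                    [1., 0., 1., 0.],
                    [0., 1., 0., 1.],
                    [0., 0., 1., 0.]])  # a simple path graph
h_local = local_aggregate(x, adj)    # neighbourhood-restricted
h_global = global_aggregate(x)       # fully connected
```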
In this work, we focus on a more realistic setting where we aim to generate questions from a KG subgraph and target answers.
Previous work on knowledge-to-text generation takes as input a few RDF triples or key-value pairs conveying knowledge about some entities and generates a natural language description.
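For concreteness, the snippet below shows the two input flavours such systems typically consume, a handful of RDF triples versus entity-centric key-value pairs, alongside the description they should verbalize. The entities and field names are made up for the example.

```python
# Illustrative input formats for knowledge-to-text generation (invented data).

# (a) RDF triples: (subject, predicate, object)
rdf_input = [
    ("Marie_Curie", "birthPlace", "Warsaw"),
    ("Marie_Curie", "award", "Nobel_Prize_in_Physics"),
]

# (b) Key-value pairs about one entity, e.g. from a Wikipedia infobox
kv_input = {
    "name": "Marie Curie",
    "birth_place": "Warsaw",
    "award": "Nobel Prize in Physics",
}

# Target: a natural language description verbalizing the same knowledge
target = "Marie Curie, born in Warsaw, won the Nobel Prize in Physics."
```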
We propose knowledge-grounded pre-training (KGPT), which consists of two parts: 1) a general knowledge-grounded generation model to generate knowledge-enriched text, and 2) a pre-training paradigm on a massive knowledge-grounded text corpus crawled from the web.
This paper studies how to automatically generate natural language text that describes the facts in a knowledge graph (KG).
Existing pre-trained models for knowledge-graph-to-text (KG-to-text) generation simply fine-tune text-to-text pre-trained models such as BART or T5 on KG-to-text datasets, an approach that largely ignores the graph structure during encoding and lacks elaborate pre-training tasks to explicitly model graph-text alignments.