Given information from a knowledge base, generate a natural-language description of that information.
Graph-to-text generation has benefited from pre-trained language models (PLMs) in achieving better performance than structured graph encoders. We show that the PLMs BART and T5 achieve new state-of-the-art results and that task-adaptive pretraining strategies improve their performance even further.

Ranked #1 on KB-to-Language Generation on WebNLG
Ranked #1 on Data-to-Text Generation on WebNLG (using extra training data)
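To make the task concrete, here is a minimal sketch of KB-to-text generation with a seq2seq PLM via the Hugging Face Transformers library. The checkpoint name (`t5-small`), the task prefix, and the `<H>/<R>/<T>` triple-linearization scheme are illustrative assumptions, not the exact setup reported on the leaderboard; a raw `t5-small` checkpoint has not been fine-tuned on WebNLG, so its output here is only a placeholder.

```python
# Sketch: verbalize knowledge-base triples with a pre-trained seq2seq model.
# Assumptions: t5-small stands in for a WebNLG-fine-tuned checkpoint, and the
# <H>/<R>/<T> markers are one common way to linearize graph triples.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "t5-small"  # in practice, use a checkpoint fine-tuned on WebNLG
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Linearize (subject, predicate, object) triples into a flat input string.
triples = [
    ("Alan_Bean", "occupation", "Test_pilot"),
    ("Alan_Bean", "nationality", "United_States"),
]
linearized = " ".join(f"<H> {h} <R> {r} <T> {t}" for h, r, t in triples)

# The task prefix is an assumption; fine-tuned models expect whatever
# prefix (if any) was used during training.
inputs = tokenizer("translate Graph to Text: " + linearized, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

With a suitably fine-tuned model, the expected output is a fluent sentence such as "Alan Bean is a test pilot from the United States."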